This reminds me of the story of the US army moving from canvas to metal helmets. To their surprise, they found that the number of head injuries went up, and not down.
Why? Because the metal helmets converted fatalities to head injuries.
Under the risk compensation theory, helmeted cyclists may be expected to ride less carefully; this is supported by evidence for other road safety interventions such as seat belts and anti-lock braking systems. Anecdotally, many riders report feeling safer with a helmet: "When I wear it, I feel safe..." One researcher randomized his helmet use over a year of commuting to work and found that he rode slightly faster with a helmet.
Motorists may also alter their behavior toward helmeted cyclists. One small study from England found that vehicles passed a helmeted cyclist with measurably less clearance (8.5 cm) than that given to the same cyclist unhelmeted (out of an average total passing distance of 1.2 to 1.3 metres).
According to John Forester, being struck by an overtaking car is among the least likely bicycle accidents, even though it is the rationale used to justify mandatory bike lane usage. http://www.johnforester.com/
His book, Effective Cycling, is a great way to improve bicycle safety.
Reminds me of Neal Stephenson's character whose strategy for bike safety at night was to assume he was wearing reflective clothing and that everybody in a car would be paid a million dollars to kill him.
Another strategy is to actually wear OSHA-certified hyper-reflective clothing, such that anyone would assume that a driver who hit him must have been paid a million dollars to do it.
A friend of mine does this; he has a whole closet full of orange and silver sweatshirts that he wears every day. The last time he got hit by a car was years ago, but when the cops showed up they took one look at him and arrested the driver before he'd uttered a word.
Here in England, everybody wears that terrible neon reflective stuff every time they leave the house. Bicycle optional in many cases. They have no shame here.
The downside is that it makes me comparatively less easy to spot.
Another strategy is to assume the drivers are wearing hyper-reflective clothing and that you've been paid a million dollars to hit them. Someone try that out and report back.
I couldn't for the life of me remember the name of the book or the character - I do remember perfectly the gas-masked fish on the cover of the mass-market paperback, though.
His book is a mixed bag. His thoughts about how to ride are good, but his conclusion in favor of wide curbside lanes and against bike lanes did lots of damage, as many locales/planners used his book as justification for not putting in bike lanes. This thinking was difficult to unwind. Of course bike lane use shouldn't be mandatory, and it isn't in California, at least. I don't know any cyclists here who aren't grateful for the great bike lanes we have, thanks mainly to good, codified standards for their implementation. [I think I read his book 20 years ago :-)]
The closest I've ever been to being killed on a bicycle was in a traffic configuration where the presence of a bike lane completely upset the rules of the road. It was in Gainesville, Florida which has bike lanes everywhere and where drivers are familiar with people using them.
While I appreciate that some people favor bike lanes, in my experience riding in the traffic lane as a vehicle is safer than accepting secondary status in a separate lane. And the technique is not dependent on infrastructure...and saying, "But California does it," won't help the cause in most places. [I bought the book about 20 years ago, recently returned to it now that I'm teaching my son to ride]
Do you have any citations for that? I'm not doubting you, but genuinely curious (and ambivalent about helmets).
There seems to be a pervasive attitude in the US that if you're a cyclist hit by a car when you're not wearing a helmet, you're automatically at fault, because you're not being responsible. Which is weird, because the risk isn't coming from the bikes... and there's a good chance that it makes cycling seem more dangerous than it is, which makes cycling more dangerous (by making safe cycling techniques more obscure, making it less likely that drivers know how cyclists are likely to behave, etc.).
As a data point: I've been in two collisions in eight-ish years of cycling as my primary transportation (in and around Grand Rapids, Michigan). The first time resulted in a skinned knee and elbow, the second time a bad mood, a bent wheel, and a hurried (but passed) Islamic history exam.
I don't think urban cycling is particularly risky, once you know what you're doing. I feel more in control cycling than when I'm driving on ice (physics!), and I've been driving here for a decade. (Fixed-gear bikes are particularly stable on ice, though.)
Can't remember the name but an experiment by a prof from UCL showed that cars approached him more closely when he was wearing a helmet - but they gave him more space when he was wearing a long blonde wig.
So the safest solution is a transvestite without a helmet.
American drivers' attitudes may differ from UK/Australian drivers', especially regionally (i.e. rural Texas vs. New York City). I'm curious how drivers would behave around bikes with a (scientifically fake) baby/child seat, with a man or a woman riding.
My experience (in the urban midwest) has been that most drivers are agreeable as long as you ride predictably (not weaving erratically, running red lights, etc.), but drivers high on testosterone should be given space - whether it's a red car full of teenagers with something to prove or (in my worst case) a skeezy, ponytailed, balding 40-something man with a considerably younger woman in his convertible. That guy tailgated me for over a mile on a clear two-lane street, trying to prove something (?). I turned off just to let him have his stupid moment and get on with my life.
In places I have lived where more people cycle, the drivers have a better attitude. Cambridge, Amsterdam and Vancouver are great to cycle in, London reasonable, smaller industrial cities where cycling is rare = terrible.
I used to commute on a motorbike, and the cars to be very careful of were:
SUV with mom in the front and a kid in the back, BMW with a guy wearing a suit, hot hatchback with 4 teenagers in it - you just gave any of those a wide berth!
This is why I hate people quoting sociology studies. A comparison between wearing helmets and wearing long blonde wigs gets summarized as:
> research has shown that cars drive closer to people with helmets than they do to people without them
I've taken to just assuming any surprising result from sociology is bullshit unless I really trust the source or have thoroughly investigated the methodology.
Helmets turn fatalities into head injuries but also head injuries into non-head injuries. You need the real statistics for each case before making a judgment.
Also, a metal helmet has the obvious problem that there's no damping for an impact. A Styrofoam helmet absorbs shock.
I'm not talking theory but facts. Metal helmets increased head injuries substantially, and substantially reduced fatalities from bullets and shrapnel to the head.
Would you mind fleshing that out? Just posting two words and linking to wikipedia doesn't add much to the conversation, and people who care about maintaining the level of discourse on HN might heavily downvote you.
Suppose that we want to study the underlying genetics of a disease. To do this, we want to look at some people with the disease (cases) and some people without (controls). The trouble is that if the disease makes people more likely to die, they won't be enrolled in your study.
This is of particular concern for fast moving diseases like pancreatic cancer. In general this effect manifests as a bias against people with more severe disease -- occasionally these are exactly the people you want to study.
The simplest way to get around this is with an alternative study design. Rather than ascertaining cases and controls, you enroll a large number of individuals in a cohort. After that, you sit back and track the cohort for decades and watch for disease to emerge.
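A toy simulation of the enrollment effect described above (all of the numbers here are invented for illustration, not taken from any real study): if severe cases die quickly, a case/control study that only enrolls living patients will understate how common severe disease really is.

```python
import random

random.seed(1)

# Hypothetical population: half the diseased individuals have severe disease.
population = [{"severe": random.random() < 0.5} for _ in range(10_000)]

def survives_to_enrollment(person):
    # Assumption for illustration: severe disease kills 80% of patients
    # before a case/control study can ascertain them, mild disease 20%.
    death_prob = 0.8 if person["severe"] else 0.2
    return random.random() > death_prob

# A case/control study only ever sees the survivors.
case_control_sample = [p for p in population if survives_to_enrollment(p)]

true_frac = sum(p["severe"] for p in population) / len(population)
sample_frac = sum(p["severe"] for p in case_control_sample) / len(case_control_sample)
print(f"severe fraction, true population:     {true_frac:.2f}")
print(f"severe fraction, case/control sample: {sample_frac:.2f}")
```

The ascertained sample skews heavily toward mild disease, which is exactly why a cohort followed forward in time (seeing everyone, including those who later die quickly) can answer questions a case/control design cannot.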
A famous example of this is the Religious Orders Study (http://www.rush.edu/rumc/page-1099611542043.html). A large number of Catholic brothers and sisters graciously agreed to participate in a study of Alzheimer's disease. The researchers recruited a cohort of non-demented clergy and have been tracking them for years, performing annual cognitive tests on the participants to assess mental decline. All participants also consented to post-mortem brain donation. It would be impossible to get this sort of data with a case/control study.
Another type of ascertainment bias is population stratification which can generate all sorts of misleading results. Imagine that you're a scientist in Boston and you want to study sickle cell anemia. You phone up a doctor friend and say "Please send me the next 10 sickle cell cases and the next 10 non-sickle cell cases." After spending $300,000 and a year and a half on the project you find some great mutations. Ten minutes later you notice that all the markers you found are strongly associated with being African-American. Your cases included 9 persons of African ancestry and one of Mediterranean ancestry, while your controls matched the particular demographic blend that you'd expect to find around Boston. Oops.
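The stratification trap can be sketched in a few lines (again, every number here is made up for illustration): give a completely neutral marker different frequencies in two populations, tie the disease to population membership rather than to the marker, and a naive case/control comparison will "discover" the marker anyway.

```python
import random

random.seed(0)

def has_marker(pop):
    # A neutral marker: common in population A, rare in population B.
    # It has no effect whatsoever on the disease.
    return random.random() < (0.8 if pop == "A" else 0.1)

# Cases are mostly population A (because the disease is concentrated there);
# controls match the local demographic blend, mostly population B.
cases = [has_marker("A" if random.random() < 0.9 else "B") for _ in range(1000)]
controls = [has_marker("A" if random.random() < 0.1 else "B") for _ in range(1000)]

print(f"marker frequency in cases:    {sum(cases) / len(cases):.2f}")
print(f"marker frequency in controls: {sum(controls) / len(controls):.2f}")
```

The marker looks strongly "associated" with the disease, but it is really just tracking ancestry - the scenario in the sickle cell example above.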
I hope that helps.
p.s.
Is anyone in London looking for smart people? E-mail me. ;)
Similar to the famous Boston cemetery study. It shows that the age at death of people in a cemetery drops linearly as you approach the current date, whereas people buried 100 years ago had all died in their 60s-80s.
This bias tends to come up frequently on HN (often as a reminder that this community shares more articles about businesses that succeed than those that fail), so many may recognize it (and upvote it) --- it's sort of like saying tl;dr and summarizing the article in its own way.
I think there's value in pointing out a specific topic that might otherwise be difficult to discover. Sure, it would be even better to summarize it a bit, but it's still valuable enough.
When google released some OR software recently that hit the HN front page [http://news.ycombinator.com/item?id=1724580] I followed various links to this particular wikipedia page and then moved on to other things. When the "armor where there are no bullet holes" stories started popping up I assumed that this was the event that triggered the "hey this is cool" reaction from a few people who started the ball rolling...
You must be careful when applying this sort of logic because it assumes that what happens to a plane after it's shot is entirely deterministic, i.e. if it gets shot in the tail fin it crashes, if it gets shot in the wing it doesn't. That may or may not be accurate for planes (I really don't know), but it may not be for other things.
Suppose anywhere the plane was shot led to a 1/5 chance of the plane crashing, meaning that all places are equally deserving of armor. Suppose also that the wings comprised about 75% of the surface area of the plane. You'd see 3 planes returning with bullet holes in a wing for every 1 with a bullet hole somewhere else. That doesn't mean somewhere else is a better place to put the armor.
It's easy to see how it could be possible: if the odds of a crash weren't uniform (i.e. if the wings had a slightly higher than average chance of causing a crash when shot, in my example above), you could easily come to the wrong conclusion.
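The hypothetical above is easy to check with a quick Monte Carlo sketch (using the same assumed numbers: every hit location equally lethal at 1/5, wings covering 75% of the surface, and, for simplicity, one hit per plane):

```python
import random

random.seed(42)

CRASH_PROB = 0.2      # every hit location is equally lethal
WING_FRACTION = 0.75  # wings make up 75% of the surface area

returned_wing_hits = 0
returned_other_hits = 0

for _ in range(100_000):
    hit_wing = random.random() < WING_FRACTION
    crashed = random.random() < CRASH_PROB
    if not crashed:  # we only ever get to inspect the survivors
        if hit_wing:
            returned_wing_hits += 1
        else:
            returned_other_hits += 1

ratio = returned_wing_hits / returned_other_hits
print(f"wing hits per other hit among survivors: {ratio:.2f}")
# Roughly 3 wing hits per other hit -- wings look "hit more" purely
# because they're bigger, not because they're a better or worse
# place to be hit.
```

So the 3:1 ratio among returning planes tells you about surface area, not about where the armor should go.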
Think of the scale. 40,000 US/British planes were lost or damaged beyond repair, never mind the ones that took a few bullet/flak holes and survived - which was most planes, every mission. Thousands of planes would meet for a single bombing run. One mission would give you a good enough sample.
Also note that between 5 and 20% of the planes didn't come back from each mission. How many more were damaged every run? Most? 15,000 flak cannons protected Europe, many of them radar guided/aimed. Strategic bombing was an incredibly crappy job: you died more often than a Marine in the Pacific.
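To put those per-mission loss rates in perspective, here's a quick compounding calculation (assuming a 25-mission tour and independent missions - both simplifications I'm adding, not figures from the thread):

```python
# Chance of a crew surviving a full tour if each mission is an
# independent draw (a simplification) at a fixed per-mission loss rate.
TOUR_LENGTH = 25  # assumed standard tour length

for loss_rate in (0.05, 0.20):
    p_survive = (1 - loss_rate) ** TOUR_LENGTH
    print(f"{loss_rate:.0%} loss/mission -> "
          f"{p_survive:.1%} chance of finishing the tour")
```

Even the optimistic end of that range (5% per mission) leaves well under a one-in-three chance of finishing a tour, which is consistent with the "incredibly crappy job" description above.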
> Suppose anywhere the plane was shot led to a 1/5 chance of the plane crashing, meaning that all places are equally deserving of armor. Suppose also that the wings comprised about 75% of the surface area of the plane. You'd see 3 planes returning with bullet holes in a wing for every 1 with a bullet hole somewhere else.
So do your study with the plane divided up into equal-sized sections. Or look for clustering and divide it based on that.
Yea. The key assumption for the "put the armor where the bullets aren't" is that the initial bullets are spread evenly over the surface area (or, rather, that the bullets are spread evenly over the cross-section, which you then average over the typical orientation of the plane as it takes enemy fire). This turns out to be a good assumption because planes typically take highly dispersed enemy fire compared to the size of the plane.
On the other hand, if the enemy had extremely accurate guns and (say, for visibility reasons) always shot at and hit particular parts of the plane (say, the parts of the wings next to the necessarily shiny propellers) then the naive reasoning would be correct: the areas that were bullet-ridden on returning planes would be the ones that should be armored.
Reminds me of the common belief that dolphins push drowning people to the shore. It's hard to check whether it's actually true; after all, most people pushed in the opposite direction are most likely dead.
There was an article about this in the paper recently, specifically about dolphins protecting swimmers from sharks. The scientist said, dolphins are curious about you because you're warm-blooded like they are. They usually have their kids with them, so they will make sure the area is shark-free for that reason. If you don't get rescued before the dolphins get bored, they'll leave you to the sharks without a second's thought...
It's a nice story, but the distribution of the bullet holes in the illustration ( http://motherjones.com/files/images/blog_raf_bullet_holes.jp... ) looks a bit too neat to me. I wouldn't expect the bullet holes to be placed exactly so that a cursory glance makes it mind-blowingly obvious where the weak spots are, in a neatly symmetrical way. I'd expect a bit of frowning and thinking and calculating to be needed to figure that out.
So, for an article about 'obvious but wrong' conclusions, I think the illustration is kind of deceiving... unless I'm wrong!
In reference to the illustration in question, the article says "the result was a graphic that looked something like the image below" - in other words, it's made up.
> count up all the bullet holes in various places, and then put extra armor in the areas that attracted the most fire.
>
> Obvious but wrong
Please tell me I'm not the only one who instantly thought "obviously wrong" - and that the obvious solution is to look at what got shot down (or something similar, such as what the article listed).
Many, many of the stories you read about insight turn out to be quite exaggerated, with everybody at the time understanding something which after many retellings turns into an amazing breakthrough by one person. Hell, it happens all the time at work, with a bunch of people screaming, "We need to do X, we need to do X," for months on end before they give up and just grumble bitterly about it over beer, and then after a mysterious attitude shift in the leadership, the next guy who suggests X (usually someone who hasn't been around long enough to know better) is hailed as a genius.
I actually got to be that guy once. I was randomly venting in a meeting about a chronic problem and the ridiculously easy fix to it, and there happened to be a business guy there who did not want his project held up by the BS problem I was talking about, so he demanded that the fix be implemented. It happened. I was amazed. I didn't cash in on the genius part, though.
The physicist Freeman Dyson was doing similar work for the RAF (I had heard this story attributed to him).
Two of his calculations weren't acted on, because the people using them were being illogical.
Very few bombers crashed because of airframe failure as opposed to enemy attack. Logically it would make sense to make the aircraft weaker but lighter/cheaper/faster: the loss of a few percent crashing due to breakups would be balanced by being better able to escape enemies.
Gunners on bombers very rarely hit incoming fighters, so it would make sense to remove some or all of the gunners and instead train them as pilots and build more aircraft. The RAF's main problem was losing skilled crews, and gunners were just extra wasted human resources when a plane was shot down.
The wrinkle in this story is that both conclusions were eventually discovered by others as well, acted upon, and eventually became the dominant paradigms by the end of the war (Dyson was doing operations research for RAF Bomber Command and claims to have made the suggestion in regards to cutting weight on Lancasters). Some of the most successful planes in the war embodied this philosophy before Dyson even enlisted. The most successful light bomber in the war was the de Havilland Mosquito, which was made primarily of wood, usually carried no machine guns, and could outrun every other plane in the sky until the arrival of jet engines.
It's not completely illogical to want to protect the places with holes, because the distribution of where bullets hit isn't random. Fighter pilots aimed at specific parts of planes they were attacking. And some surfaces would have been protected either by other surfaces or by having a nearly flat angle of incidence to probable incoming fire.
I thought this too, but then realized the humans doing the shooting are also illogical. For instance, I could see how hitting the wings might feel effective and easy, but it's not a great strategy because the bullet holes don't really impact the flight capability.
How many WW2 fighter pilots were actually good enough shots that they could aim at part of a plane? I always figured that the difference between a good gunner and a bad gunner was whether they hit the plane at all.
In every instance I've read about they aimed at specific parts. Presumably the difference between the good ones and the bad ones was whether they were able to hit what they were aiming at, but they all appear to have at least tried.
Was this because of their incredibly short ammunition reserves? I heard something recently about Spitfires having only about 20 seconds' worth of machine gun fire.
Partly, and in the case of Spitfires and Hurricanes (never forget the Hurricanes -- they weren't nearly as pretty, but they were every bit as effective) partly because of the limited structural damage that the .303 round could do. Later variants were more heavily armed, but the Battle of Britain was fought primarily with Brownings chambered for the .303 British. That pretty much left pilots, the cylinders of radial engines and pressurized fuel and oil systems as the vulnerable points. (Those with .50 Brownings and 20mm auto cannon could afford to shoot at other things, but a fuselage hit still rarely did more than add ventilation.)
For bombers that's the main cause of aircraft losses; fighters of WWII were pretty ineffective, especially at night.
It's difficult to believe a WWII-era fighter diving into a stream of bombers at night, while the multiple gun turrets on hundreds of bombers were returning fire, was making precise tactical decisions about which part of the airframe to target.
The nice thing (statistically) about AA fire is that it's rather uniformly random.
Mostly against primitive fighters, particularly on the Russian front. Assuming similar exaggeration of successes by each side's publicity offices, you can probably divide the figures by 2 or 4.
Even if all of these had been against heavy night bombers, it's a tiny part of the losses.
The RAF didn't fly in formation during night bombing attacks, rather they flew in "streams" often hundreds of miles long. So there wouldn't have been hundreds of turrets to bring to bear on a single fighter.
I think you're also overestimating how easy it would be to spot a blacked-out fighter at night. A popular German night fighter tactic was to sneak up beneath a bomber and then pitch the nose of the fighter up sharply while firing, raking the bomber's belly with cannon fire.
Many fighters were equipped with guns mounted in a vertical configuration called Schräge Musik. The guns would fire automatically when a magnetometer detected an enemy bomber was overhead.
I think that was the result of Dyson's study - a gunner never saw a fighter, so they could just as effectively be removed or replaced with fixed guns.
And in Wald's research, a night fighter still had some difficulty finding and approaching a bomber even with ground radar - so it fired (using proximity fuses or upward-firing guns) at essentially random points. There was no statistical bias from the fighter pilot targeting specific systems.
Well, the article is specifically about ack-ack fire. I guess from a statistical perspective, adding enough armour to prevent fighter bullets from killing you would be a bad trade-off.
Unfortunately the enemy are rarely cooperative in allowing your aircrash investigators to visit the site and examine crashed aircraft.
You could examine enemy bombers that you had shot down, but apart from the design and vulnerabilities being different, you would miss those that had exploded leaving little wreckage, or that had been crippled and crashed over the sea on the way back. You would still select for a set of damage that allowed an enemy plane to crash reasonably intact.
My immediate thought upon seeing the headline was "that seems counterintuitive," but that's because I assumed they were analyzing planes that were shot down or nearly so.
Seeing how frequently punishing people following below average performance and rewarding them for above average performance is used as a managerial/coaching/teaching technique...yes.
I guess an interesting point to think about: of those 1000 or so shots on the wing, what about the one that finally hit the wire connecting the aileron to the yoke? You might have plenty that hit around the wing, but one that takes out that wire and it could be over. The sweeping logic being used doesn't cover those cases.
You aren't taking the circumstances into account. The question these guys got wasn't to optimize plane safety; it was more like "we will be building 1000 more of these planes next week - what should we do differently to make them better?" There simply wasn't time for nuance.
You can't just say (bullet hole here) + (returned plane) = (don't need as much armor there). You also need to understand the circumstances of how the plane returned. Was it crippled, and did it just barely make it back with critical systems straining? Or was it mostly functional?
If there was a common place where planes were shot that caused the airplane to fly home for an emergency landing, then that spot is a good place for armor.
Not necessarily. I'm assuming adding armor incurs some form of debt, so to speak, on performance, cost, development time, etc. In some cases those debts might outweigh the benefit of the armor...
The related factors of weight, speed, and fuel consumption would be the obvious downsides to adding armor. Especially considering that a heavier, slower plane is an easier target, thicker skinned or not.
There's a fine balancing point between just enough and too much armor.
Startup lesson: focus on what functions, what your customers use, and your core competencies, not on fixing things nobody is using anyway.
An example of a company I think not following this advice: Pivotal Labs. Why are they not building premium options into Pivotal Tracker instead of building WebOS apps?
The only black swan connection I see is Taleb's point about "the graveyard of silent evidence". Absence of evidence is not evidence of absence. We didn't see all the planes returning that had catastrophic failures in the known bullet-holed areas and never returned.
Isn't that what the article is saying? "Don't forget the planes that didn't return", which is silent evidence because there's no evidence of its absence