> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.
Humans regularly mess up in supposedly safe scenarios. Consider a machine that kills everyone in those 1% edge cases (which are in reality less frequent than 1%) and drives perfectly 99% of the time. I hypothesise it would still outperform humans.
Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.
> I'd hypothesise that a machine that kills everyone in those 1% edge cases (which are actually less frequent than 1%) but drives perfectly 99% of the time would still outperform humans.
Well, no.
Some quick googling suggests that the fatality rate right now is roughly 1 per 100 million miles. A typical car covers something like 200,000 miles over its lifetime, so for a handoff that means certain fatality to be an improvement, it would have to happen less than about once per 500 car lifetimes. In other words, the car would, for all practical purposes, have to be self-driving.
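Back-of-the-envelope, in code form (the 200,000-mile lifetime is an assumed round number for illustration, not a looked-up figure):

```python
# How rare must a "certain fatality" handoff be to beat the current
# human fatality rate? The 200,000-mile car lifetime is an assumption.

miles_per_fatality = 100_000_000    # ~1 fatality per 100 million miles driven
miles_per_car_lifetime = 200_000    # assumed lifetime mileage of one car

car_lifetimes_per_fatal_handoff = miles_per_fatality / miles_per_car_lifetime
print(f"At most one certain-fatality handoff per "
      f"{car_lifetimes_per_fatal_handoff:.0f} car lifetimes")   # -> 500
```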
"Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers."
The part that really bothers me (for some reason) is that those edge cases are frequently extremely mundane, uninteresting driving situations that even a child could resolve. They simply confuse the computer, for whatever reason.
I'm genuinely interested to see how consumers react to a reality wherein their overall driving safety is higher, but their odds of being killed (or killing others) are spread evenly across all driving environments.
Imagine the consumer (and driving habits) response to the first occasion wherein a self-driving car drives itself nicely through a 25 mph neighborhood, comes to a smooth stop at a stop sign, and then drives right over the kid in the crosswalk that you're smiling and waving at. Turns out the kid's coat was shimmering weirdly against the sunlight. Or whatever.
> making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.
You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.
I hear you trying to make a Trolley Problem argument, but that is not the issue here. L2 is dependent on humans serving as a reliable backup.
> You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.
I understand the concern. I am saying the problem of slow return from periods of extended inattention is not significant in comparison to general human ineptitude.
Level 2 systems may rely on "humans serving as a reliable backup," but they won't always need their humans at a moment's notice. Predicting failure modes and then (a) giving ample warning before handing over control, (b) taking a default action, e.g. pulling over, and/or (c) refusing to drive when those conditions are likely all emerge as possible solutions.
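Purely as an illustration (the names and thresholds below are invented for this comment, not any real autopilot API), such a supervisory policy might look like:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    CONTINUE = auto()            # keep driving autonomously
    WARN_AND_HANDOVER = auto()   # (a) ample warning, then hand control back
    PULL_OVER = auto()           # (b) default safe action
    REFUSE_TO_DRIVE = auto()     # (c) don't engage at all in these conditions


@dataclass
class Prediction:
    failure_likely: bool            # does the system expect to hit its limits?
    seconds_until_failure: float    # how far ahead it can see the problem
    conditions_out_of_scope: bool   # e.g. heavy snow before the trip starts


def choose_action(p: Prediction, warning_budget_s: float = 10.0) -> Action:
    """Toy supervisory policy; thresholds are made up for illustration."""
    if p.conditions_out_of_scope:
        return Action.REFUSE_TO_DRIVE
    if not p.failure_likely:
        return Action.CONTINUE
    if p.seconds_until_failure >= warning_budget_s:
        return Action.WARN_AND_HANDOVER
    return Action.PULL_OVER
```

The point is only that the decision to warn, pull over, or refuse can be made ahead of the failure rather than at the instant of it.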
In any case, I'm arguing that the predictable problem of inattention is outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the time. Yes, from time to time Level 2 autopilots will abruptly hand control over to an inattentive human who runs off a cliff. But that balances against all the accidents humans regularly get themselves into in situations a Level 2 system would handle with ease. It isn't a trolley problem, it's trading a big problem for a small one.
If you actually look at the SAE J3016_201609 standard, your goalpost-moving takes you beyond level 2. "Giving ample warning" puts you in level 3, whereas "pulling over as a default action" puts you in level 4.
The original point - that level 2 is a terrible development goal for the average human driver - still stands.
Yeah, you're talking about level 3. Most people think that's not a realistic level because "ample warning" requires seeing far into the future. Better to go straight to L4.
Also, you are definitely invoking the trolley problem: trading a big number of deaths that aren't your fault for a smaller number that are. Again, not the issue here. L2 needs an alert human backup. Otherwise it could very well be less safe.
But I would say the thrust of your argument is not that off, if we just understand it as "we need to go beyond L2, pronto".
NO, a higher licensing bar for human drivers will NOT solve the problem; it would only exacerbate it (and I'm ALL FOR setting a higher licensing bar for humans for other reasons).
The problem here is NOT the untrained driver -- it is the attention span and loss of context.
I've undergone extensive higher training levels and passed much higher licensing tests to get my Road Racing license.
I can tell you from direct experience of both that the requirements of high-performance driving are basically the same as the requirements to successfully drive out of an emergency situation: you must
1) have complete command of the vehicle,
2) understand the grip and power situation at all the wheels, AND
3) have full situational awareness and understand A) all the threats and their relative damage potential (oncoming truck vs tree, vs ditch, vs grass), and B) all the potential escape routes and their potential to mitigate damage (can I fit through that narrowing gap, can I handbrake & back into that wall, do I have the grip to turn into that side road... ?).
Training will improve #1 a lot.
For #2, knowing the grip and power situation, and #3, situational awareness of the threats and escapes, there is no substitute for being alert and aware IN THE SITUATION AHEAD OF TIME.
When driving at the limit, either racing or in an emergency, even getting a few tenths of a second behind can mean big trouble.
When you are actively driving and engaged, you HAVE CURRENT AWARENESS of road, conditions, traffic, grip, etc. You at least have a chance to stay on top of it.
With autopilot, even with the skills of Lewis Hamilton, you are already so far behind as to be doomed. 60 mph = 88 feet/sec. It'll be a minimum of two seconds from when the autopilot alarms before you can even begin to get the situation and the wheel in hand. By then you're nearly 60 yards downrange, if you haven't already hit something.
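Putting rough numbers on that (using the 88 ft/s figure above; the reaction times are just sample values):

```python
# Distance covered while catching up after an autopilot alarm.
# 60 mph = 88 ft/s; the reaction times are illustrative sample values.

speed_ft_per_s = 88.0   # 60 mph

for reaction_s in (1.0, 2.0, 3.0):
    distance_ft = speed_ft_per_s * reaction_s
    print(f"{reaction_s:.0f} s to get back in the loop -> "
          f"{distance_ft:.0f} ft ({distance_ft / 3:.0f} yards) downrange")
# 2 s -> 176 ft, nearly 60 yards, before you can even begin to respond
```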
Even with skills tested to exceed the next random 10,000 drivers on the road, the potential for this situation to occur would terrify me.
I might use such a partial system in low-risk situations like slow traffic, where it's annoying and the energies involved are fender-bender level. Otherwise, no way. Human vigilance and context switching is just not that good.
I can't wait for fully-capable autodriving technology, but this is asking for trouble.
Quit cargo-culting technology. There is a big valley of death between assist technologies and full-time automation.
You make an important point. This is something I see a lot of people gloss over in these discussions.
It's a question that both sides of the discussion claim answers to, and both sound reasonable. The only real answer is data.
As you've said, killing 100% of the time in the 1% scenarios may very well be better than humans driving all the time. Better, as defined by less human life lost / injuries.
Though one minor addition to that is human perception. Even if numerically I've got a better chance to survive, not be injured, etc. in a 99% perfect auto-car, I'm not sure I'd buy it. Knowing that if I hear that buzzer I'm very likely to die is... a bit unsettling.
Personally, I'm just hoping for more advanced cruise control, with radar identifying 2+ cars ahead of me and knowing about upcoming stops, etc. It's a nice middle ground for me until we get the Level 5 thing.
The statement at the end of your comment made me wonder if there will be a time in the future when you cannot disengage the automation in the car you're currently in unless you have some sort of advanced license; something like the layman's version of the CDL.
That solution does not work; it will just increase the number of people driving without a license. For example, in France the driving license is quite hard to obtain: you need around 20-30 hours of tutoring before you can attempt the test, and passing is not a sure thing. The consequence is that there are a lot of drivers without a license, and they are implicated in a high number of accidents.