> The level-2 driving that Tesla is pushing seems like a worst case scenario to me
What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still.
A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.
> Neither the driver nor the car manufacturer will have clear responsibility when there is an accident
This is not how courts work. The specific circumstances will be considered. Given the novelty of the situation, courts and prosecutors will likely pay extra attention to every detail.
That's not what the concern is based on. It's rooted in what we've learned about autopilot on planes and dead man's switches in trains. Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be. There's a cost to regaining situational awareness when retaking control that must be borne exactly when it can't be afforded, in an emergency.
> It's rooted in what we've learned about autopilot on planes and dead man's switches in trains
Pilots and conductors are trained professionals. The bar is lower for the drunk-driving, Facebooking and texting masses.
> Systems that do stuff automatically most of the time and only require human input occasionally are riskier than systems that require continuous human attention, even if the automated portion is better on average than a human would be
This does not appear to be bearing out in the data [1].
You're misunderstanding the data and the concern. Currently, Tesla Autopilot frequently disengages as part of its expected operation, handing control back to the driver. Thus, the human driver remains an attentive and competent partner to the autopilot system. That data is based on today's effective partnership between human and computer.
The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous.
> The concern is that as level 2 autopilot gets better and disengagements go down, the human's attentiveness will degrade, making the remaining disengagement scenarios more dangerous
A Level 2 autopilot should be able to better predict when it will need human intervention. If the autopilot keeps itself in situations where it does better than humans most (not all) of the time, the system will outperform.
My view isn't one of technological optimism. It's derived from the low bar set by humans.
The problem is that in L2, the bar for the system as a whole is set by the low bar for humans, specifically their reactions in an emergency. If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.
That's why Google concluded that L5 was the only way to go. You only get the benefit of computers being smarter than humans if the computer is in charge 100% of the time, which requires that its performance in the 1% of situations where there is an emergency must be better than the human's performance. That is the low bar to meet, but you still have to meet it.
> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead. And what people are saying here is that L2 automation increases the risk that the human will fuck up in that 1%, by decreasing their situational awareness in the remainder of time.
Humans regularly mess up in supposedly-safe scenarios. Consider a machine that kills everyone in those 1% edge cases (which are in reality less frequent than 1%) and drives perfectly 99% of the time. I hypothesise it would still outperform humans.
Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.
> I'd hypothesise that a machine that kills everyone in those 1% edge cases (which are actually less frequent than 1%) but drives perfectly 99% of the time would still outperform humans.
Well, no.
Some quick googling suggests that the fatality rate right now is roughly 1 per 100 million miles. So, for certain fatality whenever control is handed back to the human to be an improvement, that handoff would have to happen only about once in the combined lifetimes of every 500 cars or so. In other words, the car would, for all practical purposes, have to be self driving.
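Rough back-of-the-envelope behind that, assuming a ~200,000-mile vehicle lifespan (my assumption, not a figure from the study) and treating every handoff as fatal:

    # How rare would always-fatal handoffs have to be to match today's record?
    fatalities_per_mile = 1 / 100_000_000   # ~1 fatality per 100 million miles (rough US figure)
    miles_per_car_lifetime = 200_000        # assumed average vehicle lifespan

    # Break-even: handoffs can occur no more often than fatalities do today.
    max_handoffs_per_mile = fatalities_per_mile
    car_lifetimes_per_handoff = 1 / (max_handoffs_per_mile * miles_per_car_lifetime)
    print(car_lifetimes_per_handoff)  # -> 500.0, i.e. one handoff per ~500 car lifetimes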
"Of course, you won't have 100% death in the edge cases. Either way, making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers."
The part that really bothers me (for some reason) is that those edge cases are frequently extremely mundane, uninteresting driving situations that even a child could resolve. They simply confuse the computer, for whatever reason.
I'm genuinely interested to see how consumers react to a reality wherein their overall driving safety is higher, but their odds of being killed (or killing others) are spread evenly across all driving environments.
Imagine the consumer (and driving habits) response to the first occasion wherein a self-driving car nicely drives itself through a 25 MPH neighborhood, comes to a nice stop at a stop sign, and then drives right over the kid in the crosswalk that you're smiling and waving at. Turns out the kid's coat was shimmering weirdly against the sunlight. Or whatever.
> making the majority of travel safe in exchange for making edge cases more deadly to untrained drivers has a simple solution: a higher bar for licensing human drivers.
You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.
I hear you trying to make a Trolley Problem argument, but that is not the issue here. L2 is dependent on humans serving as a reliable backup.
> You are still misunderstanding the concern. The problem is not poorly trained drivers. The problem is that humans become less attentive after an extended period of problem-free automated operation.
I understand the concern. I am saying the problem of slow return from periods of extended inattention is not significant in comparison to general human ineptitude.
Level 2 systems may rely on "humans serving as a reliable backup," but they won't always need their humans at a moment's notice. Being able to predict failure modes and (a) give ample warning before handing over control, (b) take a default action, e.g. pulling over, and/or (c) refuse to drive when those conditions are likely: all of these emerge as possible solutions.
In any case, I'm arguing that the predictable problem of inattention is outweighed by the stupid mistakes Level 2 autopilots will avoid 99% of the time. Yes, from time to time Level 2 autopilots will abruptly hand control over to an inattentive human who runs off a cliff. But that balances against all the accidents humans regularly get themselves into in situations a Level 2 system would handle with ease. It isn't a trolley problem, it's trading a big problem for a small one.
If you actually look at the SAE J3016_201609 standard, your goalpost-moving takes you beyond level 2. "Giving ample warning" puts you in level 3, whereas "pulling over as a default action" puts you in level 4.
The original point - that level 2 is a terrible development goal for the average human driver - still stands.
Yeah, you're talking about level 3. Most people think that's not a realistic level because "ample warning" requires seeing far into the future. Better to go straight to L4.
Also, you are definitely invoking the trolley problem: trading a big number of deaths that aren't your fault for a smaller number that are. Again, not the issue here. L2 needs an alert human backup. Otherwise it could very well be less safe.
But I would say the thrust of your argument is not that off, if we just understand it as "we need to go beyond L2, pronto".
NO, a higher licensing bar for human drivers will NOT solve the problem, it would only exacerbate it (and I'm ALL FOR setting a higher licensing bar for humans for other reasons).
The problem here is NOT the untrained driver -- it is the attention span and loss of context.
I've undergone extensive higher training levels and passed much higher licensing tests to get my Road Racing license.
I can tell you from direct experience of both that the requirements of high-performance driving are basically the same as the requirements to successfully drive out of an emergency situation: you must
1) have complete command of the vehicle,
2) understand the grip and power situation at all the wheels, AND
3) have a full situational awareness and understand A) all the threats and their relative damage potential (oncoming truck vs tree, vs ditch, vs grass), and B) all the potential escape routes and their potential to mitigate damage (can I fit through that narrowing gap, can I handbrake & back into that wall, do I have the grip to turn into that side road... ?).
Training will improve #1 a lot.
For #2, the grip and power situation, and #3, situational awareness of the threats and escape routes, there is no substitute for being alert and aware IN THE SITUATION AHEAD OF TIME.
When driving at the limit, either racing or in an emergency, even getting a few tenths of a second behind can mean big trouble.
When you are actively driving and engaged, you HAVE CURRENT AWARENESS of road, conditions, traffic, grip, etc. You at least have a chance to stay on top of it.
With autopilot, even with the skills of Lewis Hamilton, you are already so far behind as to be doomed. 60 mph = 88 feet/sec. It'll be a minimum of two seconds from when the autopilot alarms before you can even begin to get the situation and the wheel in hand. You're now nearly 60 yards downrange, if you haven't already hit something.
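Quick sketch of the distances involved, using that two-second figure (the speeds other than 60 mph are just illustrative):

    # Distance covered while the driver regains the situation after an autopilot alarm.
    # The 2-second handoff delay is the figure from the comment above.
    FEET_PER_MILE = 5280
    SECONDS_PER_HOUR = 3600
    handoff_delay_s = 2.0

    for mph in (30, 60, 75):
        feet_per_second = mph * FEET_PER_MILE / SECONDS_PER_HOUR
        distance_ft = feet_per_second * handoff_delay_s
        print(f"{mph} mph -> {feet_per_second:.0f} ft/s, {distance_ft:.0f} ft "
              f"(~{distance_ft / 3:.0f} yd) before you even start reacting")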
Even with skills tested to exceed the next random 10,000 drivers on the road, the potential for this situation to occur would terrify me.
I might use such a partial system in low-risk situations like slow traffic, where it's annoying to drive and the energies involved are fender-bender level. Otherwise, no way. Human vigilance and context switching are just not that good.
I can't wait for fully-capable autodriving technology, but this is asking for trouble.
Quit cargo-culting technology. There is a big valley of death between assist technologies and full-time automation.
You make an important point. This is something I see a lot of people gloss over in these discussions.
It's a question that both sides of the discussion claim answers to, and both sound reasonable. The only real answer is data.
As you've said, killing 100% of the time in the 1% scenarios may very well be better than humans driving all the time. Better, as defined by fewer lives lost and fewer injuries.
One minor addition to that, though, is human perception. Even if numerically I've got a better chance to survive, not be injured, etc. in a 99%-perfect auto-car, I'm not sure I'd buy it. Knowing that if I hear that buzzer I'm very likely to die is... a bit unsettling.
Personally I'm just hoping for more advanced cruise control, with radar identifying 2+ cars ahead of me and knowing about upcoming stops, etc. It's a nice middle ground for me until we get the Level 5 thing.
The statement at the end of your comment made me wonder if there will be a time in the future where you cannot disengage the automation in the car you're currently in unless you have some sort of advanced license; something like the layman's version of the CDL.
That solution does not work; it will just increase the number of people driving without a license. For example, in France the driving license is quite hard to obtain: you need around 20-30 hours of tutoring before you can attempt the test, and passing is not a sure thing. The consequence is that there are a lot of drivers without a license, and they are implicated in a high number of accidents.
> If the computer safely drives itself 99% of the time but in that 1% when the human needs to take control, the human fucks up, the occupants of the vehicle are still dead
Not dead, which I feel is important to point out. Involved in an incident, possibly a collision or loss of lane, but really it's quite hard to get dead in modern cars. A quick and dirty google shows 30,000 deaths and five and a half million crashes annually in the US - that's half a percent.
So in your hypothetical the computer drives 99% of the time, and of the 1% fuckups, less than 1% are fatal.
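Rough numbers on that, treating handoff crashes as if they resemble ordinary crashes (which is an assumption, not something the data shows):

    # What fraction of US crashes are fatal, per the figures above?
    deaths_per_year = 30_000
    crashes_per_year = 5_500_000
    fatal_fraction = deaths_per_year / crashes_per_year
    print(f"{fatal_fraction:.2%}")  # -> ~0.55%, i.e. roughly one death per 200 crashes
    # So even if every botched handoff ended in a crash, only about 1 in 200 of those
    # crashes would be expected to be fatal, under that assumption.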
I like your creative thinking, but that wouldn't work. An immediate problem is that it would only train the driver to pay attention when they hear a disengagement chime. L2 depends on the driver to monitor the autopilot continuously.
More productively, Tesla currently senses hands on the wheel. Perhaps they could extend that with an interior camera that visually analyzes the driver's face to ensure their eyes are on the road.
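A minimal sketch of what such an eyes-on-road watchdog could look like; the gaze-estimation call and the thresholds below are hypothetical placeholders, not anything Tesla has described:

    import time

    EYES_OFF_WARN_S = 2.0    # nag the driver after this long looking away (assumed threshold)
    EYES_OFF_LIMIT_S = 6.0   # escalate / begin controlled slowdown after this long (assumed)

    def estimate_gaze_on_road() -> bool:
        # Placeholder for whatever camera-based face/gaze model would actually run here.
        return True

    def watchdog_loop():
        eyes_off_since = None
        while True:
            if estimate_gaze_on_road():
                eyes_off_since = None
            else:
                eyes_off_since = eyes_off_since or time.monotonic()
                elapsed = time.monotonic() - eyes_off_since
                if elapsed > EYES_OFF_LIMIT_S:
                    print("escalate: continuous chime, slow the car, require takeover")
                elif elapsed > EYES_OFF_WARN_S:
                    print("warn: visual/audible nag to look back at the road")
            time.sleep(0.1)  # check roughly 10 times per second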
Recent Honda CRVs can have an attention-monitoring system in them. I'm not sure how it works, but it does seem to detect when the driver isn't looking around.
> What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1].
Actually the study explicitly doesn't show that.
First of all, the study purely measures accident rates before and after installation, so miles driven by humans are in both buckets. Second of all, the study is actually comparing Teslas before and after the installation of Autosteer, and prior to the installation of Autosteer, Traffic Aware Cruise Control was already present. According to the actual report:
The Tesla Autopilot system is a Level 1 automated system when operated with TACC enabled and a Level 2 system when Autosteer is also activated.
So what this report is actually showing is that a Level 2-enabled car is safer than a Level 1-enabled car. Extrapolating that to actual miles driven with Level 2 versus Level 1 is beyond the scope of the study, and comparing Level 1 or Level 2 to human drivers is certainly beyond the scope of the study.
You are correct. We do not have definitive data that the technology is safe. That said, we have preliminary data that hints it's safer and nothing similar to hint it's less safe.
"What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers [1]. It seems probable Level 2 systems will be better still."
As far as I know it is indeed correct that autopilot safety is statistically higher than manual driving safety (albeit with a small sample size).
However, something has always bothered me about that comparison ...
Is it fair to compare a manually driven accidental death (like ice or a wildlife collision) with an autopilot death that involves a trivial driving scenario that any human would have no trouble with?
I don't know the answer - I'm torn.
Somehow those seem like apples and oranges, though ... as if dying in a mundane (but computer-confusing) situation is somehow inexcusable in a way that an "actual accident" is not.
"Appears" is the operative word. The new system is going to kill somebody. It hinges on building a whitelist of geolocated problematic radar signatures to avoid nuissance braking [1]. It's only a matter of time before a real danger that coincides with a whitelisted location causes a crash.
> What are you measuring? The current autopilot already appears to be materially safer, in certain circumstances, than human drivers
That's a good question. Clearly, existing self-driving tech is safer than human drivers on average. However, "average" human driving includes texting while driving, drunk driving, falling asleep at the wheel, etc. Is the appropriate comparison the "average" driver, or a driver who is alert and paying attention?
> A refrain I hear, and used to believe, is that machine accidents will cause public uproar in a way human-mediated accidents don't. Yet Tesla's autopilot accidents have produced no such reaction. Perhaps assumptions around public perceptions of technology need revisiting.
Have there been any Tesla autopilot fatalities with the right conditions to spark outrage? That's a sincere question as maybe I've missed some which would prove your point.
The only major incident I'm aware of is one in which only the driver of the car was killed. In an accident like that it is easy to handwave it away pretty much independent of any specifics (autopilot or no).
A real test of public reaction would involve fatalities to third parties, particularly if the "driver" of the automated vehicle survived the crash.
I'm surprised you believe this. Drivers run people down every day and nobody even investigates the cause. Motorists kill about a dozen pedestrians every month in New York City, and historically only about half of those drivers get even a failure-to-yield ticket. Meat-puppets are demonstrably unfit to operate vehicles in crowded urban environments, everybody knows this, and nobody is outraged when the people die.
Indeed, it's probably best not to measure the utility of this tech based on preemptive predictions of how an emotional public will react or the reactions of outrage-driven media with terribly short attention spans.
The actual performance of these machines will be the ultimate test. If the technology consistently improves safety, then I don't really see many barriers existing here; the current unknowns and semantics surrounding it will be worked out in markets and in courts over an extended period of time and will ultimately be driven (primarily) by rationality in the long run.
> The current autopilot already appears to be materially safer, in certain circumstances
It depends on how you measure this. We always talk about humans being bad at driving. Humans are actually amazingly good drivers, conditioned upon being alert, awake, and sober. Unfortunately a good fraction of people are in fact not alert. If you don't condition on this, then yes, humans suck.
(Put another way, the culture of overworking people that we, including companies such as Tesla, foster is probably more responsible for car accident deaths than anything else.)
The FAA takes pilot rest time seriously. Considering car accident deaths exceed airplane deaths by a few orders of magnitude, it's about time the rest of the world took rest equally seriously as well.
[1] https://www.bloomberg.com/news/articles/2017-01-19/tesla-s-a...