According to data obtained from the self-driving system, the system first registered radar and LIDAR
observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph.
As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian
as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver
was needed to mitigate a collision (see figure 2).
According to Uber, emergency braking maneuvers are
not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle
behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to
alert the operator.
I worked on the autonomous pod system at Heathrow airport[1]. We used a very conservative control methodology; essentially the vehicle would remain stopped unless it received a positive "GO" signal from multiple independent sensor and control systems. The loss of any "GO" signal would result in an emergency stop. It was very challenging to get all of those "GO" indicators reliable enough to prevent false positives and constant emergency braking.
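For the curious, the core of that logic is dead simple; here's a minimal sketch (Python, with invented signal names -- the real system was of course not structured like this):

    # Hypothetical "GO-chain": the vehicle may move only while every
    # independent subsystem is actively asserting GO. Anything else is an E-stop.
    from dataclasses import dataclass

    @dataclass
    class GoSignal:
        name: str
        asserted: bool = False   # defaults to "not GO"; must be actively refreshed

    def may_proceed(signals):
        # Loss (or absence) of any single GO signal means stop immediately.
        return bool(signals) and all(s.asserted for s in signals)

    signals = [GoSignal("obstacle_clear", True),
               GoSignal("track_position_valid", True),
               GoSignal("supervisory_link_ok", False)]
    if not may_proceed(signals):
        print("E-STOP: at least one GO signal missing")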
The reason we were ultimately able to do this is because we were operating in a fully-segregated environment of our own design. We could be certain that every other vehicle in the system was something that should be fully under our control, so anything even slightly anomalous should be treated as a hazard situation.
There are a lot of limitations to this approach, but I'm confident that it could carry literally billions of passengers without a fatality. It is overwhelmingly safe.
Operating in a mixed environment is profoundly different. The control system logic is fully reversed: you must presume that it is safe to proceed unless a "STOP" signal is received. And because the interpretation of image & LIDAR data is a rather... fuzzy... process, that "STOP" signal needs to have fairly liberal thresholds, otherwise your vehicle will not move.
Uber made a critical mistake in counting on a human-in-the-loop to suddenly take control of the vehicle (note: this is why Level 3 automation is something I'm very dubious about), but it's important to understand that if you want autonomous vehicles to move through mixed-mode environments at the speeds which humans drive, then it is absolutely necessary for them to take a fuzzy, probabilistic approach to safety. This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless. The design of the overall system is inherently unsafe.
Do you find this unacceptable? If so, then ultimately the only way to address this is through changing the design of the streets and/or our rules about how they are used. These are fundamentally infrastructural issues. Merely swapping out vehicle control systems -- robot vs. human -- will be less revolutionary than many expect.
> The loss of any "GO" signal would result in an emergency stop.
That's an E-stop chain and that's exactly how it should work.
But the software as described in the NTSB report was apparently bad enough that they essentially hardwired an override on their emergency stop. The software equivalent of putting a steel bar into a fuse receptacle. The words that come to mind are 'criminal negligence'. The vehicle would not have been able to do an E-stop even if it was 100% sure it had to do just that, nor did it warn the human luggage.
The problem here is not that the world is so unsafe that you will have to make compromises to get anywhere at all, the problem here is that the software is still so buggy that there is no way to safely navigate common scenarios. Pedestrian on the road at night is one that I've encountered twice on my trips and they did not lead to any fatalities because when I can't see I slow down. If 6 seconds isn't enough to make a decision you have no business being on the road in the first place.
> Pedestrian on the road at night is one that I've encountered twice on my trips and they did not lead to any fatalities because when I can't see I slow down. If 6 seconds isn't enough to make a decision you have no business being on the road in the first place.
I've seen a few people comment on the footage that they too would have run the pedestrian over, to which my only response is: I sure hope you don't have a driver's license [anymore]!
The vast majority of those people are being (purposely?) deluded by a very misleading video showing an apparently pitch black section of road. In reality, it was a lighted road and the dashcam footage had a very compressed dynamic range.
To me that means they weren't walking directly under the street lamp. If you look at other people's videos of that street at night on YouTube, it's well lit. Street lamps cast a wide spotlight so you don't have to be directly under it to still have illumination.
> If you look at other people's videos of that street at night on YouTube, it's well lit.
I don't think you can look at videos and judge the level of illumination well; their videos could be more or less accurate than Uber's, and what I see depends on codecs, video drivers, my monitor, etc. Also, any video can easily be edited these days.
Is there a way to precisely measure the illumination besides a light meter? Maybe we can use astronomers' tricks and measure it in relation to objects with known levels of illumination. Much more importantly, I'm not even sure what properties of light we're talking about - brightness? saturation? frequencies? - nor which properties matter how much for vision, for computer vision, and for the sensors used by Uber's car in particular.
I'm not taking a side; I'm saying I have yet to see reliable information on the matter, or even a precise definition of the question.
It is generally unusual for any camera (not using infrared) to outperform the human eye in low-light situations. If a camera (any camera) shows a clear image at all, a person would almost certainly have seen it.
Dashcam videos typically do not capture nighttime scenes very well. Any human would have been able to see the pedestrian well in advance of a collision. There are cell phone videos of that same stretch of road at night and they show the illumination level much better than the Uber video.
It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.
Don't conflate that with Uber's screw-up here. This wasn't a situation where a fatality was unavoidable or where a very safe system had a once-in-a-blue-moon problem. It's one where they just drove around not-very-safe cars.
Agreed. Uber disabled a safety feature that would have prevented this fatality -- but that doesn't mean that the automation was therefore safe apart from Uber's mismanagement of it. It's entirely believable that had that safety feature been fully enabled, it would have also e-braked in 1,000 other situations which didn't result in a collision. And false-positive e-brake events are definitely worth avoiding: they can get you rear-ended and injure unbelted passengers.
This doesn't mean that Uber therefore did the right thing in disabling the system; it probably means that the system shouldn't have been given control of the car in the first place. But my point is that there is no readiness level where driverless cars will ever be safe -- not in the same way that trains and planes are safe. The driving domain itself is intrinsically dangerous, and changing the vehicle control system doesn't change the nature of that domain. So if we actually care about safety, then we need to be changing the way that streets are designed and the rules by which they are used.
> It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.
And that is why I am so mad at Uber. They are compromising the public trust in autonomous cars with their reckless release policy. And thereby potentially endangering even more lives, as we have to convince the public of the advantages of this technology.
I agree with all of this except the tense: haven't they shut the whole thing down at this point with no immediate plans to start it up again? Or am I misremembering that?
They were testing in three places: San Francisco, Arizona, and Pittsburgh. They didn't want to get a license from California (probably because they couldn't follow the safety regulations), so they threw a tantrum and moved to AZ. Then after this fatality, they shut the AZ program down and are just testing in Pittsburgh.
That's not true. They shut down everywhere after this fatality. They just said that they'll shut down AZ permanently (not that AZ would probably let them do it anyway), and resume testing in Pittsburgh sometime soon, in a more limited way (which apparently the Pittsburgh mayor isn't wild about).
This is absolutely the right analysis of how these systems work and why you can't expect autonomous cars to halt traffic deaths. What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.
My company didn't start with this zero tolerance thing in our minds, but it turns out our self-delivering electric bicycles have a huge advantage for real-world safety because they weigh ~60 lbs when in autonomous mode and are limited to 12 mph. That's roughly the kinetic energy of me walking at a brisk pace, or basically something that won't kill purely from blunt force impact. I think the future for autonomy will be unlocked by low-mass and low-speed vehicles, not cars converted to drive themselves.
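Rough numbers for that kinetic energy comparison, if anyone wants to check (the masses and the brisk walking speed are assumptions on my part):

    # Back-of-the-envelope kinetic energy comparison (assumed masses and speeds).
    def ke_joules(mass_kg, speed_m_s):
        return 0.5 * mass_kg * speed_m_s ** 2

    MPH = 0.44704                              # metres per second per mph
    bike   = ke_joules(27.0,  12 * MPH)        # ~60 lb riderless bike at 12 mph
    walker = ke_joules(80.0,   4 * MPH)        # ~175 lb person at a brisk 4 mph
    car    = ke_joules(1500.0, 43 * MPH)       # typical car at the report's 43 mph

    print(f"bike ~{bike:.0f} J, walker ~{walker:.0f} J, car ~{car/1000:.0f} kJ")
    # The bike and the walker are within a small factor of each other;
    # the car carries hundreds of times more energy than either.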
> What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.
It hasn't shown that at all. It has documented beyond reasonable doubt that Uber should not be allowed to participate in real world tests of autonomous vehicles.
There are plenty of situations where people would fully accept a self driving vehicle killing someone but this isn't one of those.
The Uber crash has shown us that the public tolerance for AVs killing people is somewhere below what Uber delivered here, which is presumptively around 30x more dangerous than the mean human driver.
Uber had a fatality after 3 million miles of driving.
The mean fatality rate is approximately 1 per 100 million miles of driving.
It's a sample size of one, so the error bars are big, but it drives me insane that people are acting like the Uber cars are the ideal driverless cars of the imagined future, and are super safe. The available data (which is limited, but not that limited) is that Uber driverless cars are much, much, much more dangerous than mean human drivers.
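To put numbers on how big those error bars are (treating the single fatality as a Poisson count, using the mileage figures above):

    # One fatality in ~3 million miles vs. a baseline of ~1 per 100 million miles.
    uber_miles = 3e6
    human_rate = 1 / 100e6                     # fatalities per mile
    point_estimate = 1 / uber_miles

    # Exact 95% interval for the mean of a Poisson distribution with 1 observed event.
    ci_low, ci_high = 0.025, 5.57
    print(f"point estimate: {point_estimate / human_rate:.0f}x the human rate")
    print(f"95% interval:   {ci_low / uber_miles / human_rate:.1f}x "
          f"to {ci_high / uber_miles / human_rate:.0f}x the human rate")
    # ~33x as the point estimate; the interval runs from roughly the human
    # baseline up to nearly 200x it, so the error bars really are big.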
> My company didn't start with this zero tolerance thing in our minds, but it turns out our self-delivering electric bicycles
That actually sounds like a really interesting concept, one of those ideas that seems obvious only after someone suggests it. What company is this?
Right now, in the Seattle area, we are basically seeing a new littering epidemic in the form of sharable bicycles being left to rust away, unused, at random places. If the bike could cruise to its next user autonomously, that would really be a game-changer. "Bikes on demand" would turn bikesharing from (IMHO) a stupid idea into something that just might work.
Plus, the engineering challenges involved in automating a riderless bicycle sound fun.
Weel, we're in Bellevue. It's a super fun problem to work on, and one of the first things we figured out was that trikes won't work because of their width and how difficult they are to ride, so we got a two-wheeled bike to balance. The autonomy problems are easier than cars in a lot of ways, and this Uber case is something we don't deal with because our bikes can always stop when presented with a no-go situation since we're only autonomous when no one is riding.
That's good to hear, sounds like a very cool project. I could see this living up to at least some of the hype that the original Segways received.
The biggest challenge will probably be to keep people from screwing with the bikes, of course. :( An unoccupied bicycle cruising down the street or sidewalk will fire all sorts of mischievous neurons that onlookers didn't even know they had.
Definitely, will be interesting to test. We have several cameras onboard so that we can see what happened but an equal concern with vandalism is how people feel about being watched. We want to avoid feeling like your neighborhood is suddenly a panopticon. Still unsolved.
Hah, yeah it reminds me of a runaway shopping cart when you see our bike rolling. We expect people will get used to it eventually but we have some ideas to test in the future on how to make it more obvious, such as giving the bike a ‘face’ and having it lit up with LEDs that are visible from all angles. Def not a solved problem, but as far as design problems go it’s a pretty fun one.
Your analysis leaves much to be desired, though, as it comes perilously close to equating "we can't prevent 100% of fatalities" with "we shouldn't care about, learn from, or make changes in response to a fatality".
What the Uber crash has shown us is mostly the willingness of people on HN to excuse Silicon Valley darlings even when they actually demonstrably kill people.
I don't think it has anything to do with "Silicon Valley darlings" (of which Uber is certainly not one anymore). It has more to do with "super cool future tech" that they really want to see implemented in their lifetimes - so much so that they may make dubious arguments to support their position.
Potentially deadly? Maybe, sure, but at low speeds, up to 10 mph say, it is incredibly unlikely that falling off a bicycle (even with no helmet) will do more than cause bruises and a damaged ego.
Is this including the elderly who often will break a hip that way and then die of the complications? Because if so, that would not be comparable to a healthy young (< 60 yo) person falling.
Are there numbers on the average height of those fatal falls? If they're from balconies, roofs, etc., I'd say being on a bike (a few feet from the ground) would make it much safer.
Curious if you have ever fallen off a bike? I have fallen over several times on a bike while stationary (when learning to ride with clipless pedals), I have crashed bikes at much higher speeds as well, and I have watched my kids fall off of bikes lots of times while learning. In all of that, I have never seen a (or had my own) head hit the ground. Typically you hit the ground with your arms (slow speed or stationary fall) or your hips, back, or shoulders (if at higher speed).
Don't underestimate how dangerous even a small fall can be, you can end up fine but you could also end up smashing your face into the curb.
A friend of mine, in his 50s, very fit, cycling to work and back every day, broke both his arms while doing literally a 10-meter test ride in front of a bike store.
The bike's brakes were set up reversed compared to what he was used to, so he ended up braking with the front brake, flipping the bike over and breaking both his arms when he landed. His fault? Sure, but it's still a rather scary story about how quickly even mundane things can go really wrong.
I don't think he did, not much use for a bike when both your arms are in a plaster cast from hands to shoulders. Poor guy couldn't even go to the toilet without help.
Sure, but "[t]he system is not designed to alert the operator." At least they could have alerted the operator. This seems like reckless endangerment or negligent homicide. Luckily for Uber they hit a poor person and no one will hold them responsible. 1.3 seconds is a long time for the operator to act.
This highlights an interesting general point - in many situations, there is no simple safe fallback policy. On a highway, an emergency stop is not safe. This is a general problem in AI safety and is covered nicely in this youtube video, as well as the paper referenced there - https://www.youtube.com/watch?v=lqJUIqZNzP8
That depends; there could simply be no traffic behind you, which an experienced driver, and hopefully an automated one, would be monitoring.
Besides, there are many situations on the highway where an E-stop is far safer than any of the alternatives even if there is traffic behind you. Driving as though nothing has changed in the presence of an E-stop worthy situation is definitely not the right decision.
How intelligent is the ML driving the car? If the car slowed down and hit the 49 year old at a reduced speed the insurance payout to a now severely disabled individual would be far more expensive than the alternative insurance pay out with a pedestrian fatality. A choice between paying out for 40 years worth of around-the-clock medical care vs. a one-time lump-sum payout to the victim's family would be pretty obvious from a corporate point of view.
Are you seriously suggesting that the better software strategy is to aim for the kill because it is cheaper than possibly causing 'only' injury?
That should be criminal.
I'm all for chalking this one up to criminal negligence and incompetence, outright malice is - for now - off the table, unless someone leaks meeting notes from Uber where they discussed that exact scenario.
My point is that it's a black box and nobody outside of Uber knows what its priorities are. It could have just as easily mistaken the pedestrian leaned over pushing the bike for a large dog and then proceeded to run her over because it's programmed to always run dogs over at full speed on the highway. Outside of Asimov's "Three Laws of Robotics" there is nothing that dictates how self-driving cars should behave, so my unpopular idea above isn't technically breaking any rules.
Computers have vastly lower reaction time than humans. Computers have sensory input that humans lack (LIDAR). Computers don't get drowsy or agitated.
And "almost" is always a good idea when talking about a future that looks certain. Takes into account the unknown unknowns. And the known unknowns (cough hacking cough).
Fast reaction times, good sensors and unyielding focus are not enough to drive safely. An agent also needs situational awareness and an understanding of the entities in its environment and their relations.
Without the ability to understand its environment and react appropriately to it, all the good the fast reaction times will do to an AI agent is to let it take the wrong decisions faster than a human being.
Just saying "computers" and waving our hands about won't magically solve the hard problems involved in full autonomy. Allegedly, the industry has some sort of plan to go from where we are now (sorta kinda level-2 autonomy) to full, level-5 autonomy where "computers" will drive more safely than humans. It would be very kind of the industry if they could share that plan with the rest of us, because for the time being it sounds just like what I describe above, saying "computers" and hand-waving everything else.
That's a sociopolitical question more than a technical one. I posit that:
1.) Road safety -- as far as the current operating concept of cars is concerned (eg., high speeds in mixed environments) -- is not a problem that can be "solved". At best it can only ever be approximated. The quality of approximation will correspond to the number of fatalities. Algorithm improvements will yield diminishing returns: the operating domain is fundamentally unsafe, and will always result in numerous fatalities even when driven "perfectly".
2.) With regards to factors that contribute to driving safety, there are some things that computers are indisputably better at than humans (raw reaction time). There are other things that humans are still better at than computers (synthesising sensory data into a cohesive model of the world, and then reasoning about that world). Computers are continually improving their performance, however. While we don't have all the theories worked out for how machines will eventually surpass human performance in these domains, we don't have a strong reason to believe that machines won't surpass human performance in these domains. The only question is when. (I don't have an answer to this question).
3.) So the question is not "when will autonomous driving be safe" (it won't be), but rather: "what is the minimum level of safety we will accept from autonomous driving?" I'm quite certain that the bar will be set much higher for autonomous driving than for human driving. This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is. Look at the disparities in sociopolitical responses to, say, plane crashes and Zika virus, versus car crashes and influenza. Autonomous vehicles will be treated more as the former than the latter, and therefore the scrutiny they receive will be vastly higher.
4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
5.) Personally, I think that the algorithms won't be able to pass this public-acceptability threshold on their own, because even the best-imaginable algorithm, if adopted on a global basis, would still kill hundreds of thousands of people every year. That's still probably too many. I expect that full automation eventually will become the norm, but only as enabled by new types of infrastructure / urban design which enable it to be safer than automation alone.
> This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is.
This is a wonderfully concise way of describing a phenomenon that I have not been able to articulate well. Thank you.
OK, this is a very good answer- thanks for taking the time.
I'm too exhausted (health issues) to reply in as much detail as your comment deserves, but here's the best I can do.
>> 4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
Or at least it won't be morally justifiable for them to be a thing at all, unless they're sufficiently safer than humans- whatever "sufficiently" is going to mean (which we can't really know; as you say that has to do with public perception and the whims of a fickle press).
I initially took your assertion to mean that self-driving AI will inevitably get to a point where it can be "sufficiently" safer than humans. Your point (2.) above confirms this. I don't think you're wrong, there's no reason to doubt that computers will, one day, be as good as humans at the things that humans are good at.
On the other hand I really don't see this happening any time soon- not in my lifetime and most likely not in the next two or three human generations. It's certainly hard to see how we can go from the AI we have now to AI with human-level intelligence. Despite the successes of statistical machine learning and deep neural nets, their models are extremely specific and the tasks they can perform too restricted to resemble anything like general intelligence. Perhaps we could somehow combine multiple models into some kind of coherent agent with a broader range of aptitudes, but there is very little research in that direction. The hype is great, but the technology is still primitive.
But of course, that's still speculative- maybe something big will happen tomorrow and we'll all watch in awe as we enter a new era of AI research. Probably not, but who knows.
So the question is- where does this leave the efforts of the industry to, well, sell self-driving tech, in the right here and the right now? When you said self-driving cars will almost certainly be safer than humans- you didn't put a date on that. Others in the industry are trying to sell their self-driving tech as safer than humans right now, or in "a few years", "by 2021" and so on. See Elon Musk's claims that Autopilot is safer than human drivers already.
So my concern is that assertions about the safety of self-driving cars by industry players are basically trying to create a climate of acceptance of the technology in the present or near future, before it is even as safe as humans, let alone safer (or "sufficiently" so). If the press and public opinion are irrational, their irrationality can just as well mean that self-driving technology is accepted when it's still far too dangerous. Rather than setting the bar too high and demanding an extreme standard of safety, things can go the other way and we can end up with a diminished standard instead.
Note I'm not saying that is what you were trying to do with your statement about almost certainty etc. Kind of just explaining where I come from, here.
Likewise, thanks for the good reply! Hope your health issues improve!
I share your skepticism that AIs capable of piloting fully driverless cars are coming in the next few years. In the longer term, I'm more optimistic. There are definitely some fundamental breakthroughs which are needed (with regards to causal reasoning etc.) before "full autonomy" can happen -- but a lot of money and creativity is being thrown at these problems, and although none of us will know how hard the Hard problem is until after it's been solved, my hunch is that it will yield within this generation.
But I think that framing this as an AI problem is not really correct in the first place.
Currently car accidents kill about 1.3 million people per year. Given current driving standards, a lot of these fatalities are "inevitable". For example: many real-world car-based trolley problems involve driving around a blind curve too fast to react to what's on the other side. You suddenly encounter an array of obstacles: which one do you choose to hit? Or do you (in some cases) minimise global harm by driving yourself off the road? Faced with these kind of choices, people say "oh, that's easy -- you can instruct autonomous cars to not drive around blind curves faster than they can react". But in that case, the autonomous car just goes from being the thing that does the hitting to the thing that gets hit (by a human). Either way, people gonna die -- not due to a specific fault in how individual vehicles are controlled, but due to collective flaws in the entire premise of automotive infrastructure.
So the problem is that no matter how good the AIs get, as long as they have to interact with humans in any way, they're still going to kill a fair number of people. I sympathise quite a lot with Musk's utilitarian point of view: if AIs are merely better humans, then it shouldn't matter that they still kill a lot of people; the fact that they kill meaningfully fewer people ought to be good enough to prefer them. If this is the basis for fostering a "climate of acceptance", as you say, then I don't think it would be a bad thing at all.
But I don't expect social or legal systems to adopt a pragmatic utilitarian ethos anytime soon!
One barrier is that even apart from the sensational aspect of autonomous-vehicle accidents, it's possible to do so much critiquing of them. When a human driver encounters a real-world trolley problem, they generally freeze up, overcorrect, or do something else that doesn't involve much careful calculation. So shit happens, some poor SOB is liable for it, and there's no black-box to audit.
In contrast, when an autonomous vehicle kills someone, there will be a cool, calculated, auditable trail of decision-making which led to that outcome. The impulse to second-guess the AV's reasoning -- by regulators, lawyers, politicians, and competitors -- will be irresistible. To the extent that this fosters actual safety improvements, it's certainly a good thing. But it can be really hard to make even honest critiques of these things, because any suggested change needs to be tested against a near-infinite number of scenarios -- and in any case, not all of the critiques will be honest. This will be a huge barrier to adoption.
Another barrier is that people's attitudes towards AVs can change how safe they are. Tesla has real data showing that Autopilot makes driving significantly safer. This data isn't wrong. The problem is that this was from a time when Autopilot was being used by people who were relatively uncomfortable with it. This meant that it was being used correctly -- as a second pair of eyes, augmenting those of the driver. That's fine: it's analogous to an aircraft Autopilot when used like that. But the more comfortable people become with Autopilot -- to the point where they start taking naps or climbing into the back seat -- the less safe it becomes. This is the bane of Level 2 and 3 automation: a feedback loop where increasing AV safety/reliability leads to decreasing human attentiveness, leading (perhaps) to a paradoxical overall decrease in safety and reliability.
Even Level 4 and 5 automation isn't immune from this kind of feedback loop. It's just externalised: drivers in Mountain View learned that they could drive more aggressively around the Google AVs, which would always give way to avoid a collision.
So my contention is that while the AIs may be "good enough" anytime between, say, now and 20 years from now -- the above sort of problems will be real barriers to adoption. These problems can be boiled down to a single word: humans. As long as AVs share a (high-speed) domain with humans, there will be a lot of fatalities, and the AVs will take the blame for this (since humans aren't black-boxed).
Nonetheless, I think we will see AVs become very prominent. Here's how:
1. Initially, small networks of low-speed (~12mph) Level-4 AVs operating in mixed environments, generally restricted to campus environments, pedestrianised town centres, etc. At that speed, it's possible to operate safely around humans even with reasonably stupid AIs. Think Easymile, 2getthere, and others.
2. These networks will become joined-up by fully-segregated higher-speed AV-only right-of-ways, either on existing motorways or in new types of infrastructure (think the Boring Company).
3. As these AVs take a greater mode-share, cities will incrementally convert roads into either mixed low-speed or exclusive high-speed. Development patterns will adapt accordingly. It will be a slow process, but after (say) 40-50 years, the cities will be more or less fully autonomous (with most of the streets being low-speed and heavily shared with pedestrians and bicyclists).
Note that this scenario is largely insensitive to AI advances, because the real problem that needs to be solved is at the point of human interface.
The problem is that drivers rarely maintain the safety distance they should to avoid endangering themselves. BUT in that case, the car should also have noticed whether there was traffic close behind. Doing nothing in that case doesn't seem like the right decision at all.
Very good write-up anyway... indeed many things will have to change - probably the infrastructure, the vehicles, the software, the way pedestrians move, and driver behavior as well.
That quote is the crux of it when you pair it with this other section: "In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review."
So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash? Why not hire two people? Good god.
Move fast and disable the brakes. They in fact began testing with two people per car, but then decided to go with just one. An Uber spokeswoman stated [1]:
> We decided to make this transition [from two to one] because after testing, we felt we could accomplish the task of the second person—annotating each intervention with information about what was happening around the car—by looking at our logs after the vehicle had returned to base, rather than in real time.
However, this seems to contradict the NTSB report which indicates that it still was the driver's responsibility to perform this event tagging task, which necessarily implies taking your eyes off the road.
"Uber moved from two employees in every car to one. The paired employees had been splitting duties — one ready to take over if the autonomous system failed, and another to keep an eye on what the computers were detecting. The second person was responsible for keeping track of system performance as well as labeling data on a laptop computer. Mr. Kallman, the Uber spokesman, said the second person was in the car for purely data related tasks, not safety."
"So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash?"
This gets your license yanked around here. Same goes for texting and driving if you're caught. Even in stop-and-go traffic.
That's not just one error but a whole book of errors, and that last bit combined with the reliance on the operator to take action is criminal. (And if it isn't it should be.)
I hope that whoever was responsible for this piece of crap software loses a lot of sleep over it, and that Uber will admit that they have no business building safety critical software. Idiots.
For 6 seconds the system had crucial information and failed to relay it, for 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge.
Drunk drivers suck, but this is much worse. This is the equivalent of plowing into a pedestrian you know is there, while in full control of the vehicle, because you're so afraid that your crappy perception of the world will make you over-react to such situations too often that you treat killing that person as the lower risk.
Not to mention all the errors in terms of process and oversight that allowed this p.o.s. software to be deployed in traffic.
"for 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge."
This is so tragic. Even Volvo's own collision avoidance system would (could?) have mitigated the crash a fair bit. From Volvo's own spec sheet [1]: "For speeds between 45 and 70 km/h, the collision is mitigated."
In this case, the NTSB report mentions that the car was traveling at 43 mph, i.e. about 69 km/h :(.
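For a sense of how much even a very late brake application would have mattered, some rough kinematics (I'm assuming roughly 0.8 g of deceleration, i.e. a hard stop on dry pavement):

    # Speed shed if hard braking had started 1.3 s before impact (assumed deceleration).
    MPH = 0.44704
    decel = 0.8 * 9.81                         # ~7.8 m/s^2, dry-road emergency braking
    v0 = 43 * MPH                              # ~19.2 m/s
    braking_time = 1.3                         # seconds between "braking needed" and impact
    v_impact = max(0.0, v0 - decel * braking_time)
    print(f"impact at ~{v_impact / MPH:.0f} mph instead of 43 mph "
          f"({(v_impact / v0) ** 2:.0%} of the original kinetic energy)")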
What bothers me is that these systems are on public roads, without public oversight. Sure, Uber got permission from the local authorities, but getting an independent team of technologists and ethicists to sign off on the basic parameters should have been the bare minimum ... yes, that would take time, but do we really want to give companies, especially ones like Uber with a history of ethical transgressions, the benefit of the doubt?
[1] https://tinyurl.com/y9sp2fmu (WARNING: This opens/downloads the PDF I referred to above. Page 5 has the paragraph on pedestrian collision detection specs)
Is this necessarily different from a car placed in normal cruise control (automatic throttle, no braking), where the driver is under the obligation to manage braking in an emergency? It seems like the human driver here was still under that obligation, but failed to act. (Possibly because they were distracted, but that's not unique to this situation.)
A cruise control system doesn't look out your front window and doesn't steer the car. So the driver is still actively engaged in operating the vehicle, just has one less lever to work on. And at the first tap on the brake it disengages.
If you want to compare it with a car operating on cruise control you'd have to sedate the driver.
"Looking out the window" and "steering the car" is pretty much exactly what current year cruise control systems do. Just go look at the Subaru Eyesight systems, which depend on cameras that face out the upper part of the windshield.
https://www.subaru.com/engineering/eyesight.html
(Subaru's doesn't do active lanekeeping, but lots of other manufacturers like BMW and Ford do.)
The newer cruise controls have lane-keep assist and adaptive cruise control - you don't have to actively steer or brake. On an open road, there's effectively little difference from the Uber vehicle, which would also let you disengage autonomous mode by braking or otherwise interacting with the controls. (The newest mass-market cruise controls are "stop and go", which means they'll even bring the car to a full stop, then start driving again.)
I thought braking is limited by the tire/street contact, not by the brakes?
Or at least it should be. That's why there is a v_max that a car is not allowed to exceed, and a faster or heavier car will have better brakes.
And 70 km/h, as here, should be far below v_max.
It's a combination, but any modern (disc) brake can lock a wheel, so in practice this should not be an issue. The only time it might be a problem is after dragging the brakes on a long downslope (which is why you shouldn't do that: the brake fluid will boil and your brakes will stop working).
Ok, so the AI is too panicky and will brake for no apparent reason so they have to disable that bit while they work on it. Fine.
But why the hell wouldn't you have the thing beep to alert the driver that the AI thinks there is a problem and they need to pay extra attention? In fact it seems like this would be helpful when trying to fine tune the system.
Deployment to me means to release into the real world, otherwise it is just a test and whether that test is in a closed course or even in a simulation makes no difference to me. That environment would not endanger random strangers.
If the system had enough false positives that they decided to disable the braking, it's possible it had enough false positives that the operator would have learned to ignore the alert beep too.
What strikes me as odd is that once an unknown object is detected on the road, the car should already be alerting the driver and slowing down.
That buys you time for the AI classifier to do its thing and isn't as dangerous as emergency braking later on, so it seems like sensible behavior all around.
The more reports I've read on accidents involving self-driving cars, the more I've become convinced that the current state of the art in this field is merely an illusion. Reading between the lines, the complete disregard for basic safety protocols like you've just described comes across as more than just a brazen continuation of the "move fast and break things" Silicon Valley culture. Viewed in this light, this entire niche of tech R&D begins to take on the appearance of a giant game of smoke and mirrors being played for the benefit of those recipients of Big Investment $$$ that wooed investors to bet big with promises of delivering a marketable self-driving car within the next decade.
The way I see it, there is only one way to make sense of a field where the most respectable R&D house (Google/Alphabet) limits their vehicles to a relative snail's pace while everyone else (including notoriously unethical shops like Uber) is taking a gung-ho, "the only limit is the speed limit" approach. That is to assume "everyone else" is cheating the game by choosing a development path that gives the appearance of being functional in the "rough but will get there with enough polish" sense, while the truth is that it's merely a mountain of cheap and dirty hacks that will never achieve the goals demanded by investors.
The only reason a company would overlook such a simple safety protocol as "slow down until a potentially dangerous object is positively identified" is if their "AI" fails to positively identify objects so frequently that the car could never achieve what a human passenger would consider a consistent, normal driving pace. The same can be said for any "AI" that can't be trusted to initiate panic braking in response to a positively identified collision scenario with a positively identified object. The fact that they specifically wrote an "AI"-absolving workaround for that scenario into their software means the frequency of false positives must be so high as to make the frequency of "false alarm" panic braking incidents unacceptable for human passengers.
I'd guess that "unknown objects" happen all the time - it seems like that's the default until something is classified, so tire scrap or plastic bag would also fall into that category. If the car slowed down every time it saw one it would never get anywhere, it should only slow if the object gets classified as something you can't hit and is clearly in the path of the vehicle. Seems like that decision happened too late here, requiring emergency braking... which was disabled (!).
The missing bit here is "...unknown is detected on the road" - if there is a tire scrap or plastic bag or anything that looks suspicious a normal human driver would slow down and give it extra attention, then try to avoid it anyway. You don't drive over / through an object unless you know that it is not harmful, and even then you try to avoid it so you don't drive over a bag... of nails.
It's not that simple. Humans can also predict movement, and that is necessary because cars don't stop instantaneously. So you have a person walking toward the street and your dumb smart car is constantly hitting the brakes.
This tech simply isn't there yet and I doubt it's all that close.
People don't drive like that. People expect reasonable behavior from other people, and that includes expecting that they won't jump into the road. If Uber drives unreasonably, its passengers will prefer another taxi that drives more aggressively.
Except no one drives that way and no one would put up with it. Do you slow down every time someone on the sidewalk takes a step toward the street? I doubt it (and, if you do, please never get in front of me.)
If I think they might walk into the street I of course slow down (as much as I deem necessary). What kind of question is that?
If I think I saw children running around between cars in the parking lane, are the parents probably morons? Yes. Do I slow down and prepare to slam the brakes in case a child suddenly runs in front of me? Ab-so-lute-ly.
Even if people behave idiotically on the street, it is obviously still my fault if I run them over.
Humans track gaze and understand intent, so you kind of know whether someone exiting a shop will or will not proceed straight into the road.
That said, as I said above, whenever a car senses a situation it doesn't understand it should slow down; that's enough to be safe later on as the situation develops, and it's different from hitting the brakes at full force.
And anyway, expecting autonomous cars to drive at full speed all the time is moronic; humans don't do that either, precisely because it's dangerous.
As a human, if I saw a plastic bag blow into the street at night I'd slow down until I was sure it wasn't an animal or something. Seems like basically the same process.
Sure, but if you were on a highway, would you slam on the brakes? I hope not.
There's a calculation here of balancing the perceived risk of an obstruction with the consequences of avoiding it or braking in time. Drivers have to make this decision all the time; on a highway they will generally assume it's safer to hit most things than to swerve or panic-brake, because it's most likely not that dangerous to collide with.
At least one stat I saw from AAA is that ~40% of the deaths from road debris result from drivers swerving to avoid them.
Not as many since they banned them here, but it used to be quite common given that it's pretty windy here in SF every afternoon and there's lots of trash / debris around. It's still pretty common to see blowing paper or other debris (tire shreds) in the road at least a few times in my 12-mile commute.
This has been a not-minor problem for autonomous cars and the Tesla-style autopilots / adaptive cruise controls that depend on vision only. You have to program it to ignore some types of things that seem like they might be an obstruction, such as road signs, debris in the road, etc. so they don't hit the brakes unnecessarily.
Both adaptive cruise controls and human drivers do this by default. If you're doing 60 MPH on a highway and something pops into the periphery of your vision that you don't recognize, do you slam on the brakes? No.
Of course I do slow down; there's a whole load of possibilities between slamming the brakes and getting rear-ended, and driving at the posted limit through a dangerous situation, you know? A car can slow down gently.
It's a 60 zone and rain or smoke impairs visibility? Slow down.
It's a 45 mph zone and something that's not a motor vehicle is in a lane that's supposed to only have motor vehicles on it? You slow down until you make sense of the situation.
You're near a playground and a mother is walking a child on the other side of the road and you can't see if she's holding their hand? You slow down.
A person walks near the kerb and isn't looking in your direction? You slow down. A bike is loaded with groceries? A car acting erratically? A person being pulled by his dog? A bus stopped, unloading people? You don't drive past them at 30 mph.
If people did the super simple stuff, we'd have boundless peace, prosperity, liberty.
I remember a story - stop me if you've heard this one - about a God helping out a group of desperate people, freeing them from slavery, parting seas, feeding them in the desert. They were camped at the foot of a mountain with the God right there on top - right there! And they built the golden calf anyway. And that rule seems easier than all the other 9. WTF did they even need a golden calf for?
So sadly, the criterion of simplicity is irrelevant - people will find a hard way to do it.
I'm not sure why your comment seems to be grey-ed out. Here's the relevant section from the report:
"As the vehicle and pedestrian paths converged, the self-driving system software classified
the pedestrian
as an unknown object, as
a vehicle,
and then as
a bicycle
with varying expectations of future
travel path."
1. It seems like the classifier flipped state between pedestrian/unknown object/vehicle/bicycle; this seems like one of the well-known issues with machine learning. (I'm assuming the classifier is using ML simply because I have never heard of any other (semi-?) successful work on that problem.)
I suggest that the problem is that the rest of the driving system went from 100% certainty of A to 100% certainty of B, etc., with a resulting complete recalculation of what to do about the current classification. I make this hypothesis on the basis of the 4+ seconds when the car did nothing, while a response to any of the individual possibilities would possibly have averted the accident.
2. If the classifier was flipping state, I assume the system interrupted the Decide-Act phases of an OODA loop, resulting in the car continuing its given path rather than executing any actions. This seems like a reasonable thing to do, if the system contains no moment-to-moment state. Which would be strange; it seems like the planning system should have some case for having obstacles A, B, C, and D rapidly and successively appearing in the same area of its path.
3. Assuming the classifier wasn't flipping state, but presenting multiple options with probabilities, I can see no reason why the car wouldn't have taken some action in the 4+ seconds. (I note that the trajectory of the vehicle seems to move towards the right of its lane, which is a rather inadequate response and likely the wrong thing to do for several of the classification options.)
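To make points 2 and 3 concrete, here's the kind of structure I have in mind (purely hypothetical, not Uber's actual pipeline): keep a persistent track per object, and treat "something -- whatever the current label -- has been converging on my path for a while" as actionable, instead of replanning from scratch every time the label flips.

    # Hypothetical persistent track that survives label churn.
    MUST_NOT_HIT = {"pedestrian", "bicycle", "vehicle", "unknown"}

    class Track:
        def __init__(self):
            self.labels = []                 # e.g. ["unknown", "vehicle", "bicycle"]
            self.converging_frames = 0       # consecutive frames on a converging path

        def update(self, label, is_converging):
            self.labels.append(label)
            self.converging_frames = self.converging_frames + 1 if is_converging else 0

        def requires_slowdown(self):
            # Converging for ~0.5 s worth of frames, and every hypothesis so far is
            # something we must not hit -> start shedding speed now rather than
            # waiting for the classifier to settle on a single answer.
            return (self.converging_frames >= 5 and
                    all(lbl in MUST_NOT_HIT for lbl in self.labels))

    t = Track()
    for lbl in ["unknown", "unknown", "vehicle", "bicycle", "bicycle"]:
        t.update(lbl, is_converging=True)
    print(t.requires_slowdown())             # True after half a second of converging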
"According to Uber, emergency braking maneuvers are
not enabled while the vehicle is under computer control, to
reduce the potential for erratic
vehicle
behavior."
That's just idiotic and would be nigh-criminally unprofessional in most engineering situations.
I have that in my 2017 Volkswagen. It brakes by itself in normal operation, gives very loud feedback if intervention is needed, but does not brake if the intervention would be unexpected / probably erroneous; only audio then.
I hope but do not know if the frequency and circumstances are logged and sent to Volkswagen when at the shop. Don't expect it though since it will first see the shop after 2 years. That would be too long for an improvement cycle.
Even in the far future when all cars are Level 5, they will still be developed by error-prone humans, and I'm still gonna teach my kids to look both ways before they cross the street.
Absolutely; the thing is, humans become habituated to repeated alerts. Even if you make a flashing red hazard symbol and have it audibly screaming at the driver, enough false alarms will delay our response over time and require us to look at the dashboard and interpret the message, which burns precious seconds. Coupled with slow human reaction time, 6 seconds will be tough to react within safely. We need proper autonomous controls doing their job safely without blaming us when they fail.
It really sounds like when this change was put into place, there were two people in the car: one monitoring the road in the driver position and a second monitoring the vehicle and systems in the passenger seat (with the screen). Once they decided to eliminate the second position and make the driver do both, they should have either let the car apply brake or, as you say, provide a loud alert.
It should have alerted anyway in the beginning. My guess is that either the alert would go off all the time which is why they didn't code it up or they didn't think about it during requirements analysis.
I understand that emergency maneuver system was disabled so the car did not brake between t minus 1.3 and t. But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?
Based on a TED talk by someone from Google[1], I think having the car apply the brakes when there's a possible disturbance causes the car to apply the brakes too much and makes for a really uncomfortable ride.
I think a major effort in self-driving is solving the Goldilocks issue of reacting properly to impending accidents, but also not applying uncomfortable braking if it's not needed.
Seems like it was too insensitive at that distance.
There's the third option of slowing down. That's what most human drivers do subconsciously when we see something that we're having trouble identifying, and feel it could turn into an obstacle.
This too. An electric car would simply decelerate by letting off the gas pedal and having the regen kick in. On a gas car this would be equivalent to partial braking.
Shouldn't the brake application be not boolean but 0-100% strength based on confidence levels?
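Something like this, conceptually (a toy mapping with invented thresholds; a real controller would also have to respect jerk limits and the traffic behind it):

    # Toy mapping from collision confidence (0..1) to brake strength (0..1),
    # instead of all-or-nothing emergency braking.
    def brake_strength(collision_confidence):
        if collision_confidence < 0.2:
            return 0.0                       # ignore low-confidence noise
        if collision_confidence > 0.9:
            return 1.0                       # full emergency braking
        # ramp in between: shed speed gently while the uncertainty resolves
        return (collision_confidence - 0.2) / (0.9 - 0.2)

    for c in (0.1, 0.3, 0.6, 0.95):
        print(c, round(brake_strength(c), 2))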
> I think a major effort in self-driving is solving the Goldilocks issue of reacting properly to impending accidents, but also not applying uncomfortable braking if it's not needed.
This is also an issue with current production automatic braking systems. One that was largely solved with on-track testing and on-road testing, logging false triggers with a driver driving. There's no need to risk lives unless you're just cutting corners to avoid the cost of a test track.
Because then the car would brake every time anyone approached the car from an angle (which is constantly). Think every intersection ever, every time driving near a sidewalk ever. The car would be herky/jerky as crap.
They should have had it set to spike the brakes once collision was imminent though; that's (maybe) the biggest programming omission here.
They should have set it to slow down gradually when approached from an angle at T-6 and then speed up once past the intersection risk, so that when the scenario emerged at T-1.3 it could emergency-stop safely.
> Think every intersection ever, every time driving near a sidewalk ever. The car would be herky/jerky as crap.
I'm not sure that'd be a huge issue. The vectors have to be intersecting first of all, which most vectors emanating from sidewalks wouldn't be, and then a little hysteresis would smooth out most of the rest.
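By "a little hysteresis" I mean something like a two-threshold latch, so the threat flag doesn't flicker on and off as predicted paths graze each other (thresholds here are made up):

    # Two-threshold latch: engage caution at a high risk score, release it only
    # once the score has dropped well below that, so the flag doesn't flap.
    class ThreatLatch:
        def __init__(self, on_threshold=0.7, off_threshold=0.4):
            self.on_t, self.off_t = on_threshold, off_threshold
            self.active = False

        def update(self, risk_score):
            if not self.active and risk_score >= self.on_t:
                self.active = True
            elif self.active and risk_score <= self.off_t:
                self.active = False
            return self.active

    latch = ThreatLatch()
    print([latch.update(r) for r in (0.5, 0.72, 0.6, 0.65, 0.3)])
    # -> [False, True, True, True, False]: no flapping around a single threshold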
I don't know if you've been to New York or any other place where people walk, but vectors would absolutely be intersecting on a regular basis, right up until shortly before the pedestrian stops. Constantly I walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car. But I stop at the end of the sidewalk, before the road begins, so the vector changes to zero in those last three seconds. It would be super weird if cars braked at the intersection every time this happened. Cars would be braking at every major street on every major avenue, constantly.
What's actually needed here is some notion of whether the pedestrian is paying attention and will correctly stop and not intersect the path of the car. Humans are constantly making that assessment based on sometimes very subtle cues (is the person looking at/talking on a phone, or are they paying attention, for example).
Yeah, eye contact is a very important signal. Maybe there needs to be some specialized hardware to detect eyes and determine the direction they're looking in.
We use eye contact because we can't infer what another person is thinking and we can't react quickly enough to their actual movements at car speeds. This latter isn't the case with automated vehicles, so eye contact shouldn't be necessary, as long as you get the vector algorithms right.
> Constantly I walk toward an intersection where, if I kept going for three more seconds, I would be pasted to the street by a passing car.
These autonomous systems are evaluating surrounding vectors every few milliseconds. A timescale of 3 seconds simply isn't important, as they would instantly detect you slowing down and conclude that you wouldn't intersect with their vector.
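As a toy version of what "evaluating surrounding vectors" means here (constant-velocity extrapolation with invented numbers; real systems are far more elaborate):

    # Will the pedestrian and car arrive at the crossing point at about the same time?
    # Re-evaluated every sensor cycle; the 2-second margin is an arbitrary choice.
    def crossing_conflict(ped_dist_to_lane_m, ped_speed_m_s,
                          car_dist_to_crossing_m, car_speed_m_s, margin_s=2.0):
        if ped_speed_m_s <= 0:               # pedestrian stopped or moving away
            return False
        t_ped = ped_dist_to_lane_m / ped_speed_m_s
        t_car = car_dist_to_crossing_m / car_speed_m_s
        return abs(t_ped - t_car) < margin_s

    # Walker 3 m from the lane edge at 1.4 m/s; car 40 m out at ~43 mph (19.2 m/s).
    print(crossing_conflict(3.0, 1.4, 40.0, 19.2))   # True -> start slowing
    # The moment the walker stops, ped_speed drops to 0 and the conflict clears.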
> But why didn't it brake from t minus 6 to t minus 1.3? Looks like it detected that the car's and object's paths were converging, so why didn't it brake during that interval?
You're missing the context of this thread. The software in the Uber car has a clear failure condition. That has nothing to do with whether it's possible to infer such vector collisions without jerky driving, which is the point I'm addressing.
The question was why the car doesn’t brake early. “Because scanning every few milliseconds” is not an answer. Scanning frequency is irrelevant to the fact that emergency braking is not a reasonable strategy in general.
Safe driving often does require slowing down in the face of insufficient information. If a human driver sees an inattentive pedestrian about to intersect traffic, they will slow down. “Drive until collision is unavoidable” is a failing strategy.
And anyway, jerky driving is a symptom of late braking, not early braking.
> And anyway, jerky driving is a symptom of late braking, not early braking.
I see it as more than just jerkyness, I see a massive safety issue in traffic. If your autonomous car is slamming on the brakes spontaneously there's a lot more opportunities for other drivers to plow into you from behind.
They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).
That being said, I'm happy to find my assumptions about stopping time are incorrect and a car traveling at 25mph can stop in less than a second. So on busy NYC streets this wouldn't be an issue. Even at 50mph it appears that stopping time is sub 3s, so the vehicle could probably have avoided this collision if it were running a more intelligent program.
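The rough kinematics, assuming ~0.8 g of braking and ignoring reaction latency:

    # Stopping time/distance at an assumed 0.8 g, plus the distance covered in the
    # 6 seconds between first detection and impact in the Uber case.
    MPH = 0.44704
    decel = 0.8 * 9.81
    for mph in (43, 50):
        v = mph * MPH
        print(f"{mph} mph: stops in ~{v / decel:.1f} s over ~{v * v / (2 * decel):.0f} m")
    print(f"distance covered in 6 s at 43 mph: ~{43 * MPH * 6:.0f} m")
    # ~2.4 s / ~24 m to stop from 43 mph, versus ~115 m of warning distance.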
> They can't detect me slowing down before I start slowing down. So if it's t-4 until impact and I'm still moving at full speed, they would need to start braking now if they can't stop in 4s (assuming the worst case that I continue on my current trajectory).
Right, collision prediction is basic physics, accounting for the stopping time and distance of pedestrians and cars. So the question is whether pedestrians on sidewalks really have so many collision vectors with traffic that autonomous vehicles would be jerky all of the time, as the initial poster suggested.
I claim reasonable defaults that would far outperform humans on average wouldn't have that property. Autonomous vehicles should be programmed to follow the rules of the road with reasonable provisions to avoid collisions when possible.
I think the situation might be all the transitions. All the time, people on bikes switch from a driveway to a bike lane, during which they could continue straight into the road. Or people step out of a building and walk diagonally across a large sidewalk; they could keep going straight into the road.
Which simply means it is because the car's AI isn't good enough to classify that object as something it should slow down for versus something it can ignore (like an empty plastic bag drifting across the road.)
Well, it is good enough; it's just that it develops that confidence over a period of several seconds. In this case it took until T minus 1.3 seconds to decide "ok, this is something we should stop for".
I don’t know their internals, but from that report it looks like their recognition system is probabilistic and routinely hops back and forth between “car collision RED ALERT!” and “lol there’s no problem”. If it were to randomly slam on its brakes every other second then it would cause all kinds of other accidents.
Sensors were fine, victim was detected, software was crappy.
That's vanilla testing edge-case stuff, really, and it's known that Uber are unter when it comes to this, but the removal of all the useful safety layers after that (braking, alert, second human, hardware system) is reckless and stupid.
Well, exactly. Nothing wrong with the sensors, and the classifier was getting consistent pings. This is the critical failure that led to the crash, as much as the shitty final-second emergency non-process.
I observe that cars and other objects often come very close to each other, so it would seem impossible to simply brake based on "converging paths". It's necessary to know what an object is and how it's going to behave. If you don't, I don't see how you can go anywhere.
People slow down from 35 to 30 or whatever. Humans don't slow down to the point at which an accident is physically impossible for all unexpected movements, because that would be zero considering there are frequently objects within feet or inches.
Self driving cars can emulate humans, but that won't bring them to human level performance without the ability to model other actors. If they try to mathematically rule out the possibility of accidents without such models, they won't be able to go anywhere.
That's not how traffic works at the moment. I often have pedestrians walk towards the street and I assume they will stop so I don't slow down unless they are children or similar. Almost every day I could hit pedestrians if they kept walking.
You should slow down. Those people you are describing sound like they want to cross the street, and they probably have right of way, so yield for them.
Also, my two year old will sometimes walk towards the curb, but she is very good with streets, so I am not worried. She always stops and waits to hold someone's hand before crossing. This behavior freaks some drivers out, causing them to slow or come to a complete stop, which is the nicest outcome because then I can take her hand and cross the street. When I am walking by myself drivers rarely yield even as I am stepping into the street, even at marked crossings.
I guess my point is if my two year old exhibits the behavior you ascribe to a hypothetical non-child pedestrian then how can you be sure your hypothetical pedestrian won't just "keep walking"? What if they are blind, or drunk, or reckless? Perhaps you have been lucky before and never struck a pedestrian but I strongly urge you to assess your behavior. Stop for pedestrians, it's the nice thing to do and it's probably the law where you live.
You make me sound like a crazy driver :). Just watch the traffic along a busy road with pedestrians on the side. Nobody slows down if the pedestrians behave the usual way. You also often have pedestrians walk into the road and just stop right before the zone where cars are. Nobody slows down, because they see that the pedestrian is observing traffic.
I have made this observation; I even mentioned it in my comment. I am urging you and anyone else who reads my words to assess your behavior so as to effect a change in the status quo. Sometimes pedestrians do walk into the road and they do get struck. The only foolproof way to prevent this is to change driver behavior, which is why the requirement to yield to pedestrians at intersections is codified into most laws.
When I am driving I often see people standing at the curb just staring at their smartphone. Usually these people are wasting time because they don't expect traffic to stop. When I stop for them they are usually pleased, they cross the road and get on with their life. Sometimes these people are just waiting for an Uber or something, when I stop for them they get confused and look at me funny. I don't mind, I just smile at them and resume driving. I am in a car, so I can accelerate and travel very quickly with almost no effort. It is no trouble for me to spend a few seconds stopping for a false positive.
In the context of self-driving cars though, they can't read expressions and exhibit them. They can't necessarily even say whether something is a person or not. So your driving methods are not applicable. A computer can say "given the physics of how an average person can move, it is possible for them to leap in front of me in X amount of time" and then what? I think that a self-driving vehicle that follows your principles without your analytic ability is going to have so many false positives it will be useless. And I think the fact that they aren't attempting to follow your principles is evidence they don't have the ability.
I agree, and perhaps self-driving cars are not yet ready for "prime-time". The solution for the current state of the art might also just be maintaining a lower speed with automated drivers, which may also necessitate limiting the types of roadway on which they can operate. A slower average speed shouldn't be a big problem for automated cars since they don't experience the frustration of human drivers. Given wide-enough adoption, accommodations can be made to traffic signalling apparatus, car-to-car communication, and car-to-cloud integration to develop near-seamless traffic flows, allowing shorter travel times even at slower speeds. From here the tech could be iteratively improved to provide faster speeds without compromising safety.
This only works if you don't have other cars run into you when you slow down unexpectedly. I am not saying that current traffic is sane, but just saying "always slow down when you see pedestrians" just doesn't reflect reality. In CA you often have speed limits of 45 right next to houses with driveways. Either you play it safe and go 20 or less and get cursed at by other cars, or you go way too fast to respond to unexpected obstacles.
When I am driving I routinely check my rearview mirror and assess the following distance of the cars behind me, so I usually know I will not be rear-ended when I am stopping for pedestrians or for any other reason. If I am driving and I notice someone is following too close for our speed I will tap the brake lights so as to encourage them to increase their following distance. If this fails I will slow down to a speed where their following distance becomes appropriate. If they are an uncommonly aggressive driver I might even pull over or change lanes and allow them to pass, I certainly don't want to be rear-ended! That said, even if I were to fail at this, I would prefer to be rear-ended stopping for a pedestrian that would have stopped than to strike a pedestrian who failed to stop walking into the path of my vehicle.
The speed limit is a reference to the maximum allowable speed of the roadway, not the minimum, only, or even recommended speed.
You clearly have never driven on a busy four-lane street in LA with bicycles and pedestrians mixed in. What you are saying makes sense in theory but nobody drives that way.
I do drive this way, most often in Seattle which has no shortage of the behaviors you're referencing. You can drive safely too because you are in control of your vehicle.
I didn't always drive like this, but I was in an accident that was my fault that totally upended my life, so I made an effort to change my ways. You can do it too, before you get in an accident that sets your life back, or irreparably shatters it...
I recommend taking an advanced drivers education course if you seriously decide you want to improve your driving. A lot of this stuff is covered.
Stopping unexpectedly does not cause accidents, locking down on your brakes unexpectedly causes accidents. In fact, in the US the person who hits you from behind will be held at fault no matter how hard you hit your brakes or why, because they are expected to maintain sufficient distance and attention to stop when you do.
If a light application of the brakes causes the car behind you to slam into you, the fault is with the idiot tailgating you, while playing with their phone.
Nobody's suggesting that anyone should slam the brakes every time a moving object intersects your vector of motion.
> I often have pedestrians walk towards the street and I assume they will stop so I don't slow down unless they are children or similar. Almost every day I could hit pedestrians if they kept walking.
With respect, I don't know what you meant to say but that sounds like a description of a bad (or at least inconsiderate) driver to me.
In any case, when I think about how I would design a self driving car, an "auto-auto", the first principle I came up with was that it should never travel so fast that it couldn't safely slow down or brake to avoid a possible collision. This is the bedrock, foundational principle.
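As a toy illustration of that bedrock principle (my own sketch and assumed numbers, not anyone's actual control law), you can cap the commanded speed by the distance you can guarantee is clear:

```python
# Toy version of "never outdrive your stopping distance". The deceleration
# and reaction-time figures are assumptions for illustration, not values
# from any real autonomous system.
import math

ASSUMED_DECEL = 0.7 * 9.81   # m/s^2, conservative braking capability
REACTION_TIME = 0.5          # s, assumed sensing + actuation latency

def max_safe_speed(clear_distance_m):
    """Largest v such that v*t + v^2/(2a) <= clear_distance (solve the quadratic)."""
    a, t = ASSUMED_DECEL, REACTION_TIME
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_distance_m)

for d in (5, 15, 40, 80):
    print(f"{d:>2} m of guaranteed-clear road -> at most ~{max_safe_speed(d) * 2.237:.0f} mph")
```

The trade-off is obvious from the output: with only a few metres of guaranteed-clear road you crawl, which is exactly why mixed-environment vehicles end up taking the fuzzier, probabilistic approach discussed upthread.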
> I often have pedestrians walk towards the street and I assume they will stop so I don't slow down
> the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path
Little different, huh? If you see something that looks like it might be in your way, and you aren't sure what it is, you just keep going?
And if I see a pedestrian in the middle of the road at a random spot, especially at night, I'm slowing down since I don't know WTF they're thinking. Or if I'm in a neighborhood with regular street crossings carved out of the sidewalk and someone's coming up to one of those - I don't know how well they're paying attention to their surroundings.
I can tell pretty quickly if something is pedestrian or a bicycle and plan accordingly. In addition I can tell where the pedestrian is looking. Sometimes I slow down, sometimes I don't depending on how I assess the situation.
I think it comes down to the fact that the classification algorithms are not ready for primetime.
With significant fuzz factor, agreed. If I'm on the sidewalk and take a step toward the road, should it make the car jerk? Probably not; it's a hard call for passenger comfort. From another angle, think of subway tracks - the algo you're describing would slow to a crawl as it crosses every station.
> From another angle, think of subway tracks - the algo you're describing would slow to a crawl as it crosses every station.
As far as trains go, they do slow down when passing through a track adjacent to a platform. There are some non-platform-adjacent tracks the train companies use to avoid slowing down; however, they will slow down or even stop if something is going on.
Similarly, high speed rail doesn't have level crossings due to safety considerations. Overall trains are very safe and they are _designed_ for safety. It is highly irresponsible and immoral to just wing it with people's life/safety.
> It is highly irresponsible and immoral to just wing it with people's life/safety
100% agree.
> As far as trains go, they do slow down when passing through a track adjacent to a platform. There are some non-platform-adjacent tracks the train companies use to avoid slowing down; however, they will slow down or even stop if something is going on.
The equivalency isn't 'trains slow down through stations' (that would be cars having a lower speed limit in pedestrian areas, which they do and the Ubers honor); it would be 'train spikes the brakes if someone takes a step toward the edge' (which they don't, even though it would potentially save lives).
There's always a tradeoff between usability and absolute safety. I'm not saying the Uber did nothing wrong; at a minimum it should have spiked its brakes. The 'perfect world' solution would be the Uber knowing the mass and momentum of approaching objects, and whether they could stop in time. But honestly, would that have helped here? We'll never get rid of people walking in front of moving cars; we just have to find the happy balance (which we clearly haven't).
> 'train spikes the brakes if someone takes a step toward the edge' (which they don't, even though it would potentially save lives).
A train's deceleration under maximum braking is far, far lower than a car. [1] suggests 1.2m/s² (paragraph 8).
[2] says the deceleration of a low-speed train crashing into the buffers at the end of the line in a station should not be more than 2.45m/s² (paragraph 35). That caused "minor injuries" to some passengers.
Trains do slow down earlier if the platform they are approaching is very crowded, but there's not really anything else they can do.
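To put those decelerations in perspective, here is the simple kinematics with the cited train figure and an assumed ~0.8 g for a car (the car figure is my assumption, not from the linked sources):

```python
# Stopping distance d = v^2 / (2a) for the decelerations discussed above:
# 1.2 m/s^2 is the cited train service-braking figure; 7.8 m/s^2 (~0.8 g)
# is an assumed figure for a car on dry pavement.
def stop_dist(speed_kmh, decel_ms2):
    v = speed_kmh / 3.6
    return v * v / (2 * decel_ms2)

for speed in (60, 100):
    print(f"{speed} km/h: train ~{stop_dist(speed, 1.2):.0f} m, car ~{stop_dist(speed, 7.8):.0f} m")
```

At 100 km/h the train needs on the order of 300 m to stop versus roughly 50 m for the car, which is why "spiking the brakes for someone near the platform edge" isn't a meaningful option for rail.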
> If I'm on the sidewalk and take a step toward the road, should it make the car jerk?
No. That's my point. Take less drastic measures earlier, and only escalate when you have to. That's how I drive, and a self-driving car can do the same.
>At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).
>The system is not designed to alert the operator.
>The vehicle operator intervened less than a second before impact by engaging the steering wheel.
>She had been monitoring the self-driving system interface
It seems like this was really aggravated by bad UX. Had the system alerted the user, and had the user had a big red "take whatever emergency action you think is best, or stop ASAP if you don't know what to do" button to mash, this crash would have had a much better chance of being avoided.
Things coming onto the road unexpectedly isn't exactly an edge case when it comes to crash-causing situations. I don't see why they wouldn't at least alert the user if the system detects a possible collision with an object coming from the side and the object is classified as one of a certain type (pedestrian, bike, other vehicle, etc.; no need to alert for things classified as plastic bags).
I don't see why they disabled the Volvo system. If they were setting up mannequins in a parking lot and teaching the AI to slalom around them I can see why that might be useful but I don't see why they would want to override the Volvo system when on the road. At the very least the cases where the systems disagree are useful for analysis.
Humans cannot react fast enough. There was just 1.3 seconds allowed. Drivers ed teaches 2 second following distance for a reason: it takes you most of that time to realize there is a problem and get your foot on the brake. In the best case a human would just hit the "okay to stop" button as the accident happened.
Of course cars cannot change speed instantly anyway - it is likely that even if the button was hit in time the accident was still unavoidable at 1.3 seconds. The car should have been slowing down hard long before it knew what the danger was. (I haven't read the report - it may or may not have been possible for the computer to avoid the accident)
At 40 mph 1.3 seconds is 76 feet, right around the threshold of stopping distance if the computer slammed on the brakes at that moment. At the very least, it's the difference between an ER visit and a fatality. Far too short a time for a human to react to a warning, though.
Sure. What I mean is that if they expect to correct this with some kind of "oh shit" alarm for the driver, 1.3 seconds isn't enough time to do anything helpful.
The whole thing is nuts. Imagine a human driver seeing the same thing the computer did, and responding the same way: they'd be in handcuffs.
Average reaction time for humans for a visual alarm is 1/4 second. So more than a second left. And that's a second at the current speed. At 43 mph (the speed reported by the NTSB), that's about 20 meters. With 1 g of braking for 1 s, the car's speed could have been cut roughly in half, and the person on the street would have had a fraction of a second more to jump away, alerted by the sound of the car braking.
Reaction time is only part of the picture. First you recognize the issue. Then you have to move your body. In court they generally use 1.5 seconds for all of this. Your best case is .7 seconds, but this situation was clearly not the best case - even if the human had been paying attention it would be much worse.
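For concreteness, here are the rough kinematics of that 1.3-second window, taking the 43 mph figure from the report but assuming a 0.8 g deceleration and zero actuation delay of my own choosing; this is a sketch, not a reconstruction of the actual crash:

```python
# What full braking starting at t-1.3 s could have changed. 43 mph is the
# speed reported by the NTSB; the 0.8 g deceleration and zero actuation
# delay are my assumptions. (At these numbers the car cannot fully stop in
# 1.3 s, so the constant-deceleration formulas below stay valid.)
V0 = 43 * 0.44704        # initial speed, m/s
A = 0.8 * 9.81           # assumed deceleration, m/s^2
T = 1.3                  # seconds available

coast_distance = V0 * T                      # distance covered with no braking
impact_speed = max(V0 - A * T, 0.0)          # speed at impact with full braking
braked_distance = V0 * T - 0.5 * A * T * T   # distance covered while braking

print(f"No braking:   {coast_distance:.0f} m covered, impact at 43 mph")
print(f"Full braking: {braked_distance:.0f} m covered, impact at ~{impact_speed / 0.44704:.0f} mph")
```

Under these assumptions the car still hits, but at roughly 20 mph instead of 43 mph, which is the "ER visit versus fatality" difference mentioned above.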
Even more egregious is that the governor, Doug Ducey, was happy to issue executive orders waiving any safety oversight to allow Uber to put the public at risk in the first place.[0]
I hope you're right, but that's only part of the solution. Everybody in that reporting chain should be looking down the barrel of consequences. Proportional to their level of control, but harsh to be sure. Implementors need to have it made very clear to them as well that no, just-following-orders isn't enough.
I have fired clients for doing reckless and stupid things orders of magnitude less reckless and stupid than what Uber has done here, and I would hope that I would walk the hell out were I confronted with "we disabled the brake for a smoother ride and then disabled the alarms because they were too noisy". Do thou likewise, yeah?
I think this case is a bit too hard to prove much more than negligence, but establishing criminal liability would send the right message that you gotta do this right before unleashing it on public streets.
I don't think so. The police have said that 1) the pedestrian was at least partially at-fault for not crossing at a crosswalk and 2) given the circumstances, the same outcome would have occurred with a human driver.
>2) given the circumstances, the same outcome would have occurred with a human driver.
The police are in no position to assert that, nor do they know whether or not Uber is guilty of negligence. Police do not bring charges and they're not running the investigation.
If you’re gonna test something like this on public roads, there need to be better engineering failsafes in place.
The place for the product folks to override safety features is the test track. If the feature didn’t work, they should have pulled the drivers because they were not trained to properly operate the machine.
If you give the “driver” training on a car with an autonomous braking system, then give them a car without it, that’s not on the driver. Someone was negligent with safety in regards to the entire program.
I’m not saying anyone needs to go to jail over this, but there do need to be charges IMO. Personal liability needs to be involved in this or executives will continue to pressure employees to do dangerous things.
Do you have a link or quote for 2)? I was under impression that while the video looks dark, it wasn't quite so dark in reality and human driver would have fared better (if they were driving instead of checking console every 5 seconds, that is).
> "It's very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway," Moir told the San Francisco Chronicle after viewing the footage.
The dashcam footage has poor dynamic range and is not representative of what a human driver would have seen. (This has been pointed out repeatedly in previous discussions here on HN — I'm saying that not to chide you, but just to establish it as a fact.)
I'm not convinced that (2) is literally true — that the pedestrian would have been likely to be killed in this particular instance — but: she was strolling nonchalantly across a four-lane roadway with a 45mph speed limit, in the dark, with dark clothing on, and paying not the slightest attention to oncoming traffic. I'm sure that if she did that regularly, sooner or later she would have had at least a close call.
This was discussed extensively here after the event happened. It's not pitch black around there and a number of people have recorded videos driving at night through that exact area and the entire road is well lit enough to see a person with a bike on the road. The low-fidelity CCD video Uber posted in the immediate aftermath is not representative of human vision or (apparently) the sensors that Uber had on the vehicle.
Right, and I said exactly the same thing elsewhere [0], but I still think she was taking a big chance by being so oblivious. A 45mph speed limit means some people will be doing 55. Stroll casually across a roadway like that enough times, and you will eventually force a driver to swerve around you at high speed or make a panic stop or at the very least blast you with the horn.
I can't imagine doing what she did — even thoroughly stoned, as she may have been (she tested positive for methamphetamine and marijuana), I would have more sense of self-preservation than that.
"the pedestrian was at least partially at-fault for not crossing at a crosswalk"
That should be irrelevant. Even if the pedestrian is jay-walking, it's still not legal to hit them. Further, having solid evidence that the car detected the pedestrian and did nothing to avoid her mitigates the pedestrian's responsibility, no?
Also, the "center median containing trees, shrubs, and brick landscaping in the shape of an X" sure looks like it should have some crosswalks, from the aerial photograph. What's it look like from the ground?
Human drivers are also charged when they kill people, so (2) doesn't seem to have any weight.
And partially-at-fault, on one hand, means there's fault on the driver side too, and on the other hand, is a judge's decision to make, not the police, no?
So, in the case that emergency braking is needed, nothing is designed to happen and no one is informed. I guess they just hoped really hard that it wouldn't murder anyone?
I suspect that Uber was optimizing for the common case, which is normal traffic conditions, and didn't want their emergency braking accidentally firing during normal driving causing rear endings.
So the question becomes why couldn't they get emergency braking solved before driving on the road? Maybe that requires collecting good data first, for training the system?
You don't make things safe by optimizing for the common case, and "it can't be safe until after we have tested it on the road" is not a valid reason to allow testing of an unsafe vehicle on public roads.
Considering that 1) you are supposed to keep sufficient distance to brake even if the vehicle/generic object in front of you suddenly freezes in space, e.g. a massive wall flush with the rear end of the vehicle in front of you suddenly appears, and 2) the only things that could conceivably rear-end an emergency-braking Uber/Volvo would be bikes and other >=4-wheel vehicles (cars/trucks/etc.), which either drive carefully anyway ((non-)motorized bikes) or have a low probability of human damage (cars/trucks), false positives should be preferred to false negatives by somewhere between 5:1 and 1000:1 (the latter only if the following vehicles are civilized enough to keep their distance). The car could figure this in and compute probabilities for what damage an emergency maneuver would cause, which means that it'd brake for a stray cat if it's otherwise alone on the road and the surface is dry, but it won't brake for a wolf if it's tailgated by a truck; it might even accelerate to make up for the loss of momentum (while lighting the rear brake lights, and stopping as soon as it registers the truck slowing down).
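One crude way to formalize that trade-off is an expected-cost comparison; all of the probabilities and cost weights below are invented for illustration and don't reflect any real system:

```python
# Toy expected-cost comparison of "brake hard" vs "keep going".
# All probabilities and cost weights are made up for illustration.
def should_brake(p_obstacle_real, cost_if_hit, p_rear_end,
                 cost_rear_end=50.0, cost_nuisance=1.0):
    """Return True if hard braking has lower expected cost than continuing."""
    cost_brake = p_rear_end * cost_rear_end + (1 - p_obstacle_real) * cost_nuisance
    cost_continue = p_obstacle_real * cost_if_hit
    return cost_brake < cost_continue

# Stray cat, empty dry road: braking costs almost nothing, so brake.  -> True
print(should_brake(p_obstacle_real=0.3, cost_if_hit=20, p_rear_end=0.01))
# Same cat, tailgated by a truck: expected rear-end harm outweighs it. -> False
print(should_brake(p_obstacle_real=0.3, cost_if_hit=20, p_rear_end=0.5))
# A probable pedestrian dominates everything: brake regardless.       -> True
print(should_brake(p_obstacle_real=0.3, cost_if_hit=10000, p_rear_end=0.5))
```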
> I suspect that Uber was optimizing for the common case, which is normal traffic conditions
That's like creating HTML forms that only work with the common use case and crash spectacularly on unexpected input. Except that this time it's fatal. That's not the kind of software quality I want to see on roads.
Why not license the software system from Volvo, which obviously works (people drive Volvos with this turned on every day without erratic braking), instead of disabling theirs because it was apparently broken?
This is beyond incompetence. There is a different level of software engineering when making a website vs making a pacemaker, rocket or flight avionics. You need the quality control of NASA, SpaceX or Boeing, not that of .. whoever they have running their self driving division.
I have a Subaru with EyeSight and it does strange things sometimes. For example, if I happen to be in the left (passing) lane going around a leftward curve and a car in the right lane is stopping or slowing to turn right, the Subaru will hit the brakes because due to the curve of the road, the right lane car is straight ahead. It's scared me a few times.
The other thing about the system that sucks is that it's all optical (AFAIK) so when visibility is poor, it shuts off. They need to add more sensors because those are the conditions I would most like an extra set of eyes.
> I suspect that Uber was optimizing for the common case
Yeah that's not how you design safety critical software. This isn't some web service. Either you're wrong (let's hope) or Uber is completely negligent.
Source: I write safety critical code for a living.
True, and I'm with you, though I'd bet every dollar I have that, if people routinely randomly slammed on their brakes we'd have a whole lot more rear end collisions. We don't expect people to do that, and when it does happen it's rare.
Travis, although no longer at Uber, should also be held responsible, as he was the driving force behind the policies and culture of 'breaking the law' to get ahead.
I hope at least one human being in touch with him tells Travis that he is directly responsible for the death of a fellow human being. The person that died deserves at least that few seconds of remorse.
> 1.3 seconds before impact ... emergency braking maneuver was needed ... not enabled ... to reduce the potential for erratic vehicle behavior
This wind-up toy killed a person.
Transport is a waking nightmare anyway. Every time you get in your car, every mile you drive, you're buying a ticket in a horrifying lottery. If you lose the lottery you reach your destination. If you win... blood, pain, death.
Into this we're setting loose these badly-programmed projections of our science-fiction.
- - - -
A sane "greenfield" transportation network would begin with three separate networks, one each for pedestrians, cyclists, and motor vehicles. (As long as I'm dreaming of sane urban infrastructure, let me sing the praises of C. Alexander's "Pattern Language" et al., and specifically the "Alternating Fingers of City and Country" pattern!)
My mom has dementia and is losing her mind. We don't trust her to take the bus across town anymore, and she hasn't driven in years. If I wanted an auto-auto[1] to take her places safely I could build that today. It would be limited to about three miles an hour with a big ol' smiley sign on the back saying "Go Around Asshole" in nicer language. Obviously, you would restrict it to routes that didn't gum up major roads. It would be approximately an electric scooter wrapped in safety mechanisms and encased in a carbon fiber monocoque hull. I can't recall the name now but there's a way to set up impact dampers so that if the hull is hit most of the kinetic energy is absorbed into flywheels (as opposed to bouncing the occupant around like a rag doll or hitting them with explosive pillows.) This machine would pick its way across the city like "an old man crossing a river in winter." Its maximum speed would at all times be set by the braking distance to any possible obstacle.
[1] I maintain that "auto-auto" is the obviously cromulent name for self-driving automobiles, and will henceforth use the term unabashedly.
"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior."
If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver" when you do that, you do not have self-driving software. You just have some software that is controlling a car that you know is an inadequate driver. If the self-driving software is not fully capable of replacing the driver in the car you have placed it in, as shipped except for the modifications necessary to be driven by software, you do not have a safe driving system.
The irony is that this is 100% what MobilEye said in their model demo, I think 3-4 years ago.
Their CEO said that regulators cannot regulate all braking conditions and should only test for false negatives, exactly to prevent this.
He stated that dealing with false positives is going to be in the best interest of every manufacturer; the easiest thing to do is to disable emergency braking completely, but that would be fairly easy to detect in any structured test or test drive.
Anything else would be a problem because you would be stacking tolerances and, worse, creating test cases with a clear conflict of interest where the interests of the car maker and the regulator align.
Mostly the message is that regulators shouldn't be investing their resources in checking whether the car brakes too often (from false positives in obstacle detection), because the car company has a strong incentive to reduce unnecessary braking, or driving would be unpleasant and slow for a large portion of drivers.
The regulators should only test for false negatives, where the car should have stopped but did not detect the obstacle, because there it is a clear threat to safety and the car company's incentive, while definitely still present, is less pure: the number of false negatives is a direct trade-off with the number of false positives (because it comes down to a threshold, a minimum confidence level above which you decide that there is indeed something in front of the car and you need to brake), and false positives make driving more awkward for 99% of drivers.
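The threshold trade-off being described is easy to see with a toy detector; the scores and labels below are made up purely to show how a single confidence cutoff moves errors between phantom braking and missed obstacles:

```python
# Toy illustration: one confidence threshold trades false positives
# (phantom braking) against false negatives (missed obstacles).
# Scores are made-up detector confidences; 1 = real obstacle, 0 = clutter.
detections = [(0.95, 1), (0.80, 1), (0.62, 1), (0.55, 0),
              (0.40, 0), (0.35, 1), (0.20, 0), (0.10, 0)]

for threshold in (0.3, 0.5, 0.7):
    fp = sum(1 for score, real in detections if score >= threshold and not real)
    fn = sum(1 for score, real in detections if score < threshold and real)
    print(f"threshold {threshold}: {fp} phantom-brake events, {fn} missed obstacles")
```

Raising the cutoff removes the phantom braking that annoys drivers, but only by converting those errors into missed obstacles, which is exactly the category the regulator should care about.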
Yep, this is the thing that worries me the most on some of these systems right now. They are going to get rear-ended a lot for the next few years, IMO.
Rear ends have basically no fatality rate though. They do have material costs but if optimizing for no loss of human life it sounds more appealing.
These kinds of tradeoffs were things every self-driving car software developer KNEW they were going to have to deal with, the most extreme being the one where the software has to decide who to kill and who to save.
Not sudden random braking, no. The most important advice for driving on slippery surfaces is to avoid sudden braking, and in general to be careful when you brake.
> If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver"
You misunderstood. The Uber software had sole control of the brakes (plus the human of course). The Volvo factory system was disabled so that it didn’t have negative interaction with the Uber system.
Your mistake is understandable. The article was poorly written, perhaps due to a rush to publish, as is the norm these days. Even if the NTSB report was unclear, that doesn’t excuse clumsy reporting.
If you’ve ever done significant mileage in a car with an emergency braking system you probably have experienced seemingly random braking events. The systems favor false positives over false negatives.
I wrote it carefully and I stand by it. If it can't deal with it because your system is getting too confused for any reason, you don't have a self-driving system. Being able to function enough like a human that the safety features on the car don't produce an unacceptable result is a bare minimum requirement to have a self-driving car.
This isn't horseshoes, as the old saying goes. I unapologetically have a high bar here.
Far as I can tell, the done thing in the industry is to disable all other safety systems (or to not have any in the first place) and to delegate safety entirely to the self-driving AI.
The charitable interpretation of this is that the industry believes that self-driving AI is a safety feature of greater quality than, say, lane assist or auto-braking.
The less charitable one is that they find it too much work to integrate their AI with other safety systems. Which, to be fair, really is going to be a lot of extra work, on top of developing self-driving.
>You misunderstood. The Uber software had sole control of the brakes (plus the human of course). The Volvo factory system was disabled so that it didn’t have negative interaction with the Uber system.
That's not how I read it, or how any of the journalists who are reporting the story are reading it. Uber disabled the self driving software's ability to do an emergency stop when it detected it was going to crash. The Volvo system is separate and also was disabled when the car was in self driving mode.
>Sensors on an Uber SUV being tested in Tempe detected the woman, who was crossing a street at night outside a crosswalk, eventually concluding “an emergency braking maneuver was needed to mitigate a collision,” the National Transportation Safety Board said in a preliminary report released Thursday.
>But the system couldn’t activate the brakes, the NTSB said.
That's the only reading that makes sense to me, otherwise why did the car fail to attempt to stop when it detected the pedestrian and knew it was going to hit them?
"At
1.3 seconds before
impact, the
self-driving system determined that
an emergency braking maneuver
was
needed
to
mitigate
a collision
(see figure 2). According to Uber, emergency braking maneuvers are
not enabled while the vehicle is under computer control, to
reduce the potential for erratic
vehicle
behavior."
I think you're wrong. The Uber software controls the brakes, thru whatever means they're controlling the car. It has to – the car has to stop somehow!
"emergency braking maneuvers" refers to an additional automated (software) system for automatically applying the brakes in an emergency (that's detected by that additional system).
> [2] In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2).
So, Uber's self-driving system can command normal braking, but it cannot "slam" the brakes. The other system, Volvo's, is deactivated whenever Uber's is active, so it cannot brake at all. Thus, since Volvo's is deactivated and Uber's won't brake if it judges that a deceleration of >6.5 m/s^2 is needed, it turns out that in automated mode the car actually lacks the ability to trigger emergency braking at all, hoping instead that the driver will somehow notice. And in a sadistic twist, no warning is given to the driver at any moment that they need to slam the brakes.
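Read literally, the report's description amounts to logic along these lines; this is a hypothetical sketch based only on the NTSB wording, and the function name and return strings are invented:

```python
# Hypothetical sketch of the behavior described in the NTSB report:
# decelerations up to 6.5 m/s^2 are executed, while anything harder counts
# as an "emergency braking maneuver" and is simply not commanded -- and,
# per the report, no alert is raised to the operator either.
EMERGENCY_DECEL_THRESHOLD = 6.5  # m/s^2, from the report's footnote

def command_braking(required_decel_ms2):
    if required_decel_ms2 <= EMERGENCY_DECEL_THRESHOLD:
        return f"brake at {required_decel_ms2:.1f} m/s^2"
    # Emergency-range deceleration: disabled under computer control.
    # The operator is relied on to intervene, with no alert issued.
    return "no braking commanded; rely on (unalerted) human operator"

print(command_braking(3.0))   # normal slowdown: executed
print(command_braking(8.0))   # emergency: nothing happens
```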
No, it would probably take a large part of that time to even react, especially if you are looking at a screen.
However, if the safety driver was trained to brake immediately upon warnings it could have worked quite well. But that would negate the removal of e-brake actuation.....
If the car cannot use emergency braking it could at least decelerate. It looks like some hack where the engineers just commented out the code for the brakes and left the decision to the driver.
The NTSB report is pretty unclear, I had to re-read it several times and I think you're correct that the "emergency braking maneuvers" they refer to are the Volvo ones. It's strange though that they word it as
> At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision
Is the Uber self-driving system able to interact with the Volvo system? Or are they calling the Volvo safety features a second "self-driving system"? And what were the steps, Uber realizes it needs an emergency braking maneuver and sends a signal to the Volvo system which then responds with "I'm disabled" ?
I understand why the Volvo features may be disabled, but it's alarming that the self-driving system made no attempt to brake at all when it was fully "aware" it would hit someone.
The report does mention the Volvo braking features by name a few paragraphs earlier though... So I'm still not entirely sure.
They definitely seem to be referring to the Uber self-driving system. The sentence "At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2)." has this footnote attached to it: "In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2)".
> The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
If what you said is true, then the vehicle operator would not be relied on to intervene because the Uber self-driving software would apply the brakes. Since the vehicle operator is relied on to intervene, this indicates the Uber self-driving software has its emergency braking disabled.
> If you’ve ever done significant mileage in a car with an emergency braking system you probably have experienced seemingly random braking events. The systems favor false positives over false negatives.
This is not my experience with VW's 2016-model-year system.
Sometimes it stops a second or two before I would've. I haven't had any false positives, though.
I presume the expectation was that the human driver was supposed to be fully engaged, ready to take evasive action if and when required. What jumps out at me is the fact that the system does not alert the driver when their intervention is required.
I'm not sure it matters, because going from disengaged to fully engaged and actually braking to a stop in 1.3s is not happening for an average human. By the time the system alerts the driver, the ped is already dead.
But an attentive human would have noticed the pedestrian around the same time the system was confused, ~6s before impact. Enough time to swerve and stop.
At 6 seconds before impact the driver would surely assume the system will notice and brake in time. By the time the driver realizes that the automated system will NOT brake, it's very likely too late. Automatic systems which rely on humans braking in time have no place on the street, imho. This may be different for lane assist, where malfunctioning is more obvious and leaves more time for intervention, although even in that case the latest Tesla accidents may tell a different story.
> If the self-driving software is not fully capable of replacing the driver in the car you have placed it in, ...,you do not have a safe driving system.
Nobody disagrees with you and that is explicitly the reason why a human is on board, so I am not sure what you are arguing against.
I think it's very clear that a human driving a normal vehicle is different from a human sitting at the wheel of a self-driving (semi-self-driving?) vehicle. You simply cannot expect a human to remain as engaged and attentive in such a passive situation.
It's baffling to me that they chose to deactivate emergency braking without substituting a driver alert. If the false-positives are so frequent as to render the alert useless (i.e. it's going off all the time and you ignore it) I don't think these vehicles are suitable for on-road testing.
You can lean fairly heavily on the 'still dangerous, but better' argument in the face of 40,000 US vehicle fatalities each year, but there are limits.
Except it’s patently clear that is a flawed concept. Here’s a prime example where a human, not paying attention because the car has been successfully driving for a while, is given 1-2 seconds to emergency stop the car. That is not enough time to process what’s going on and take over. Even 10 seconds is likely not enough.
Worse, under Uber's regime the one human had to deal with both emergency situations (without warnings from the system) and instrumentation feedback (without a HUD), so the intended operation of this test was that the safety driver spends half the time looking down and away from the road.
I doubt the camera can give you a good idea of whether or not the 'driver' starts to daydream. They could be looking at the road and not paying attention. Worse, the 'driver' could be unaware they aren't paying attention, rendering their intention to stay alert moot.
Of course that is possible, but in this case the driver was clearly not looking ahead. I guess it is possible that this was the first time the driver did that, but I think that is unlikely.
On the contrary: the Volvo system is designed not to fight but to override any other input, including the human driver. In any other Volvo car currently on the road that detects an obstacle, the Volvo system will override the driver and bring the car to a stop even if the driver insists in hitting the obstacle: https://www.youtube.com/watch?v=oKoFalJiazQ
Independently operating, override-capable systems are the basis of engineering safety through redundancy. See airborne collision avoidance systems (ACAS), which will automatically and forcefully steer an aircraft to avoid a collision if necessary: https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...
I don't have experience with self-driving cars, Uber or Volvo safety systems, but my Jeep's backup collision prevention often slams the brakes if I'm backing up and the camera detects potholes, oil stains, or irregular painted lines in the parking lot.
If any of the systems are vulnerable to any such false positives, and equipped to enable emergency braking to avoid them, even on the highway, it's not hard to imagine why they might be disabled, especially during testing phases.
I think it's fair to say that at this early stage in product development, there's probably no 'always right' answer for how to handle a given obstacle without considering locality, driving speed, road conditions, likelihood of false positives and negatives, etc.
Yeah mine does that too. I wish they had a sensitivity option like they do with the forward collision sensors.
Hey there's an idea for Uber, maybe instead of disabling the forward collision system entirely they could just decrease the sensitivity to lessen false positives (like in our Jeeps)?
TCAS systems won't automatically take over. Instead they audibly issue an order to the flight crew, describing the action that must be taken, and the flight crew must take the action.
Instructions from TCAS are near absolute in their priority. If ATC says to do something different, you ignore ATC and do what TCAS says. If the pilot in command says to do something different, you ignore the pilot in command and do what TCAS says. If somehow God Himself is on your plane and tells you to do something different, you ignore Him and do what TCAS says. Compliance with TCAS is non-negotiable, and the Überlingen disaster[1] is the bloody example of why it's that way.
Self-driving/autonomous-car systems need to have a similar absolute authority built in. If Uber disabled theirs because of false positives, it's a sign Uber shouldn't be running those cars on public roads.
If the Uber engineers shared your confidence that it would play out that way, they would not have disabled it.
Redundant safety systems are a great idea, evidently Uber needed more of them, and integrating with the Volvo system might have been a reasonable option. It's silly to suggest that the integration would necessarily have been trivial, though. That's what I'm objecting to.
How many people are still alive because Uber had the restraint to not slap control systems together willy-nilly, like so many people here seem to think is an obviously great idea?
Yikes. I would not allow a business to continue to exist whose argument is “at least we’re only partially reckless”.
See how that holds up in civil court in front of a jury, or any legislative body Uber might need to convince to allow their operation in a jurisdiction in the future.
“Just think of how many more people we would’ve killed if we didn’t care at all!”
"at least we’re only partially wreckless" has been Tesla's standard operating procedure for a while now. I'm guessing you think Tesla shouldn't exist either.
Oh, they'll get crucified for it, I'm sure -- because the public doesn't understand that integration is never trivial, and that "obvious" integrations aren't always good ideas.
I had thought an audience of developers would "get it," since we deal with fallout from ill-conceived integrations every day, although admittedly in a far less spectacular form than control system engineers.
Unfortunately, the uber hate train has already left the station and there's no slowing it down until the investigation finds (or doesn't) actual evidence of negligence rather than clickbait guesswork by armchair engineers. Too bad.
> I had thought an audience of developers would "get it," since we deal with fallout from ill-conceived integrations every day, although admittedly in a far less spectacular form than control system engineers.
You do a disservice to the audience by assuming it would be understanding of grossly negligent behavior.
Failures happen, that is to be expected. If you're building self-driving vehicles, you're supposed to be engineering for those failures. Disabling two life safety systems (Volvo's AEB and Uber's own AEB) and relying on a single inattentive human driver? I don't understand how that's understandable or justifiable in any scenario besides a carefully controlled test track.
I think you’re misunderstanding. According to the report, it’s not just the Volvo emergency braking system that was disabled. Uber’s self-driving system had its own emergency braking feature that was also disabled.
How many people were not killed by Uber? That's a terrible way to look at things. It is better to wonder _why_ Uber killed a person. The answer appears to be right in the NTSB report about the brake system.
However much thought they gave it, I guarantee that everyone in this comment train has given it much less. It wasn't a trivial decision, their choice is not prima facie evidence of negligence.
Again, very charitable. Your assumptions are baseless, seemingly motivated by little more than your trust that incredible incompetence and negligence is rare in the real world.
Again, not at all charitable. Your assumptions are baseless, seemingly motivated by little more than your mistrust in the competence of people who have been actively working on this problem, many for years.
He said that he can "guarantee" Uber engineers gave it careful consideration (or at least more than people in this thread), but in truth he can do nothing of the sort. There are a plethora of examples where engineers and "engineers" didn't give a problem careful consideration and it got people killed. There is no rational basis for guaranteeing this is not one of them.
There is no "guarantee" that Uber "engineers"/engineers put more thought into this than "their system is inconvenient, tear it out" or "we don't need their system because ours will be better, tear it out." Nobody can guarantee Uber engineers were not stupendously negligent until the investigation is complete. Anybody who thinks they can guarantee that has an irrational basis for thinking they can provide such a guarantee.
You might be troubled to know that the guidance control systems on modern spacecraft and many aeronautics subsystems work this way -- at least three redundant systems have access to sensor data and each make independent decisions about the course of action, if any, needed. In short, they vote, and majority wins.
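At its simplest, that is just majority voting across independent channels; a minimal generic sketch (not drawn from any specific spacecraft or flight system):

```python
# Minimal 2-out-of-3 majority vote over independent channels, the basic
# pattern behind triple-redundant guidance/control systems.
from collections import Counter

def vote(channel_outputs):
    """Return the majority decision, or fall back to a safe default if none."""
    decision, count = Counter(channel_outputs).most_common(1)[0]
    return decision if count >= 2 else "SAFE_STATE"

print(vote(["BRAKE", "BRAKE", "CONTINUE"]))   # -> BRAKE
print(vote(["BRAKE", "CONTINUE", "STEER"]))   # -> SAFE_STATE (no majority)
```

The key design choice is that disagreement degrades to a safe state rather than to "keep going", which is roughly the opposite of disabling the redundant channel outright.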
Unclear why this is being downvoted. Modern cars have all sorts of computer driving systems, e.g. ABS. If you learned to drive without it, you still think to pump even though the computer does a better job if you just slam on the brakes.
The point is that so far most (all?) of these computer systems don't have a failure mode where it kills pedestrians, because their scope is very limited.
So, I can't read the article, but I did read the NTSB report directly. Basically, it sounds like Uber was not ready, but reassured themselves with "but there's a human driver who can intervene". The fact is, humans are very bad at remaining vigilant for long periods of doing nothing and then needing to intervene at a moment's notice. Computers are good at that (and Volvo's built-in safety systems might have worked if Uber had not disabled them), but humans are bad at it.
Volvo has it right: human driver, computer backup. Uber's idea of a human acting as a last-second backup to a computer gets the relative strengths and weaknesses of each exactly wrong.
Worse, they knew they needed two people and previously had two individuals, but then felt that was unnecessary and went to only one person per vehicle.
I'm irritated at myself for swallowing Uber's initial line on this (and the interpretation of the officers who reviewed the cam footage provided by Uber) without sufficient critique.
This accident should have been avoided. No excuses.
The sensors observed the pedestrian 6 seconds before impact! That's more than enough time to come to a complete stop.
That's enough time to play a bell that alerts the driver and for the driver to manually react, press the brakes, and come to a complete stop.
And this was a pretty easily preventable scenario (which just makes it more tragic of course). Software is nowhere near ready to drive cars on real roads.
UBER software is nowhere near ready to drive cars on real roads. There are multiple competitors in this space (Waymo and Argo immediately come to mind); they don't generally have Uber's reputation of "move fast and break things" or of "cut human costs as soon as feasible."
In my initial assessment, I was reasoning in the other direction---from practices I was familiar with from stories of those companies to Uber---and falsely assumed Uber was behaving more responsibly than they were. This Uber tragedy doesn't significantly update my prior assumptions about its competitors.
I agree this incident doesn't give evidence about Uber's competitors, but I just don't believe software is anywhere near ready to safely navigate neighborhood driving. Many of the challenges involve assessing the knowledge, goals, and capabilities of other people and objects in the environment, which is far beyond anything AI can do except in specialized scenarios with lots of accurate training data. Many of the scenarios will be unique and not encountered in prior training data. So I'm very skeptical.
The car should have slowed at that six second mark, and at least reduced speed dramatically by the time of impact (if not completely).
The risk of fatality would have been severely reduced if the car was (at most) travelling at 30mph (likely around ~10% - instead of between 25% and 60% for the speed at the time of impact depending on what study you choose).
The initial coverage was terrible and missed the point completely. So much "it was dark" or "the ped was at fault". I was disappointed, though not surprised.
Does anyone else find these two quotes a bit hard to stomach?
> Although toxicological specimens were not collected from the vehicle operator, responding officers from the Tempe Police Department stated that the vehicle operator showed no signs of impairment at the time of the crash.
> Toxicology test results for the pedestrian were positive for methamphetamine and marijuana.
So they tested the victim for drugs but not the Uber employee in the car??
Other than that, Uber's so-called "self-driving" system sounds like crap and should never have been allowed to be used in that state.
The test is probably a routine part of an autopsy, whereas it likely wasn’t a routine part of the police response (which use visual and verbal cues to preliminarily assess inebriation).
I could be totally wrong, but I thought I read that the emergency braking function was Volvo's, built into the base vehicle, and that Uber had disabled it because they were testing their own software.
The NTSB report mentions the standard automatic emergency braking features from Volvo:
> The vehicle was factory equipped with several advanced driver assistance functions by Volvo Cars, the original manufacturer. The systems included a collision avoidance function with automatic emergency braking, known as City Safety, as well as functions for detecting driver alertness and road sign information. All these Volvo functions are disabled when the test vehicle is operated in computer control but are operational when the vehicle is operated in manual control.
However, that appears to be separate from emergency braking under Uber's self-driving system:
> At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).[2] According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
> [2]: In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s^2).
It sounds like Uber didn't trust their own self-driving system enough to allow it to initiate sudden crash stops. Too many false positives, I guess? Of course, simply disabling the function leads to other obvious problems, as shown.
That exactly what I was wondering. So Volvo's system was disabled (and they probably don't have any data from that system anyway) and so these quotes are specifically about Uber's system, which they seem to have full logs for?
That's a really weird interpretation. Uber's system could obviously brake but not 'emergency brake'? The system disabled a part of itself? How is that any different than it just never deciding to 'emergency brake'?
I think the better interpretation is that the Uber system disabled another separate (non-Uber) system.
> Uber's system could obviously brake but not 'emergency brake'?
There are two systems in the vehicle. One is the manufacturer's, let's call it System V after Volvo, and the other is System U, for Uber.
System V provides collision detection and emergency braking. It played no part in this incident, since it's inactive if the car is under control of System U, which it was at the time.
System U can decide that the car should slow down in some situations. Let's call gradual slowdown Action U1, and emergency slowdown Action U2. The incident called for Action U2, by Uber's criteria. What Uber said is that a) they disabled automatic execution of Action U2, punting it to the driver (really, a bored passenger in driver's seat), and b) that the driver would get no indication of emergency situations from the system.
The idea is, presumably, that driver should watch the road and react in emergencies. But we also know that the driver had the duty of working with the onboard console, which must have been quite a distraction. Effectively, Uber has set themselves up for failure, and it happened.
Volvo has automatic driver-assistance emergency braking. That's turned off while Uber's self driving system is on, because obviously the two systems are not built to work together.
Uber also disabled emergency braking from their own system. That was because it would "drive erratically" when it was turned on.
Testing is not a valid reason for disabling it, because if it fires, then either Uber's system has failed to respond in time, or it is a false positive, and there cannot be many of those, or else it would be a problem with the relatively large numbers of otherwise ordinary Volvos equipped with emergency braking and being driven by humans.
That's not how I interpreted it. The report calls Volvo's system "advanced driver assistance functions" and "automatic emergency braking." When the report refers to "self driving systems" or "emergency braking maneuvers", they are talking about Uber's system.
So assuming the driver knows all of this (and that's a big assumption), then you have the blame shared two ways, and it's hard to tell who deserves more.
(1) You'd have the driver being at fault, since they were responsible for controlling the vehicle at the time, even though the computer was doing the majority of the driving. In this case, the driver should not have been using their phone and ignoring the road and their duties to control the car.
(2) Uber should share some blame for not building alerts to the driver into the system.
But how much of these responsibilities Uber made clear to the driver is very much worth knowing, because however you slice it this was not so much a failure of technology as human negligence.
We've known for a while now that the driver wasn't using their phone. Instead, they were interacting with the car console where information about the self driving car system was being displayed. During earlier testing there were two test-drivers, one to watch the road and one to watch the console. But to speed up testing Uber had moved to assigning one test driver to both tasks.
If Volvo's emergency braking produced that many false positives, it would be a problem when humans are driving its cars, and I am pretty sure that is not the case, or else the NHTSA and its equivalents in other countries would be investigating ordinary Volvos with this feature.
If the claim is that Volvo's system is intervening in valid cases where Uber's system would (arguably) have handled it, then Uber's system is driving too aggressively or is too slow in responding. Humans, when paying attention, can drive Volvo's cars without often triggering the emergency braking.
I don’t think the “erratic behavior” quote is related to the stock Volvo emergency braking system. The report notes that that also existed and was disabled (justifiably) while Uber’s self-driving system was active. But apparently Uber’s own system had an emergency braking function that was also disabled:
> At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision[..]
[emphasis mine]
> In Uber’s self-driving system, an emergency brake maneuver refers to a deceleration greater than 6.5 meters per second squared (m/s2).
The NTSB always includes whether drugs or alcohol potentially played any role in an accident and puts it in their reports for all parties. This isn't a smear tactic, just standard practice because they are trying to determine all contributing factors.
When you're dead you don't have much expectation of privacy in the eyes of the law, they can run a toxicology report for no reason (or more likely as part of an autopsy). For a living person it requires reasonable suspicion to initiate. That said, IANAL
Because more context makes for a more comprehensive story.
Fact of the matter is, the person was not allowed to cross there. Not only was it not allowed [0], it was also extremely dangerous. Why would she do this?
That's one part of the conversation.
The other part of the story is that the self-driving car was mismanaged and misprogrammed, and that this was likely also a reason why the woman died.
There is therefore reason to believe both parties could have prevented this death. You're not sure. Perhaps drugs weren't a factor. And perhaps a driving person could not have braked quickly enough either to save her. You don't really know. But when you have information which lets you speculate plausibly to explain unknowns, I think it's relevant to include that information as a journalist. Meth use falls within that range of relevant information in this case, if you ask me.
It feels a bit analogous to saying a woman got shot and killed at a gun range. She ran into the shooting range and got shot. Why would she do this? The person shooting wasn't paying attention and was just firing casually down the range. Knowing the woman was on meth helps explain a lot. As a journalist I'd think that was relevant, and as a reader it offers a possible explanation for this kind of behaviour. It doesn't negate the fact the shooter didn't follow procedure and could have prevented this death, too.
I thought it was fitting at the very end of the article like it was in the original analysis that they refer to. Not so much in the subtitle, I feel that was in poor taste and driven by clickbait. It shouldn't be the focus of the article.
I have seen it. Check streetview too, it's helpful.
I really hate to defend this position because I feel terrible for her and it's quite obvious uber made some very bad calls here. (in fact I even used the very loaded word murder in another comment, see my post history). I'm definitely not arguing to put all or even most of the blame on the woman.
The intersection is indeed absolutely terrible. It makes no sense. You're not allowed to cross there, there are even signs which state it on both sides, referring to a crosswalk a minute walking up ahead. i.e., there is just no way you're allowed to cross there, despite the island in the middle having the cosmetic design of a walkway, you're not supposed to get on the island or get off it or cross the roads at that location. It's a terrible design choice, both inviting people to cross (in a dangerous spot), while also saying it's illegal. With better infrastructure planning you could have railing there preventing crossing, and no island at all.
That having been said, I can't for the life of me imagine crossing there and not seeing an oncoming car with its headlights on, assuming I was paying attention to the road I was illegally crossing. The fact meth is involved is a helpful possible explanation for why this kind of attention was not given.
You can check out the road on google street view, it definitely helps. I think you'd agree on two things, one is that it's not allowed to cross there and two that if you were to cross there, it'd be pretty easy to see cars. The reverse isn't necessarily true if you wear dark clothing at night.
AFAIU, the pedestrian who was killed does indeed bear part of the blame, and we probably would not feel as guilty saying that if it had been an ordinary car that killed her. But for the topic of this general discussion, that is irrelevant; the emphasis is (and should be) on the incompetence and negligence of Uber, which led to a person's death (even if her own negligence was at play too) and which demonstrated that (i) Uber, as an unscrupulous and unethical company, is not fit for this sort of business and that (ii) we should still be deeply sceptical of the capabilities of automated vehicles in responding to arbitrary reconfigurations and movements in their paths.
The “no pedestrian zone” is part of a complex set of causes and effects.
* Since it's a no-pedestrian zone, it has a higher posted speed limit.
* A higher speed limit causes a linear reduction in detection time: a sensor that detects a pedestrian 2 seconds away at 20 mph detects that same pedestrian only 1 second away at 40 mph.
* Braking distance and impact energy are both quadratic in speed, and that's part of how/why speed limits are defined (rough numbers in the sketch below).
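A rough back-of-the-envelope sketch of those two scaling laws, assuming a fixed sensor detection range and a constant deceleration (the 6.5 m/s^2 figure comes from the report's footnote; everything else is illustrative):

    MPH_TO_MPS = 0.44704

    def seconds_to_reach(range_m, speed_mph):
        # Time between detecting an object at range_m and reaching it, with no braking.
        return range_m / (speed_mph * MPH_TO_MPS)

    def braking_distance_m(speed_mph, decel=6.5):
        # Stopping distance at constant deceleration: v^2 / (2a).
        v = speed_mph * MPH_TO_MPS
        return v * v / (2 * decel)

    for mph in (20, 40):
        print(mph, "mph:",
              round(seconds_to_reach(40.0, mph), 1), "s to cover 40 m,",
              round(braking_distance_m(mph), 1), "m to stop")
    # 20 mph: ~4.5 s and ~6.2 m;  40 mph: ~2.2 s and ~24.6 m.
    # Doubling the speed halves the time available and quadruples the stopping distance.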
There are a lot of engineering issues that will arise from the need to cope with people violating limits and restrictions around autonomous vehicles.
I'm not defending Uber here, btw; they definitely jumped the gun deploying the system as described, I'm just thinking out loud. There are decades of crashes baked into driving codes and road regulations, which humans kind of understand by relating to road design and picking up cues from the environment, and which AIs will need to learn.
It is the NTSB's goal to include all contributing factors in their investigations. Remember that the NTSB does not assign liability or make regulations, it simply serves in an advisory role. Their reports are as thorough as possible in identifying contributing factors so that safety recommendations can be made that address as many factors as possible.
I wonder if one day we'll see a story like: The coder was drunk or high when he created the software that killed someone... I'm pretty sure we will, the way surveillance is increasing.
Why does walking/biking differ from driving if the two collided in the road? Not saying Uber couldn't have prevented this, just explaining why the media mentions the detected drug use...
The detection at 6 seconds was just of an object though, not an object moving into the car's path. You couldn't drive a car if you had to constantly break because objects (such as people standing by the road) were being detected.
It's not clear at what point the car ascertained a collision would occur, between the detection 6 seconds before impact and the determination at 1.3 seconds before impact that emergency braking was necessary.
Was there any other determination in between, and when? What I'd like to see is Uber's modelling of the woman's trajectory and the likeliness of collision across the 6 second window. That's completely left unsaid.
The average braking distance of a car is about 24 m at 40 mph, which is approximately the distance between the woman and the car at 1.3 seconds out. So perhaps the 1.3 s figure wasn't the first moment the car determined a brake was necessary, but rather the last moment the car could have braked to prevent a substantial collision. I want to know the first moment the car determined a brake was necessary at all. It's likely not 6 seconds, but it's also likely not 1.3 seconds. It seems this was entirely preventable, or at least the collision impact could have been mitigated significantly, had there been a braking and/or warning system in place.
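As a rough sanity check of those numbers (and only that: it assumes instant brake application and a constant 6.5 m/s^2 deceleration, ignoring reaction time), here is the arithmetic at the reported 43 mph:

    MPH_TO_MPS = 0.44704
    v0 = 43 * MPH_TO_MPS                      # ~19.2 m/s

    full_stop_distance = v0**2 / (2 * 6.5)    # ~28.4 m to stop completely
    distance_in_1p3s = v0 * 1.3               # ~25.0 m covered in 1.3 s with no braking
    speed_if_braked = v0 - 6.5 * 1.3          # ~10.8 m/s (~24 mph) after 1.3 s of braking

    print(round(full_stop_distance, 1), round(distance_in_1p3s, 1),
          round(speed_if_braked / MPH_TO_MPS, 1))

So a full stop was probably not possible from 1.3 seconds out, but braking over that window would still have shaved roughly 19 mph off the impact speed, which is the "mitigated significantly" scenario.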
Shutting off emergency braking for literally the only driving agent tasked with full attention is inexcusable. But that's what they did. To me that's murder. They used to have two operators: one tagging circumstantial data, the other ready to override the car when necessary and keeping eyes on the road at all times. Either keep that arrangement, shut off the car's emergency braking, and put a warning system in place for the 'driver', or do not shut off emergency braking. Instead they put a single person in the car, tasked with things that kept her eyes off the road half of the time, and shut off braking for the AI. That's insane.
> You couldn't drive a car if you had to constantly break [sic] because objects (such as people standing by the road) were being detected.
Yes, you could -- that's how I drive. Do you not? If I detect a mobile object that might be moving into my path, I slow down to give myself time to react until I am reasonably certain of safety. When doing so I take into account my situation-specific knowledge -- have I made eye contact with the pedestrian and do they know I'm coming? Is the dog on a leash and is the owner being attentive? Does the cyclist seem aware of my presence?
I would expect no less from anyone licensed to drive a car, be they human or software.
I didn't say you can't drive a car without being cautious.
I said you can't drive it without constantly braking the moment you detect an object, irrespective of what the object is doing. (e.g. moving into or away from the driving path).
i.e., just because an object was detected 6 seconds before impact did not mean the car ought to have started braking at that moment. It could be that the object was 200 feet away and moving away from the car's driving path, 6 seconds before impact. It'd be absolutely ridiculous to brake in that situation.
We have no information about this context, e.g. the car's data or determinations within the 6 second window. We only know it detected an object 6 seconds before impact.
It appears the person I was replying to implied 'the braking distance was 180 feet, but the person was 380 feet away, thus Uber could have prevented killing this woman had it not shut off the brakes'. In reality, the 6-second figure isn't relevant by itself. What is relevant is the context that would have allowed a reasonable driver or AI to determine, at a particular point in time, that the car should have slowed or braked. And we don't have that information yet. That's what I'm interested in.
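To make that concrete, the question isn't "was something detected?" but "was its predicted path going to cross ours within the stopping horizon?". Here is a grossly simplified, hypothetical sketch of that kind of check (straight-line, constant-velocity prediction only; real trackers are far more involved):

    def lane_entry_time(lateral_offset_m, lateral_speed_mps, lane_half_width_m=1.8):
        # Seconds until the tracked object crosses into our lane, or None if it won't.
        if abs(lateral_offset_m) <= lane_half_width_m:
            return 0.0                                   # already in our path
        closing = -lateral_speed_mps if lateral_offset_m > 0 else lateral_speed_mps
        if closing <= 0:
            return None                                  # moving away or parallel
        return (abs(lateral_offset_m) - lane_half_width_m) / closing

    def should_slow(lateral_offset_m, lateral_speed_mps, seconds_to_reach_object):
        entry = lane_entry_time(lateral_offset_m, lateral_speed_mps)
        return entry is not None and entry < seconds_to_reach_object

    print(should_slow(6.0, -1.4, 4.0))   # walking toward our lane -> True, start slowing
    print(should_slow(6.0, +1.4, 4.0))   # walking away from it -> False, carry on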
I don't think I'm misinterpreting, just disagreeing about the level of caution. One point is that humans are quite good at immediately recognizing objects and evaluating threat level (at least when attentive). So a human is rarely in a scenario of "there's something up ahead I have no idea what it is or where it's going." But if they were, I don't think it's at all ridiculous to slow down until determining those things. If software is in that scenario, I absolutely expect it to slow down until it can determine with high confidence that no object ahead is a likely threat. (edit) For instance, an attitude like "in my training data, unidentified objects rarely wander into the road" is not good enough for me, I want to hold software (and humans) to a much safer standard.
Humans are frequently in this scenario, especially at night. For example, a reflection from a rural roadside mailbox's prism looks similar to the eyes of a deer, and shredded truck tire treads look similar to chunks of automotive bodywork debris. This doesn't invalidate your point about slowing down.
We're asking a lot from this software (for good reasons), but humans commit similar leaps of faith of varying severity on the roads daily -- failure to yield, failure to maintain following distance, assuming the drivers immediately adjacent to you will keep driving safely and carefully -- and only a small subset of these situations results in accidents. We're expecting an algorithm coded by humans to perform better than a complicated bioelectric system we barely understand.
Waymo has opted to commit to thoroughly understand its environment, which is why their cars drive in a manner that bears no resemblance to how humans actually drive. We as a society have to eventually reconcile the implications of the disconnect.
> reflection from a rural roadside mailbox's prism looks similar to the eyes of a deer
Deer hits are a major cause of fatalities out in the country. If you're driving at night in deer country and you aren't eyes wide open, then you are going to have an unhappy experience at some point. Their instincts are essentially the exact opposite of what they should do when encountering a car: they will stay in the middle of the road, and they will jump in front of you if startled.
Nice little FUD by WSJ in their subheadline: "Pedestrian tested positive for methamphetamine and marijuana" -- not referred to again in the article, moreover I have trouble seeing the relevance to the accident.
It is mentioned again at the end of the article. It's relevant to the accident because it provides an explanation for the pedestrian crossing the road outside a crosswalk without adequately checking for traffic.
Not at all, sober people jaywalk too. They just tend to check their surroundings a bit better than their high counterparts for any cars coming to potentially hit them.
The car should have stopped automatically, but that system was disabled. The pedestrian shouldn't have been there, but made a bad decision. The pedestrian should have been more aware of her surroundings, but was impaired or not paying attention. The safety driver should have been paying more attention to the road, but was also monitoring system displays.
This crash didn't have a single cause. Any one of those factors being handled correctly would have prevented it.
It is not the fault of the pedestrian that she was slaughtered, especially given the immense amount of time between when the car first saw her and when the accident occurred.
Does this mean the technology is so early that they are struggling to program it to do the right thing in normal conditions, let alone to prevent an accident?
It sure suggests the possibility of, "that thing keeps braking when we don't want it to, turn it off". When, you know, human drivers manage to do quite fine with it on. If you have to disable the built-in safety features of the Volvo to get your driving software to work, then you're not ready to do a road test.
I don't think that is right. My understanding is that they didn't HAVE to turn the built-in safety features of the Volvo off to make it work, but instead they had to in order to test their equivalent safety feature. If they have a feature that exactly mimics Volvo's, it can't be tested while Volvo's is active (or at least that is the idea, I think it probably could be tested in some way.)
But then they turned their own safety feature off, because it failed to be as good as Volvo's. And then did not turn Volvo's feature back on.
Your system either works with higher error margins than the Volvo software (which should be the case, since yours is the one being tested), or you log your software's decisions and compare them with Volvo's after the fact.
One wild guess would be their equipment interfering. Still seems legendarily stupid to just disable the possible free backup to your totally inadequate system, however.
Does this mean the problem is more complicated and nuanced than we had originally assumed so that they are constantly going to struggle to program it to even barely match the performance of a human being?
I'm still disappointed that the system has to use underlying maps to know where lanes are and what the speed limits are in them. What happens when the map is less than 100% perfect?
Isn't the person actually driving the car supposed to be driving the car?
I've ridden in these self driving Ubers. When I rode in one, the driver drove almost the entire time, except on a few straight stretches of road. They always had their hands ready to grab the wheel, were always attending to what was happening etc.
It seems like the marketing and the engineering got crossed here. Marketing says these were self driving, but anybody who rode in them knew they weren't. They were supposed to be getting driven by real drivers. From the report, it sounds like the drivers were listening to the marketing instead of the engineering team (who presumably would have told them that the system doesn't brake on its own).
It sounds like the real driver wasn't driving as they were supposed to be. From the video, it looked like they were reading something on their phone[1] instead of driving the car.
Compare this to a pilot flying in an autopilot. They don't shut their radios off and stop paying attention to the flight, they still fly the airplane and remain attentive to what is happening with it. That's what this driver should have been doing, not looking at their phone.
It frustrates me that this level of negligence could set self driving tech, something that will save countless lives, back. This was the Chernobyl moment for self driving tech. It's safer than alternatives, but now this is all people are associating it with.
[1]:the driver states that they were interacting with the Uber self driving system, not their phone.
> That's what this driver should have been doing, not looking at their phone.
According to the NTSB report, they were looking at a separate diagnostic panel and flagging messages, which Uber asked them to do as part of their self-driving training duties.
> It's safer than alternatives
The system also decided it should have applied the emergency brakes, but then didn't. I don't think this system is safer than alternatives.
> It frustrates me that this level of negligence could set self driving tech, something that will save countless lives, back. This was the Chernobyl moment for self driving tech. It's safer than alternatives, but now this is all people are associating it with.
If it cannot be done safely at scale, it had better be set back, especially on public roads.
I am deeply skeptical of how safe and realistic self-driving vehicles are. Even with so few cars dubbed autonomous on the roads, we have already seen a great number of fatal accidents and near misses.
Putting the internet and AI into everything reminds me of a recent Vsauce video I saw showing radium chocolate bars and underwear: a fascinating new thing that will in the future probably prove more harmful than useful, because of how much we overestimate its practical utility.
The driver has to look down at a console to see warnings like this AND drive the car? This, combined with the emergency system being off, tells me that the accident was 100% the fault of Uber, even if the "driver" had been dancing in the backseat.
Yet another reason in favour of professional engineers being required to design and implement safety-critical features using software: because "move fast, and break things" becomes unacceptable when those "things" are the lives of living, breathing people.
Yikes, it had not occurred to me that even Uber would ignore their automation system's pleas to emergency brake because it was making their cars look too jumpy.
The one thing I spent years teaching my wife is that when there’s anything untoward on the highway, JUST STOP. I’ve avoided at least two major accidents by stopping instead of swerving — in a few feet you can get your speed down to levels where a crash won’t be fatal or severely injurious. And even if you get rear ended by stopping short, it’s usually at a much lower speed. All autonomous cars should just STOP when something is not right.
I agree, but in this context "struck" is what happened. "Killed" was the consequence in this case but not necessarily a guaranteed one (although at the speeds involved in this particular case, it was almost certainly guaranteed).
Yes, it was pedantic. I didn't want it to be but I won't pretend it wasn't.
The point I was trying to make is that I believe the noteworthy event is that a self-driving car struck a person, without even attempting to avoid it. That she was killed is a tragic twist to the story. I believe that even if she wasn't killed it would still be a noteworthy event because, again, the car did not attempt to avoid it.
I do not at all intend to diminish the fact that Uber killed someone.
The event of interest is the vehicle striking/killing someone. Pulling the trigger is not that in the context of a shooting, and "pulling the trigger" is not really a euphemism for shooting and killing someone with a firearm.
"person shoots at people in school" still doesn't convey them killing anyone when the effect is a newsworthy subject (though maybe not that much in the US).
I was running into the word count limit. The actual article uses "Struck, Killed" but that wouldn't fit. I took out killed because struck captured what happened in the incident better.
But how much work the software does is not what makes it remarkable. What makes it remarkable is how well the software works. This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program, each 420,000 lines long, had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
Also from the article: “If the software isn’t perfect, some of the people we go to meetings with might die."
Isn't the problem here much like the presumed cause for Air France 447 (https://en.wikipedia.org/wiki/Air_France_Flight_447)?
The pilots are so used to the automatic operation that happens 99.9% of the time that they are not prepared for the random time when it doesn't.
A bit like someone guarding a bank for 20 years who then gets robbed and can't react in time, because their mind has taught them that nothing ever happens.
I'd like to throw out that one of the pilots of that flight was pulling up during a stall warning.
I don't think it's really similar to this accident, since it'd kind of be like the driver realizing they have to brake, but gassing it instead.
Also interesting is that in a Boeing, this likely would have never happened. In an Airbus control inputs are averaged together, so the captain had no idea his copilot was pulling back during a stall warning. In a Boeing, the control gear provides feedback (physically moves) to the pilots, so they would have been able to realize they were giving opposite inputs.
I don't think they're very comparable. AF447 was in no trouble whatsoever when the autopilot cut out. From that point a cascading series of misunderstandings led the pilots to crash the fully flyable plane. Going by memory, the sequence goes something like this:
* Ice clogs the aircraft's pitot tube, so the computer loses airspeed sensor data. This causes the autopilot to turn off and the flight computer to change to "alternate law". This is very important because normally the computer does not allow a pilot to stall the aircraft. Under alternate law the pilot has more direct/traditional control, and can stall.
* The pilots didn't clearly understand why the autopilot went out, or that they were in alternate law. The copilot (who was flying the aircraft) began pulling back on the stick, without communicating his action.
* The pilots started to get stall warnings, but didn't understand how they could be stalling, because they didn't know they were in alternate law.
* The captain tried to take control to recover. Despite verbally acknowledging the control hand-off (IIRC) the co-pilot continued pulling back on his stick.
* The captain didn't realize that his control inputs were being overridden by the co-pilot and became increasingly confused about why the aircraft wasn't responding to his inputs.
* By the time the captain does realize what is happening, it's far too late to recover.
I was thinking that the assumption in the design is that the operator can always handle the hard/fringe cases but the reality is that due to the AF447 principle (whatever it's called), this is precisely what the operator cannot do reliably.
Yeah, I agree, but it seems that they also had no alert and were asking the operator to simultaneously record data, leading to them not even looking at the road at the time the crash happened.
That's so irresponsible. So the system cannot apply emergency braking, but why didn't it try at least to decrease the speed?
Those cars are not ready for public roads. If your sensors give too many false alarms then you should improve them.
Also I think initially there should be a speed limit for self-driving cars. For example, 15-20 mph should be enough for driving in the city and won't cause too much harm in the case of an accident.
It's worth remembering that the video of the accident that Uber made available showed a pitch-black road with very little lighting, something that is not corroborated by the NTSB report. Given that the only comment about visibility in the preliminary report is that "(r)oadway lighting was present", it sounds very likely that Uber deliberately tried to create a misleading impression.
>> The forward-facing videos show the pedestrian coming into view and proceeding into the path of the vehicle
This is a little less clear-cut, but it also seems to cast doubt on the initial statement by the Tempe police chief that Uber was not at fault because the pedestrian dashed onto the road suddenly and the crash was basically "unavoidable". [1]
This one is a complete no-brainer: At the first hint of an anomaly, the AI should have 1) alerted the driver and 2) started to gradually slow down.
If that strategy produces too many false positives, it's time to go back to the drawing board. The right answer in that case is absolutely NOT to say, "Ah, just fuck it" and deploy the system in the field.
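A minimal sketch of that conservative fallback, under assumed (hypothetical) thresholds: on any low-confidence detection near the planned path, warn the operator and request a gentle, bounded slowdown until the classifier settles.

    COMFORT_DECEL = 2.0   # m/s^2 -- well below the 6.5 m/s^2 "emergency" threshold

    def fallback_policy(confidence, object_near_path, sound_alert, request_decel):
        """At the first hint of an anomaly: 1) alert the driver, 2) slow down gradually."""
        if object_near_path and confidence < 0.8:
            sound_alert("possible obstacle ahead")
            request_decel(COMFORT_DECEL)

    # Example wiring with stub callbacks:
    fallback_policy(0.4, True, print,
                    lambda d: print("decelerating at", d, "m/s^2"))

Whether 0.8 and 2.0 m/s^2 are the right numbers is exactly the tuning problem; the point is that the conservative branch exists and is enabled.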
1. Ones where a human pilots them most of the time, but a computer steps in in emergencies.
2. Ones where a computer pilots them most of the time, but a human steps in in emergencies (hopefully).
For some reason I don't understand, people treat (2) as an evolution of (1). But it is the inverse.
Presumably the point of #2 is that it's easier than the implied #3 ("The computer always drives") which is their as-of-yet unrealized goal. Of course deploying #2 into the wild may prove to be ethically unjustifiable. Being easier than the implied #3 is a poor excuse.
Funny, all this contention around technology that simply doesn't exist yet. When automakers market SAE Autonomy L2 (https://en.wikipedia.org/wiki/Autonomous_car#Classification) certified vehicles as "self-driving" in official marketing media, and even more so when their sales reps blatantly lie to customers (both of which should be illegal), they're responsible for lost lives and undermining their companies' image, ethics and financials.
There's only one car certified for SAE L3 / "eyes off" and it's the 2018 Audi A8, only up to 60kmph - this is still not self-driving.
The primary concern here is false advertising as pervasive as it is blatant.
BMW has a collision detection system that makes a loud beep when it determines a collision is imminent. The beep alerts the driver regardless of false positives or false negatives. It's a simple solution which many cars have had for over 10 years.
Why Uber engineers chose to not alert the driver is beyond me.
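Here is a sketch of the kind of simple forward-collision warning being described: beep once the estimated time-to-collision drops below a fixed threshold. The 2.5 s number is a common ballpark, not BMW's actual parameter.

    TTC_WARNING_S = 2.5   # seconds

    def time_to_collision(gap_m, closing_speed_mps):
        # Seconds until impact at the current closing speed, or None if the gap is opening.
        if closing_speed_mps <= 0:
            return None
        return gap_m / closing_speed_mps

    def should_beep(gap_m, closing_speed_mps):
        ttc = time_to_collision(gap_m, closing_speed_mps)
        return ttc is not None and ttc < TTC_WARNING_S

    print(should_beep(25.0, 19.2))   # ~1.3 s out at ~43 mph -> True, beep
    print(should_beep(80.0, 19.2))   # ~4.2 s out -> False

The asymmetry is the point: a false positive costs an annoying beep, while a missed alert costs much more.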
> According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
Wow. So the vehicle and these tests were run knowing full well that an accident like this would not be preventable. This isn't manslaughter, this is murder. Uber was letting its car drive without any safety systems, without even an alert driver behind the wheel, because her job was to monitor the panel. Fuck me. Wow. They should never be allowed to operate another autonomous vehicle again. The CEO should go to fucking jail. Fucking murderers.
If any non-rich, non-corporate individual had done the following: sent out a vehicle on city streets, driving fast under computer control, which did not have any capability to brake and which did indeed strike and kill someone, I think that non-rich person would certainly be sent to jail for their reckless behavior and also receive very large fines and civil judgments. It wouldn't even be close. The judge would shout at them during sentencing and the papers would cheer.
So, let's see what will happen to the Uber personnel involved.
By the way, it's insane to be an Uber test driver under these circumstances. They're going to hang you out to dry. Quit.
Interestingly, the picture shows that the car seems to have tried to avoid the collision by moving to the right.
Of course, every human driver with experience knows that you pass behind a pedestrian, not in front of them. The artificial intelligence obviously was not smart enough.
In other words, as suspected, self-driving cars are a pipe dream. One can't just mix a bunch of statistical voodoo into a neural net and hope it will work. It may work most of the time, but the mistakes will be catastrophic and incredibly stupid.
Now I understand why Uber started their advertising campaign. They know this will come out and show the world how irresponsible they are, and they wanted to get in front of that.
This entire debate reminds me of this classic Milton Friedman video (I think the key difference with Uber, and where they are likely to get in trouble, is that they never made any estimate about how many lives their car experiment would cost that I am aware of): https://www.youtube.com/watch?v=EYW5I96h-9w
a. the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path
All of those things are bad things to hit, why not slow down?
b. 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed
Too late for either a human or computer.
c. operator is responsible for monitoring diagnostic messages
There is superseding responsibility to drive the car safely. Uber's policy sets up the test driver for failure, and puts people's lives at risk.
d. emergency braking maneuvers are not enabled while the vehicle is under computer control
The pedestrian had no chance.
The very feature that should make the car safer was disabled; on top of that, the system's judgement was poor and came too late, and Uber policy sabotaged the driver's judgement as the exclusive (not merely primary) safety mechanism by distracting them with data that in almost every way would have been more diverting than talking on a cell phone or fiddling with the radio.
I hope the family got a lot of money in the settlement.
So, as many pointed out quite early, this isn't literally a case of Uber killing people because they cared more about racking up miles than about safety and, in effect, human lives. They disabled safety measures because those measures were getting in the way.
Hopefully they will get the book thrown at them and then a couple of chairs.
I hope this accident weighs on the people who made these decisions so that they will be more careful in the future. I would hope they would not see it simply as a bug in the system.
All Uber executives should be forced to recreate this scenario as the pedestrians crossing the street, while their test vehicles drive the loop, over and over again until there are no pedestrians killed by their self-driving cars!
This is what happens when software companies that have traditionally had a "fail fast, fail often" mentality ship something into the physical world.
Especially something as powerful as a ton of steel moving like a missile on our roads.
Absolutely stunningly stupid that there are teams that built this and felt incentivized to put this on our roads without any concerns or safety mechanism. Shameful.
Legally, it will be amazing in the future to hold these people - the engineers, product managers, the PR people and the CEO (ex and current) all accountable. We did for something far less serious with VW...
I have to blame management in this regard. There must certainly have been engineers who spoke up about this problem and who may be silenced by NDAs (or other means), even if they did resign in protest.
Stopping in time to not run over an unexpected pedestrian crossing the road would be item number one on any sensible person's agenda. Uber needs to be liquidated.
We shouldn't assume that engineers have spoken up about this problem. All the same incentives (stock, bonuses) and fears (pushback for whistleblowers, an anti-truth company culture, getting fired) that apply to management apply to engineers, too. Engineers are fallible people, too.
I think most people here can relate to being an engineer who is held to other people's unrealistic expectations. Problems happen when there is not a critical mass of individuals to push back on those requirements. It's possible that this project is Uber's last ditch effort at existing, in which case there would be enormous pressure on everyone to make self-driving work at any cost. It was a design decision to disable the car's stock automatic braking feature, for example. It's not that they're stupid, it's that they're putting their morals on the shelf in order to get this to work.
At some point you need to have a regulatory body in place that sets standard for safety, otherwise people are going to die.
> It's possible that this project is Uber's last ditch effort at existing, in which case there would be enormous pressure on everyone to make self-driving work at any cost.
If you're an engineer at Uber and you're facing this kind of pressure that's causing you to take safety shortcuts, walk away. Go work somewhere else.
Real engineers are responsible for their decisions they can't just defer personal responsibility to management.
If you build a bridge you know is unsafe because the company you work for will go under if it doesn't get built, it's still your fault when the bridge fails and kills people.
A "mistake" might have been "we fucked up the concept of a self-driving car completely, sorry about that, we will pull our fleet off completely before we kill anyone".
This is not a mistake. This is straight up manslaughter.
Mistake? Turning off emergency braking because it happened too often then continuing to drive on public streets isn't a mistake. That calls for a custodial sentence.
Also "we've got a safety driver and a copilot monitoring and classifying systems feedback, the co-pilot isn't involved in anything safety-critical so we can just remove them and give both tasks to the safety driver (thereby requiring them to spend half the time looking at and interacting with the monitors on the center console rather than watching the road)"
The vast majority of cars on the road, and a majority of new cars sold in the US, have no emergency braking assist. Who should we sentence every time somebody dies because of a lack of EBA?
The vast majority of cars are being driven by humans who we expect to carry out an emergency stop if necessary. Replacing the human with a computer driver which is intentionally incapable of any kind of emergency stop and trying to justify this by relying on a human supervisor who's required to take their eyes off the road for much of the trip to stop the car is basically murder.
(Besides, relying solely on an unassisted human driver - even an attentive one - is dangerous enough that the industry wants to make automatic emergency braking systems mandatory on all new cars as soon as it's practical.)
As far as I'm aware there aren't any states that are allowing Uber to test their self driving cars on their roads. And after reading this report I see no reason that they should. So to me it looks like Uber's self-driving efforts are essentially over.
Doesn’t seem it. Read any article about Uber. The LAST thing they seem to be concerned with is ride sharing.
While everyone else is trying to be the Uber of X... Uber doesn’t even want to be Uber. Their latest thing is taking funding from the US Military for some project iirc.
This “technology” company has no idea what it really is.
Their core business is losing money hand over fist, and it's not clear the economics will ever work out.
I think they were floundering -- until the new CEO took over. Their approach appears to now be building an Expedia for local travel which includes ridesharing, JUMP bikes and Getaround.
These side projects (self-driving, logistics, etc.) I believe are holdovers from the earlier era, before they knew what they were doing. They always struck me as more of a way of distracting everyone else until they did.
It makes sense from a business perspective; their current core competency is mainly "blazing through VC money at an astounding rate". Uber's only shot at approaching any kind of profitability is from self-driving technology advancing to a point where they can cut out their biggest costs, namely, people.
Yeah, I'm of two minds about the usefulness of sharing links to things that a significant proportion of people might not be able to access.
This is actually a good use-case for that pay-per-article site/app (but I can't remember its name off the top of my head). Unfortunately, I don't know how easily I could find the article in it. I emailed them about that and they told me they were working on integrating with publisher sites; maybe they should look into integrating with aggregator sites instead.
Can we change the link to either the direct link to the report or a non-paywall report like the one on Ars Technica? There's nothing special about the WSJ coverage of this, other than it was the one that got voted onto the front page.
I'm not sure the title is click-baiting; the reality sounds exactly as dire as the headline.
> The agency, which investigates deadly transit accidents, said Uber’s self-driving system determined the need to emergency-brake the car 1.3 seconds before the deadly impact. The NTSB report said that, according to Uber, automatic emergency braking isn’t enabled in order to “reduce the potential for erratic vehicle behavior” and that the system also isn’t designed to alert the operator in case of an emergency.
Yes, they were. You cannot be "kind of" driving. Google has made this point several times: either the car can drive itself or it can't. This halfway approach is dangerous.