Hacker News
Tesla’s Autopilot found partly to blame for 2018 crash on the 405 (latimes.com)
152 points by casefields on Sept 4, 2019 | 227 comments



Fire truck driver here: People crash into parked fire trucks and ambulances and cop cars all the damn time. That's why we park the Big Red Truck at an angle behind the accident -- so that when the 2-ton car crashes into the 30-ton truck, the car will bounce off and the people behind the truck (us) won't be injured.

I don't disagree that Tesla's software is partly to blame here, but the null experiment also has a lousy track record.


As another fire engine driver (now officer), this is why more and more departments are employing "blocker engines": old, decommissioned, stripped engines that provide this protection without risking a $500,000 engine or a $1.5MM tiller.

Also, any system that fails to assume that emergency traffic may be moving at a vastly different speed from other traffic, or that emergency lights should _almost always result in an autopilot disengagement_ (because behavior around them should be, but is not always, entirely predictable, particularly to a machine without sufficient context and/or experience), is to me inherently poorly designed.


> As another fire engine driver (now officer), this is why more and more departments are employing "blocker engines": old, decommissioned, stripped engines that provide this protection without risking a $500,000 engine or a $1.5MM tiller.

Very interesting idea! You wouldn't even need a dedicated bay for it because who cares if it gets faded in the sun or rained on? An old tender that still holds water would be perfect for this.


Even better would be a truck with an impact attenuator [1] in the back. Those have the potential of saving the life of whoever crashes into it.

[1]: https://en.wikipedia.org/wiki/Impact_attenuator


> this is why more and more departments are employing "blocker engines": old, decommissioned, stripped engines that provide this protection without risking a $500,000 engine or a $1.5MM tiller.

That's pretty clever. I don't know why I didn't think of that myself.


This one time I got stopped at a fruit fly control point on the border between South Australia and Victoria, near Bordertown.

The person staffing the control point indicated with hand signals for me to stop on the highway in the lane I was in.

So I slowed down to walking speed and pulled off the highway into the car park next to the office.

The agent then lectured me about not following instructions and that she could fine me for disobeying.

I then calmly explained to the agent the reason I didn't stop in the lane on the highway: drivers of cars and heavy vehicles are known to have a terrible habit of not seeing other vehicles stopped on the road.


Interesting, I am pretty sure in Germany they do just that: they drive with you to the next exit. They have a huge display that tells you to follow.

At least according to my anecdotal evidence.


In the EU you've usually got an entire, always-empty "emergency stop" lane on the right side.


The shoulder is still not to be used for a police stop. You should only stop there if there's no alternative. The police wanting to stop you is not an emergency; the nearest exit is close enough. The same goes for a lot of other situations.

Further, in the Netherlands an ambulance is not allowed to use the shoulder at high speed. Meaning, they'll prefer driving in the normal lanes (even if a bit crowded) over the shoulder.


These are getting removed in the UK to be opened as an extra lane, with overhead signs warning you to get out of it if there's a stranded vehicle.



This is currently being argued in court... There's a good chance the plans will be reversed.


In my country the police cannot stop you on the highway. They have a sign at the back of the car saying "Follow me" and stop at the nearest parking. The speed limit is 140km/h.

I guess it's a bit safer when the speed limit in Australia is 110km/h


It always blows my mind when a highway patrol officer scolds me for not stopping immediately on the side of the road and instead pulling off at the first exit/intersection to make it safer for both of us.

I'd estimate it's about 10% of the time. But on the contrary, I've also had them express appreciation for doing the same thing.


Jesus, how many times have you been pulled over that 10% of the time is more than once!?


There was a period of a few years where I drove a track-prepped race car on the street daily, with out of state plates for emissions reasons, and it resulted in a lot of police encounters.


Have there been any studies of this? I think it's well known that if you drive a sports car you get pulled over more often, but I drive a beat-up pickup in the south of the USA and almost never get pulled over, even though I'm frequently 5-10 MPH over the speed limit. It would be interesting to see a pickup vs. a normal mid-range sedan vs. sports cars in numbers. Maybe throw in out-of-state plates as an extra variable.


I drive a Subaru WRX STI, which is one of the most expensive vehicles to insure because it is one of the most ticketed and crashed vehicles. I have never been pulled over while driving it. I have been pulled over in my Wrangler many times while obeying all laws, but never got a ticket from it.

https://insurify.com/insights/car-models-with-the-most-speed...


Sports cars are sometimes easier to track and that makes them easier to successfully pull over. I asked a cop about this after he pulled me and my red Stealth over for having a missing corner on a sticker on my license plate.


I don't know exactly where you live, but around me 5-10 MPH over is still 5-10 MPH under what even the police go.


What I notice in Australia is that when police want to pull someone over, they will often follow behind the car with their lights turned off until there is a safe area to pull over - then they will turn on their lights. This might be a few minutes down the road.

Some people will see the lights flashing and pull over straight away without checking for a safe location, so delaying the lights takes that out of the equation.


Could you explain what a "fruit fly control point" is? I can't find anything about it on Google and it seems interesting to me, as I am not a native English speaker. Thanks.



It's not the staff's fault. They're not working there because they display great independent thought.


I was going 30-40 mph in the right lane of a 4-lane, otherwise empty highway when a gentleman flying at 60-70 mph hit us right from behind, spun a full 360 with us, and we hit the wall. Everyone survived without any medical incident (other than that I do panic and respectfully avoid those fliers nowadays). Relevant because:

- I wish my Sprinter (the size of a mini truck, with upfittings) had had an autopilot watching the front/rear to speed up or divert to avoid the accident.

- +1 to your parking angle. The other driver survived because he hit us at an angle and skidded a full circle, which minimized the impact. Of course, that helped save our lives as well.


This... sounds like it’s on you. If this is in the US, the “flier” is going the normal speed of the road, and you’re driving dangerously slowly.

It’s hard to tell that something directly in front of you is approaching. It’s just not something our brains are good at, compared to detecting lateral movement. Especially when it’s so unexpected. One of the first things they tell you in motorcycle safety class is that “constant bearing, decreasing range” situations are extremely dangerous.


He mentions being in a truck, but it's a Sprinter, which, unless it's dangerously overloaded, should be able to manage more than 30 mph up a freeway hill.

Australian driving law/principles state that "drivers should drive at the _maximum_ possible speed that is: within the speed limit, and safe for the conditions". Notably absent in this is "within your ability" as if you're not capable of safely driving on all roads, the balance of probabilities says you probably shouldn't be driving on _any_ roads.


GP said "highway", not "freeway". In some parts of the country/world, these refer to different types of roads.


But... regardless of how fast the speed limit is, you should never expect the road ahead to be clear! A Sprinter is a huge vehicle; if you don't see that ahead of you, you must be severely distracted...


Less so than you might think. We’re just plain slower to detect things that are growing larger than objects traversing our vision. Especially without context clues like brake lights or other slow traffic, approaching at 40mph doesn’t give you a lot of time to correct.

Ultimately yes, it’s the other driver’s fault because it’s always your job to not hit things in front of you. But OP made his vehicle into a hazard.


There is no such thing as dangerously slow - the fault lies with people who aren't leaving enough room to take evasive action given their speed.


You'll be fined in the Netherlands for going too slowly. They don't wait for dangerously slow; way slower than the rest of traffic is enough.

Example (Dutch, use Google Translate): https://www.rijnmond.nl/nieuws/124541/Flinke-boete-voor-te-l...

A truck got a fine of 1500 EUR. The example was dangerously slow (eventually 40km/h), not just slow.


So I will be fined if I follow the speed limit, which is way slower than any other traffic?


No, but you will if you drive significantly slower and obstruct traffic flow.


The fines are probably to discourage this behaviour and improve traffic flow slightly. But I'd be surprised if the law uses the term "dangerously slow", or anything similar.


Many state roads post minimum speeds as well as maximum speeds (usually it's 40 or 45 mph, where the maximum is 65mph or 70mph), because there is in fact such a thing as driving dangerously slowly.


The other driver was clearly 100% totally at fault, but I'm just curious: why were you going 30-40 on a four lane highway? Where I live that's at least 25 below the speed limit.

Here it's mostly tractors and other farm equipment driving at those speeds.


In some parts of the US, the word "highway" is used to refer to roads (sometimes quite wide) that have stop lights and with speed limits around 40 MPH. I grew up in northern California, and we call roads without stop lights "freeways" and roads with stop lights "highways". There are various other terms, of course (e.g. "expressways"), but these are the most common generic descriptors.

I wouldn't assume that the driver going slowly was much under the speed limit, if at all.


Which is understandable when you consider there were "highwaymen" in the 1600s when the highways had nothing faster than a horse.


I should have clarified: a 4-lane expressway, with signals about 0.5-1.0 miles apart. All lanes clear. Midnight.

It was NOT a freeway.


Makes sense, the usage is colloquially different in the Midwest.


Maybe going up a hill? They did mention they were driving a truck, it’s easy to have vastly different climbing speeds in that case.

Also, I would often go less than 20 down a pass during the winter because I didn’t have snow tires or chains, and was passed by plenty of locals doing 60+ because they were equipped for it.


Yeah, that seems dangerous. There is usually a hard 45mph minimum on the highways near me.


I'm never going to discourage people from driving slow, because our monkey brains have shit reaction times and perception, and KE increases with the square of velocity, but if you're traveling well below the normal speed of travel, put your flashers on. In many places this is mandated for a certain amount under the limit.
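
To put rough numbers on that square law (a minimal sketch; the 1,500 kg mass and the helper name are my own illustrative assumptions):

    # Kinetic energy grows with the square of speed (illustrative 1,500 kg car).
    def kinetic_energy_kj(mass_kg: float, speed_mph: float) -> float:
        speed_ms = speed_mph * 0.44704  # mph -> m/s
        return 0.5 * mass_kg * speed_ms ** 2 / 1000.0

    for mph in (30, 45, 60):
        print(f"{mph} mph: ~{kinetic_energy_kj(1500, mph):.0f} kJ")
    # 30 mph: ~135 kJ, 45 mph: ~304 kJ, 60 mph: ~540 kJ -- double the speed, four times the energy.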


In my view the wreck isn't made worse so much by the absolute magnitude of one momentum vector as by the magnitude of the difference in momentum vectors.


You have to consider that after a wreck, both cars have to come to a stop one way or the other.

Imagine being rear ended while you are stopped at a stop sign. The difference in momentum is small and there is no significant damage.

Now imagine going 80mph down the highway and someone going 81mph hits the back of your car. Still a very small difference in momentum, but your car is very likely to lose control and roll or hit other cars. It's not always the initial impact that does all the damage.


To put it another way, every collision you are in is also a collision with the Earth, which masses a lot more than you and wins every exchange of momentum.

Conveniently, your delta-v with this massive body you don't want to run into is also the one conventionally reported for your speed.
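
A rough sketch of that point, with made-up but plausible numbers (1,500 kg car; my own figures, not from any report):

    # Closing-speed energy vs. energy relative to the ground (illustrative only).
    MPH_TO_MS = 0.44704

    def ke_kj(mass_kg: float, speed_mph: float) -> float:
        v = speed_mph * MPH_TO_MS
        return 0.5 * mass_kg * v ** 2 / 1000.0

    bump = ke_kj(1500, 81 - 80)   # ~0.15 kJ: the 1 mph tap itself is trivial
    road = ke_kj(1500, 80)        # ~960 kJ: what must be shed if the tap makes you lose control
    print(f"bump: {bump:.2f} kJ, relative to the ground: {road:.0f} kJ")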


The difference is if an AI fails to handle a certain situation then it will fail every single time. With humans there is a random element and a small fraction end up failing.


But to an AI, a certain situation has millions of data points; it's very rare for exactly the same situation to recur. Similar types and classes of situations? You betcha. But those are handled based on other data points as well.


Then it's not an AI but just complicated if-statements; if it were more like a real AI, it would learn from the experience. The core premise of AI is that it's learning and improving all the time, not that it does the exact same thing every time.


You don't want that either, because you want to quality-control the changes to behaviour. See, for example, the first-generation Forza drivatars, which learned from other online players unconstrained and became insufferable assholes.


But situations are never "the same". The input (camera images, sensor data) will always be (slightly) different and – in this example – might just lift the confidence of "this is a truck standing on the road" over the threshold.


In practice Tesla's thing only seems to have crashed into a fire truck a couple of times, against what I imagine is a large number of misses.


On Beijing's third ring road one time, I saw a lady being stopped by a cop for some reason (rare, cops never pull over anybody there); she got into her car and promptly rammed into the police car directly in front of her (by accident, I assume). I think she was having a bad day or something.


The argument that Tesla is not fully to blame because people also fail seems fundamentally flawed. A human driver at fault cannot avoid blame by simply asserting that other human drivers also make similar mistakes.


>People crash into parked fire trucks and ambulances and cop cars all the damn time.

Autopilot is supposed to be safer than humans, not equivalent to the worst drivers.


No TMA (truck-mounted attenuator) vehicles? I haven't seen an accident without a TMA here; I think no one is even supposed to go out to the accident before the TMA is in place.


The null experiment is the human driver of that particular Tesla, not an average of all drivers


So if the crash didn't involve "the human driver of that particular Tesla" but instead was an identical incident with a different Tesla driver, we wouldn't be having this conversation? Because that seems highly improbable.

This particular driver isn't what makes the story special or newsworthy. The nature of the incident is. Our choice of statistical cohort should reflect this data selection effect, ie "all Autopilot miles driven," not "all miles driven by <person's name>."


I disagree because we're comparing what this individual driver would have done without a feature advertised as Autopilot.

I'm a partner in an auto insurance company, and most of our no-accident records will stay that way, statistically speaking.

If this driver had a one-in-a-billion-miles chance of crashing and Autopilot (with an inattentive driver, as they all tend to be) had a one-in-a-million-miles chance, then Autopilot is decreasing safety.

At a certain level of safety, Autopilot would make everyone safer, but it's definitely not there yet.


How do you know that this individual driver has a one in a billion miles chance of crashing? By statistics from other human drivers I assume?


I didn't say that I knew. I was only disputing the relevance of defending individual Tesla crashes by saying that Teslas are, on average, less likely to crash when on Autopilot. You can't compare Tesla drivers on Autopilot to all drivers -- you have to compare them to themselves, or you have an extraneous variable.

Fatal human accidents are on the order of 1-10 every billion miles of driving in the US, yes. Tesla Autopilot crashes seem to be much, much, much more common.


Figure 7 depicts the movement of the Tesla and the two lead vehicles in the last 15 seconds before the crash. In the left panel—covering 15 to 8 seconds before impact—the Tesla is accelerating from 9 to 18 mph while following a lead vehicle (red car) at 30 to 46 feet. In the middle panel—covering 7 to 4 seconds before impact—the Tesla is traveling at a constant 21 mph while following a lead vehicle (green car) at a distance decreasing from 148 to 108 feet. In the right panel—covering the last 3 seconds before impact—the Tesla is accelerating from 21 to 31 mph, with no lead vehicle and a forward collision warning half a second before impact.

...

About 0.49 second (490 milliseconds) before the crash, the system detected a stationary object in the Tesla’s path. The forward collision warning activated, displaying a visual warning and sounding an auditory warning to the driver. By the moment of impact, the Tesla had accelerated to a speed of 30.9 mph.

...

AEB did not activate during the event, and data show no driver-applied braking or steering before the crash. Tesla’s AEB is a radar/camera fusion system that is designed for front-to-rear collision mitigation or avoidance. According to the company, the system requires agreement from both the radar and the camera to initiate AEB; complex or unusual vehicle shapes can delay or prevent the system from classifying vehicles as targets or threats. [1]

Notice the last line. The system still has to recognize an obstacle as being visually car-like before braking to avoid hitting it. If it's not recognized as a vehicle, the car will not stop.

This was in January 2018, but it was a 2014 vehicle, which would mean the old Mobileye vision based collision system, right? We know Mobileye works that way, trying to draw boxes around car rear ends, because people have bought them and made videos showing what they see. Did that get fixed? Can it be fixed with a vision-based system? This is the big weakness of a no-LIDAR system.

Here, the driver had 3 seconds to react, because the car ahead turned out of the lane to evade the fire truck well before reaching it. Compare this 2017 video of a Tesla hitting a construction barrier. The car ahead didn't turn until just before the barrier, only giving the driver under 1 second to react.[2] Same failure to detect a lane obstruction, but not enough time for the driver to do anything about it.

[1] https://www.ntsb.gov/investigations/AccidentReports/Reports/...

[2] https://www.youtube.com/watch?v=VTdcWnGnnJQ


It only highlights how incredibly dumb these autopilots are. They’re incredibly diligent but completely fail at the necessary skill of understanding.

What is the point of any of this if the car, with all of its sensors, knowingly plows into a stationary red 30-ton obstacle? It's something that a 4-year-old human would understand instantly as an object to be avoided at all costs.

I really think folks don’t realize how critical all of our human understanding is to driving.


Except humans crash into stationary vehicles in lane all the damn time. Most highway pile-ups are a simple crash that lots of folks drive into the back of. Why's it hard?

A vehicle coming from a side road has a lot of movement within your field of vision. When driving on a highway we're well used to the fact and expectation that stuff isn't stopped. When the vehicle ahead emergency brakes or stops you have the growth of size of the target to gauge by. That's not much movement, until the last moment when it's far too late to evade. Some stationary debris is much easier - it's not meant to be there at all - you transition to evasion right from the get go.

For a human in a car, the dangerous highway case is the stationary vehicle in the fast lane, maybe one or two vehicles ahead. When flying it's the other aircraft that's on a collision course. Neither will move within your field of vision. Neither is easy to detect as problem, until it's very late.

FWIW 4 year old humans often run into things. :)


> Why's it hard?

That humans get distracted and often make mistakes is no surprise, it should also be obvious that the amount of risky behavior is correlated to the amount of _perceived_ risk in any given situation, which is probably why so many are "a simple crash."

It is, however, somewhat scary that a machine that has nothing else to do other than to drive the car as safely as possible still makes these "dumb" mistakes.


Adaptive cruise control makes dumb mistakes.

Cruise control makes dumb mistakes.

I know people who think automatic transmissions make people worse drivers.


> I know people who think automatic transmissions make people worse drivers.

Well, it's certainly a bit trickier to respond to texts or browse Instagram when you need two hands to operate the vehicle.


You would probably be surprised what a person with 30+ years behind the stick can do while operating a manual transmission vehicle.


There are indeed statistics showing that standard transmissions actually correlate positively with driver attentiveness. I will try to find a link...


https://journals.sagepub.com/doi/abs/10.1177/108705470628810...

Well, that's half of the problematic drivers, and I suspect texters as well. /s


> Except humans crash into stationary vehicles in lane all the damn time. Most highway pile-ups are a simple crash that lots of folks drive into the back of. Why's it hard?

Humans get inattentive and distracted, and humans follow too close for current conditions and speed. There's no excuse for either of those factors to apply to an autopilot system. If they do, it's badly designed.


By the time emergency vehicles are on scene, traffic has slowed. It's not like you're driving unimpeded at 60mph until "Oh crap, stationary fire engine with lights on immediately ahead of me".

> Except humans crash into stationary vehicles in lane all the damn time.

And Tesla's argument is that Autopilot is far safer than humans.


https://www.tesla.com/en_CA/VehicleSafetyReport?redirect=no

In the 2nd quarter, we registered one accident for every 3.27 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 2.19 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.41 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 498,000 miles.*

*Note: Since we released our last quarterly safety report, NHTSA has released new data, which we’ve referenced in this quarter’s report.

Whatever the reason (maybe it's better road conditions), when you use Autopilot you are less likely to crash.


It's still misleading, because the miles driven with Autopilot are only a subset of total possible miles, and specifically are a _smaller_ subset of total possible miles than "normal driving".

To wit, if Autopilot is unavailable or disengaged because of poor weather/road conditions, a human still has to drive those miles. Autopilot gets the _best_ conditions to show its abilities. Humans get all conditions to show their abilities.

The question is "how did human drivers do driving the same miles, at the same times, as Autopilot", which is impossible to calculate.


As I said, whatever the reason, there are fewer accidents with Autopilot than without. Autopilot is a convenience feature.

The question is: are you willing to accept only a 33% lower likelihood of an accident with Autopilot engaged, compared to when you drive on your own in conditions that aren't Autopilot-safe?

Do you use Uber knowing that their average driver will be ... well ... average, or just somewhat better than average? Knowing that the average driver baseline is 500 000 miles per accident?

https://www.technologyreview.com/f/612346/uber-and-lyft-are-...


He has a point, right? There's a possibility of clear sampling bias: if there are X miles that are safe and Y miles that are dangerous and autopilot can only be engaged in the X miles, then it's not creating safety, but the partitioning will cause it to appear to be safer than the human. If the partitioning had been done without autopilot the two partitions may show similar behaviour.
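
A toy simulation of that partitioning effect (every number below is invented purely to illustrate the selection bias, not taken from Tesla's report):

    # Invented crash rates and mileages to show how mile selection skews the comparison.
    easy_rate, hard_rate = 0.2, 1.5        # crashes per million miles
    easy_miles, hard_miles = 700, 300      # millions of miles in each bucket

    # Humans drive every mile; this hypothetical autopilot only ever gets the easy ones.
    human_rate = (easy_rate * easy_miles + hard_rate * hard_miles) / (easy_miles + hard_miles)
    autopilot_rate = easy_rate             # identical skill, easier miles

    print(f"human overall: {human_rate:.2f} per million miles")    # 0.59
    print(f"autopilot:     {autopilot_rate:.2f} per million miles")  # 0.20
    # The autopilot looks ~3x safer even though, mile for mile, it is no better at all.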


For an analogy, you've got two surgeons, one who only takes cases where he is extremely confident he can operate successfully, and one who takes the cases no one else will. The first loses 1% of his patients on the table, the second 20%. Which person is the better surgeon?


You have to compare the same miles, it's not fair to compare highway driven miles to city driven miles.


Most highway pile ups are a result of drivers following too closely and therefore being unable to stop without hitting the brand-new stationary obstacle where the car they were following used to be.


Quite, but they are still driving into stationary traffic. Nor is it only the immediately following vehicles - it's far from uncommon for vehicles minutes behind to crash.

None of which changes the difficulty in detecting a stationary vehicle, that's in your path, as quickly as you would like.


> When the vehicle ahead emergency brakes or stops you have the growth of size of the target to gauge by. That's not much movement, until the last moment when it's far too late to evade.

You see the brake lights come on, and as long as you're alert and keep some distance you're going to be alright. If you're not alert and/or driving too close, you hit it.


We need to look at driverless cars in a different way, and I suspect this is why it's going to be hard to accept driverless tech for longer than it needs to be.

Computers look at the world in a different way. So they are going to have accidents that seem very obvious to humans making people go "look how bad driverless is because a human would never...". Many people will take that as a reason to avoid driverless tech.

Yet driverless will (or should) avoid many more accidents humans would have made if they were in control.

Because of this we need to look at the overall statistics. If we only look at specific driverless accidents through a human lens, it's going to be easy for non-lateral thinkers to get caught up in criticising, but that doesn't actually weigh the benefits or lack thereof.


I can't speak for others, but while this is true in aggregate, I'm not personally willing to stake my life on a system that can, on a 1-in-a-million event and with no warning, accelerate into a fatal collision. An otherwise 100% accident-free driving record matters not when I'm dead.

The failure modes of these systems need to be much more gentle: when I'm confronted with an uncertainty while driving, I slow down and pay more attention. I don't speed up until I'm confident that a higher speed is reasonable. This is far far more sophisticated than simply "can I see something in front of me" -- I'm evaluating the probability that I need to stop based on the events I can see happening. If I see a car in front of me slow down and swerve out of the way of something, that means something much more than "obstacle detected" -> "obstacle removed".


> while this is true in aggregate, I'm not personally willing to stake my life on a system

Why not? Statistically, you're safer with an autonomous system. Unless you're sure you're a much better driver than average, your position is irrational.


Only because the autonomous system restricts itself to drive in safer than average conditions. Stopping for a stationary object is something humans do dozens of times per journey in city traffic.


And the lawyers will become very wealthy putting these companies into bankruptcy... as they should.


Why should they?


Because self driving car companies are putting a dangerous product onto the road, citing “safer than human drivers” in a disingenuous manner. Musk likes to cite autopilot as safer than human drivers, based upon accidents per miles driven. However, where are autopilot miles primarily being driven? On the highway. I expect a sober human driver to have a much lower accident rate than Tesla’s autopilot when driving the same highway miles. And yet Musk pushes his agenda... which is possibly an overreach. When I see cars plowing into the rear of fire trucks, into freeway split barriers, into tow trucks or under turning semis, I have to ask myself if I would have been able to avoid those, and I can honestly say that I would have. Would a drunken high schooler? Probably not.

I simply don’t believe the “safer than humans” narrative is being honest with the number of caveats.

Until self driving car executives are willing to send their children into the street in a real life game of frogger with their cars being driven by a bunch of drunken frat boys in a blizzard, they shouldn’t expect the general public to blindly agree to be their guinea pigs either.


> I have to ask myself if I would have been able to avoid those, and I can honestly say that I would have.

This is not the whole equation. The question is also: would you be able to react to all the accidents that Autopilot is avoiding? Are you 100% diligent ALL the time? Can you react in 200 ms ALL the time to someone else's stupidity? Last-minute braking for the exit, veering into your lane, speeding through a crossroad on red, etc.?

https://www.youtube.com/watch?v=P68XGJticuo

https://www.youtube.com/watch?v=FrJ2uPRRtz0

https://www.youtube.com/watch?v=QVdTAwU07Jc

https://www.youtube.com/watch?v=9cFJI6Qf9GA

> I expect a sober human driver to have a much lower accident rate than Tesla’s autopilot when driving the same highway miles.

This is based on what?

What about Tesla miles without active safety features vs. Tesla miles with active safety features?

Based on Tesla's safety report, just having the active safety features enabled, without Autopilot, decreases the risk of collision by 36%. As far as I know, both numbers include non-highway miles.

"For those driving without Autopilot but with our active safety features, we registered one accident for every 2.19 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.41 million miles driven."

https://www.tesla.com/en_CA/VehicleSafetyReport?redirect=no


I’m not the only one questioning these claims:

https://qz.com/1414132/teslas-first-accident-report-claims-i...

Tesla's autopilot miles are primarily driven on highways (in low-speed, bumper-to-bumper Bay Area/LA traffic), in new cars (with well-engineered safety cages and multiple airbag systems) which are well maintained (Tesla owners are unlikely to be driving on bald tires and experiencing blowouts) and which are unlikely to roll over (one of the most fatal accident types) due to battery weight distribution. The car, without autopilot, is much less likely to experience fatal accidents. Couple that with the type of autopilot miles driven, and I expect sober human drivers in the 30-69 age bracket would have been less likely to experience accidents in those same miles.

I expect Tesla is intentionally trying to cloud the statistics to build the public narrative that self-driving cars are safer. I believe that Tesla truly expects them to be safer eventually, but they're smart enough to have the numbers on equivalent human driver crash rates... I'm sure that data was compiled by a very energetic Tesla engineer, put into a PowerPoint deck and emailed around. Perfect for a lawyer to subpoena.... and prove that Tesla knew their product wasn't.....

And Tesla isn’t the only self driving aspirational car company pushing the “self driving cars are safer narrative”. Even Waymo (or maybe the media) claims their vehicles are involved in fewer accidents per mile... but they’re not self driven cars. They’re self driving cars with a human overriding the car’s decision every ~10k miles. Human drivers have an accident approximately every 500k miles (eliminating the under 25 and over 70 age bracket, and non-sober drivers - I expect that number to be closer to an accident per 500k-1M miles). So, a self driving car, without a human, would have been in some type of bad situation 50 times in those 500k miles a human would have driven with just one accident. My suspicion is that in a high fraction of those 50 disengagements, there would have been an accident had a human not intervened. My real suspicion is that self driving cars without human supervision are 50-100x worse than humans.
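
Spelling out the arithmetic behind that suspicion (round numbers from this comment; the fraction of disengagements that would actually have been crashes is a pure guess, labelled as such):

    # Back-of-the-envelope for the disengagement comparison (assumed round numbers).
    human_miles_per_accident = 500_000   # rough human baseline used above
    miles_per_disengagement = 10_000     # rough Waymo-style figure used above

    interventions = human_miles_per_accident / miles_per_disengagement  # 50 per 500k miles
    crash_fraction = 1.0   # guessed: every intervention would otherwise have been a crash

    ratio = interventions * crash_fraction  # vs. ~1 human accident over the same 500k miles
    print(f"~{ratio:.0f}x the human accident rate under these assumptions")
    # ~50x: the low end of the 50-100x suspicion; a smaller crash_fraction shrinks it proportionally.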

Lawyers are the only ones who are paid to figure out if my statistical assumptions are correct and to bring them to light.

And yet, I drive with Tesla Autopilot every day. Why? Because I believe that on a portion of my commute, it’s absolutely a better driver than I am. That portion of my commute is straight high speed traffic, periodically interrupted by people slamming on the brakes. It’s boring and mentally draining at the same time. I wish everyone on that portion of the freeway was driving with Tesla autopilot, as accidents are the major cause of my commute delays. Those 20 miles of new, well maintained, straight, properly marked freeway miles, driven during perfect weather, not into the sun, are ideal for Tesla autopilot. Nobody should take those ideal miles and extrapolate them to the rest of my much more dangerous commute.


"And yet, I drive with Tesla Autopilot every day. Why?"

I think we generally agree on our understanding.

See my other answer:

As I said, whatever the reason, there are fewer accidents with Autopilot than without. Autopilot is a convenience feature.

The question is: are you willing to accept only a 33% lower likelihood of an accident with Autopilot engaged, compared to when you drive on your own in conditions that aren't Autopilot-safe?

------

"My real suspicion is that self driving cars without human supervision are 50-100x worse than humans."

Yes, Waymo in their fully mapped safe haven may be around that.

Teslas in cities are probably more like 1000-100000x worse than humans; I don't think they would be able to drive even 100 km in random city traffic, like a taxi, without an accident. They would probably be super eager to get rid of the steering wheel if they thought Teslas were unequivocally better, all the time, no matter the conditions. Maybe the next iteration will improve things 10 times, but they are a long way from self-driving taxis.


I believe that the select miles I drive with autopilot are the only miles which are actually safer than me driving.


> However, where are autopilot miles primarily being driven? On the highway.

Precisely. Autopilot has the luxury of (and need for) disengagement in suboptimal conditions. Human drivers don't have that, so Autopilot stats self-select for ideal miles. Of course, we don't have metrics as concrete for "how humans drive on clear, dry, well-marked highways" to the exclusion of crappy conditions.


While "human understanding" would be nice, detecting big solid obstacles is a solved problem. It just costs money. Until somebody orders enough LIDAR units to get the cost down.

Tesla's "autopilot" collisions are all very similar. They're all "hit big stationary obstacle that wasn't the rear end of a car". So far, a fire truck, a street sweeper, a stalled car at a freeway median, a crossing semitrailer, and two freeway barriers. This is a well-defined defect.


You do not need LIDAR to recognize a stopped vehicle; higher-powered visual processing will suffice.


You sound like Elon. LIDAR is far superior to radar in a lot of cases. The best situation would be for Teslas to use a mix of both radar and LIDAR.


This is completely ridiculous. Adults drive into stationary objects all the time. 4-year-olds walk into stationary objects all the time.

As far as I'm concerned, there is only one criterion that matters when evaluating self driving cars: is it better than human drivers?

The fact that we are still discussing this _one_ accident that happened _a year ago_ indicates to me that the answer is a resounding yes.


Well, we could also talk about the time the Tesla accelerated into a gore point when the lead car followed the lane to the overpass. Or the couple of times a Tesla drove into a semi trailer. Or the several other stationary emergency vehicles they've hit. At least the semis were moving.

We don't really know how well the Tesla automation does vs human drivers, because the statistics are complex, and Tesla only gives us deceptive summaries.

The capabilities that Tesla cars have aren't that much different than other luxury cars, but the other manufacturers don't have NTSB investigations, because their marketing isn't convincing people the car can drive itself.


> The capabilities that Tesla cars have aren't that much different than other luxury cars, ... because their marketing

It's actually somewhat impressive, if frustrating, how well Tesla's marketing works.

In a previous article here a couple of weeks ago, a Tesla owner was touting "how much better" Tesla's blind spot monitoring was than "other manufacturers", because it took speed differentials into account, while my 2015 Audi A4 did exactly the same thing. If you were faster than someone in your blind spot, it wouldn't activate. If speeds were similar, it'd activate at close range. If they were vastly outpacing you (I'd see it when adjacent to a HOV lane), it'd activate well within a safety distance to factor in speed differential.

While there are "dumb" Blind Spot sensors, multiple Tesla owners were of the fervent belief that this was an example of how unique Tesla's technology is.


Even 2011 models had it, and I am sure that wasn't the first year it was available either.


> Adults drive into stationary objects all the time

I agree. But do they when alert and focused and when they clearly see the object? I highly doubt it.

This Tesla was all of those things, and still crashed.


That's a non-sequitur unless the average driver the autopilot is replacing is alert and focused.


Or you could have only the automatic emergency braking in a vehicle, and not autosteer.

Stop acting like it's all or nothing. The technology behind Autopilot is being touted as driverless/self-driving, but it can do passive accident avoidance. That should be the benchmark to compare to.

In this particular case, humans get distracted and get in an accident. Autopilot is dumb as shit and gets in an accident. But a human with an emergency braking system is smart and doesn't get in an accident on the account of driver distraction.


There are accidents caused by cruise control.

That's your Level 1 example. Yet Level 1 systems are not banned.


I didn't recommend banning. But to stop saying "it's safer than an average driver".


There are many Tesla Autopilot accidents under investigation, and their relative infrequency is due to the fact that a.) accidents themselves are rare, and b.) Autopilot (mercifully) has yet to see widespread deployment in the U.S. vehicle fleet.


> As far as I'm concerned, there is only one criterion that matters when evaluating self driving cars: is it better than human drivers?

By that measure, no tech company has contributed more to self-driving than Facebook Inc, masters of distraction. Relative safety isn't a good metric.


What's a better metric? When deciding whether self driving cars is something to encourage or discourage, the obvious question to ask is: Does this improve safety?

You can make moralistic arguments about why you don't like this or that, but at the end of the day you are causing unnecessary deaths when you choose to suppress a system which is better than human drivers.


> is it better than human drivers?

The bar is much higher than that. People accept it when someone is killed because another human suddenly became ill or briefly wasn't paying attention. We accept human error. We do NOT accept machine error.

So the criterion is that autopilots should avoid a lot of accidents that humans risk getting into (e.g. lack of attention) but it must not be involved in ANY accidents that a human paying attention would get into.

In the end, no one really cares about the overall safety or the statistics of it all when it comes to acceptance of new systems. What matters for public perception (and so, in the end, legislation) will be an almost impossible bar for autonomous vehicles, because they don't have the benefit of acceptable human error that we give flesh drivers.


I agree with your thesis that facts don't matter when forming public opinion. Feelings and perceptions matter much more.

I do disagree about the mechanism. I don't find that humans are very generous to one another - public opinion very often assumes the worst and takes the least generous interpretation. People are more than happy to vilify a reckless driver who caused a death.

I think the mechanism here is media coverage. We are still seeing articles pushed out by media corporations about this one accident from a year ago. Meanwhile, human drivers kill thousands of people a day and you don't hear a peep about it. So naturally public opinion becomes scared of self driving cars and ignorant of the real issue.

All of that is really tangential to my point, though. My point is that "feelings over facts" is grossly immoral when ignoring facts causes tens of thousands of preventable deaths each year.


>The fact that we are still discussing this _one_ accident that happened _a year ago_ indicates to me that the answer is a resounding yes.

It takes a long time to do an investigation. This is one of a number of incidents. You should try actually reading facts instead of Elon's Twitter.


Not all the time, only when not paying attention. This system was paying attention and still ran into obstacles. Can you imagine a human driver looking at something on the road and going, "eh, I don't know what this is, so I probably should not brake"?


...Also, that urban environments are the _last_ places these should be deployed. There is just too much everyday weirdness.

Hell, I get confused sometimes in driving SF - flakey drivers, aggressively solipsistic homeless folks crossing (or just standing around in) the street randomly, bicycles, shopping carts, dogs, kids...

I want to see the robot death boxes competently handle less fraught environments before allowing them around humans.

And industry suggestions that the humans not in robot death boxes cede the streets to those who are? Ah, no. Cars own practically everywhere else - you can go play with your toy on the freeway or drive the damn car yourself around other humans.


Sounds like dumb policy to me. Why wouldn’t it stop for a boulder or fallen tree?


/devils advocate

False positives are dangerous. Trying to detect ‘Anything’ may result in more deaths than hitting the tiny number of exceptions.

Alternatively, the system would be overly flaky and hard-brake all the time.


Slowing down is a lot safer than plowing into a stationary object at high speed. This is so obvious there must be something missing to the conversation.


Self driving cars are going to be driven a lot, so something that’s a 1/10,000 risk that happens every week to 1 million cars is going to happen every day.

Self-driving cars need to have a threshold before hard braking. Hard braking is risky; it can cause serious accidents. Reducing that threshold thus increases some risks even if it lowers others.

So, now we need to balance a risk of say ~1/10,000,000 per car per year vs what? I don’t know what changing those thresholds would result in, but I do know it’s not free and there would be false positives.


Everything you wrote is plausible or true, but the fact remains slowing to a stop is the best choice unless the obstruction is tiny. That’s what a human would do. There might be a possibility of changing lanes but that is next level.


How often are boulders in the road?

As software engineers, we can understand the hidden costs in complexity and bugs when implementing features that don't need to be implemented. I could imagine false positives for boulder avoidance causing more harm than false negatives.

Obviously it should be smart enough to detect any and every obstruction, but I'd rather the 99% use case get ironed out first.


> How often are boulders in the road?

As someone who has driven all over the West: all the time.

You have New Mexico and Arizona, which believe that steep, inclined cliff faces on both sides of the highway are a fashion statement; states like California, Washington, and Oregon that like to build new properties on rapidly eroding mountainsides; and states like Nevada that simply don't patrol sparsely populated highways -- and are simply ignorant of the boulder some kids rolled into the street some weeks back.

All anecdotal, but I'm a fairly heavy driver and I see them commonly.

I agree that false positives are to be avoided -- but that doesn't really lessen the value of boulder avoidance; it just brings up yet another point to worry about with the system as a whole.


Because false positives would cause dangerous accidents as well.

And worse (heavy cynicism ahead): they would make the car look stupid even when not causing an accident, whereas boldly plowing into whatever might or might not be an obstacle makes the car look strong and confident, right until it looks all bent and crumpled. Car sales are incredibly marketing-driven, and a self-driving implementation that routinely chickens out of ambiguities that look easily solvable to humans would be a worse marketing problem than twice the number of accident investigations Tesla is facing.


Because it's a Level 2 system, and it doesn't actually know how to drive a car.

Similar to how cruise control (Level 1) doesn't know how to drive a car. Yet people are willing to put up with the concept that it causes some accidents.


Doesn’t need to. It needs to slow to a stop when approaching a large stationary object, car or not.


That's a level 3 requirement, not level 2.


It’s a simple requirement, I don’t care about the number. In fact it is the only feature I want out of such systems.


Ah. Well, no one offers that feature currently. In fact, are you aware that a typical automatic emergency braking system will always hit the thing in front of you if you're going faster than 45 mph? That is, if it sees it at all.


Exactly. This is what I always point out to autonomous-car people: driving a car cannot be modeled by ML all that well, and a simple fire truck parked in the lane makes it obvious.


Tesla 3 owner here...

There is supposition among some owners that another wreck involving a Tesla, where it went under a semi trailer that was crossing the road, happened because it failed to recognize the trailer as another vehicle and instead thought it saw a bridge. Which is fun, because at times the car certainly did not like some bridges and would brake for them. There was also a recent article about how changes in the color of pavement surfaces can cause the Summon feature to pause until it sorts out what it is up against.

I have zero issues using Autopilot, but I never fully trust it. If anything, on long trips I use it to keep an eye on me. I can zone out, as I figure many do on long monotonous drives, but it does not. So it is my back-seat driver.

What it all boils down to is that this is damn hard, and there needs to be more regulation in this space, especially for any system which won't require a human behind the wheel.

On another Tesla issue, they should be damn ashamed that their Bluetooth support for smartphones does not include enough control over music to let you manipulate playlists, artist selection, and such from the car; instead you have to use the phone, which is illegal in many states.


It may not be fixed. My 2019 Subaru, also a camera-based system, does exactly the same thing (accelerates toward stationary traffic when the car in front changes lanes). And Car and Driver found, at least with the 2018 Model S, that it performed somewhat worse than the Subaru at avoiding stationary objects.


I use the automatic cruise control on my 2018 Subaru pretty much 100% of the time I'm on a highway, and I've never noticed this problem.

The only limitation like this that I've noticed is that the range isn't long enough for me to trust it to slow down if I'm traveling at highway speed and there's a stationary vehicle far ahead. Is this what you are referring to? Because I don't think that's the same problem. If I'm not traveling at highway speed in the same situation, I'll let the cruise control do its thing, and as soon as the stopped vehicle is in range, it will start to slow down.


The driver was found to have been using his phone, over-relied on the Autopilot system, and lied about what he was doing when questioned. Sadly, we can't 100% trust these automated driving systems yet, so folks need to stay attentive behind the wheel. More accidents like this will cause lawmakers to create laws that can potentially slow down automated driving development.


The best way to stay attentive behind the wheel is to actually be the one driving. There's virtually no way of being both attentive and passive for long periods of time. If a driver can't check their email or whatever on their phone while autopilot is on, then autopilot is not safe to put in cars. And while I'm okay with not-exactly-safe for most things people willingly consume or use, driving is not an area where it's okay to roll out a feature that may cause people to stop paying attention to the road.


>If a driver can't check their email or whatever on their phone while autopilot is on, then autopilot is not safe to put in cars.

Have you ever looked at other drivers on the highway? You see people texting, emailing, eating, shaving, doing their makeup, reading a book, the list goes on and on. If there is an activity you can do while seated, odds are people are doing it while driving.

That is one of the flaws with a lot of the complaints about Autopilot. The question isn't whether Autopilot is safer than an attentive driver. It is whether it is safer than the average driver and the average driver is not attentive 100% of the time. And there simply isn't enough data available in the public to say whether a Tesla with Autopilot is or is not safer than the average driver.


The problem is that autopilot might make people even more complacent.


> There's virtually no way of being both attentive and passive for long periods of time.

How do aircraft pilots manage it?


Unlike a car, a plane at cruising altitude doesn't immediately crash into the one in front if the pilot doesn't pay attention for half a second, and despite their far higher absolute speed, actions performed when flying have a much lower relative speed.

For example, in a car, several seconds of following distance is normal and following another car that closely is common. In a plane, which can move in 3 dimensions, horizontal separation is measured in minutes and it's not that common to be following immediately behind another plane at the same speed: https://en.wikipedia.org/wiki/Separation_(aeronautics)#Horiz...


In cruise, it's extremely rare to be within 3 miles of another aircraft laterally and 1000' vertically.

Takeoff, initial climb, descent, and approach to landing are high workload times, easily holding the attention of the crew.

Cruise is much more relaxed, with much less need for hair-trigger responses. My aircraft has audio and visual cues if an aircraft breaches roughly a 2 nm boundary and is within 1000' vertically. Other than that, it's systems monitoring, weather monitoring, keeping up on the nav progress vs. fuel remaining, and making sure the automation isn't trying to kill you, either subtly or acutely.


There are lots of differences, but one is that situations requiring aircraft pilot intervention often have a timeline measured in minutes. The recommended procedure for handling an emergency situation is to pull the checklist for that situation before starting to ensure you don't miss anything. Even urgent situations, like a TCAS alert, will often give 30 seconds warning. By contrast, this driver had roughly 2 seconds to notice that the autopilot was silently failing to handle the situation and to apply the brakes to prevent a collision.


Extensive training, restriction of hours worked, health restrictions (pretty sure my ADHD would prevent me from being a commercial pilot)...

Also, my understanding is that aircraft autopilot systems are designed to allow for pilot response to take up to three seconds. Three seconds on the highway is a lifetime.


Car driving also has health preconditions.


Airplanes are 10km up when cruising. Almost any error a pilot makes has minutes of time to be corrected. Contrast with 1 second while driving. Take off and landing are far more involved and take 15 minutes each (approx).


Aircraft on most kinds of autopilot will crash into mountains without any intervention. They are mostly completely blind.


They take shifts.


I've flown "right seat" (not pilot) a fair number of times. Air traffic control interacts with the pilot quite often and at fairly unpredictable times. This requires the pilot to be alert and respond promptly to a request or direction from air traffic control. If the pilot doesn't respond promptly, air traffic control repeats the request with more urgency and goes into the "lost communications" procedure if there is no response.

tl;dr: aircraft pilots generally must be attentive. Tesla drivers on "auto pilot" - not so much.


It is certainly okay when the rates at which these things happen is lower than the rates at which non-self driving cars have accidents.

A 0% accident rate is unrealistic and unfair.


Statistics makes for a tough sell to the survivors.

“Yes, this autonomous vehicle killed your son who just happened to be in the wrong place at the wrong time, but someone else’s son would have died in an unrelated accident somewhere else were it not for autonomy.”


You absolutely can't challenge someone's personal experience with statistics, but something about this line of argument still troubles me.

I've lost a handful of close friends to car accidents. If survivorship grants standing, then I'm for doing almost anything to reduce the tens of thousands of US road deaths each year.

Survivors have been angry at safety tech before, but we generally ignore them. I've known people who insisted they would rather be thrown from an accident; they wouldn't wear a belt. One friend came to that conclusion because his uncle died trapped in a burning vehicle. I respect the incredibly complex and deep emotional reasons that he came to his conclusion, but he was still wrong, and he shouldn't set seatbelt policy.

ON THE OTHER HAND...

We sometimes throw around this baseline of the natural accident rate of drivers, and that's slightly lying with statistics. Accidents are not evenly distributed across all drivers.

You have a lot of impaired drivers in that pack. You have a bunch of insolvent, uninsurable drivers who cause a disproportionate share of accidents.

If the worst 10% have most of the accidents, then even a car that makes us safer than the mean driver could still make 90% of us less safe. We really want a car that makes us safer than our percentile of drivers.
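
A toy version of that percentile argument (the distribution is invented; only the shape of the conclusion matters):

    # Invented split: the worst 10% of drivers cause 60% of accidents.
    fleet_mean_rate = 5.0          # accidents per billion miles, fleet-wide average
    worst_share = 0.60             # share of accidents caused by the worst drivers
    worst_drivers = 0.10           # fraction of drivers (and miles) they represent

    typical_rate = fleet_mean_rate * (1 - worst_share) / (1 - worst_drivers)  # ~2.2
    print(f"mean driver:        {fleet_mean_rate:.1f} per billion miles")
    print(f"careful 90% driver: {typical_rate:.1f} per billion miles")
    # A car that merely matches the fleet mean roughly doubles the risk for the careful 90%,
    # while dramatically improving it for the worst 10%.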

Maybe driverless cars are already better than the mean. Are they better than the 90th %ile? I don't know that anyone has enough data to say.

The most cautious drivers really will have an at-fault accident rate statistically indistinguishable from zero. We should absolutely be shooting for that.


People understand drunk drivers. They understand distracted drivers. They’re angry, rightfully so, but it’s a known problem and easy to comprehend. We can make laws tougher and feel good about it, even if it doesn’t help much.

It’s harder to emotionally cope with, I speculate, when someone you love is killed by a software bug, or by machine learning where no one can precisely explain why it made a certain choice.

I’m not saying I’m against autonomy. I just don’t know how to prevent it being caught in a backlash when people who didn’t have to die, do.


> but it’s a known problem and easy to comprehend... It's harder emotionally... when someone you love is killed by a software bug

Sure, I understand how a poorly designed steel guardrail can ride up the front of a vehicle and crush the chest of someone I had hugged a week earlier.

My sense is that understanding does not lessen the pain.

You seem to basically be trying to explain how hypothetical people that you are imagining could be experiencing pain that is more salient than mine.

I don't see how this takes us anywhere good.


>We sometimes throw around this baseline of the natural accident rate of drivers, and that's slightly lying with statistics. Accidents are not evenly distributed across all drivers.

Not only drivers, but also roads. I would imagine most people using Autopilot use it more on highways than on city streets. The rate of accidents per kilometer is obviously much higher in the city where you have more complex situations and slower speeds. So when comparing autopilot failures we need to take into account not only miles driven, but where they are driven.
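
A quick back-of-the-envelope example of how much the road mix alone can distort a per-mile comparison (all rates and mile splits here are assumed, not real data):

    # Hypothetical per-road-type rates, purely to illustrate the mix effect.
    highway_rate = 1.0   # crashes per million miles on highways (assumed)
    city_rate    = 4.0   # crashes per million miles on city streets (assumed)

    fleet_rate = 0.40 * highway_rate + 0.60 * city_rate  # 2.8, assuming a 40/60 mile split
    ap_rate    = 0.95 * highway_rate + 0.05 * city_rate  # 1.15, assuming 95% highway miles

    print(fleet_rate / ap_rate)  # ~2.4
    # A system no safer than an average human on either road type would
    # still look roughly 2.4x "safer" than the whole fleet, purely
    # because of where its miles are driven.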


> If the worst 10% have most of the accidents, then even a car that makes us safer than the mean driver could still make 90% of us less safe. We really want a car that makes us safer than our percentile of drivers.

The standard shouldn't really vary per-person.

If it's as good as a 50th percentile driver, it's good enough.

If it's as good as a 20th percentile driver, that's also probably fine.

You're right not to use the mean at all, though.


> 50th percentile driver, it's good enough.

The problem is that if there's any correlation between safe driving and early adoption, a median-quality solution could still lower overall road safety.

For example, teenagers are generally more dangerous drivers than most; per-mile safety actually peaks around age 60. And 60-year-olds are more likely to be at peak earnings and more able to afford the latest car features.

You're probably right at 20%, though? This all depends on the curve.

And there's a herd-immunity effect: once most people are driverless, you can probably be less strict, as roads become less hazardous overall.

We'd honestly probably make the biggest dent tomorrow by just finding the worst ten percent of drivers and giving them free uber/lyft for life. That would probably save us money as a society, money we could use on driverless tech.

Those aren't teenagers. Judging anecdotally from my time as an attorney, those are serially unemployed middle aged males with revoked licenses who somehow still own an old heavy truck and drive it on the sly with a bottle of something in the glove box.

If you read through case law that profile is weirdly common.


Ambulances save some number of lives. Ambulances are involved in some number of fatal accidents.

Should ambulances be banned? Subsidized? I can imagine different ways of analyzing the question, but asking "what would you say to the families of those who die" is a nonstarter. There's nothing you can say that will be satisfying, or ease their pain.

Statistics may be cold, but they're the only tool that actually lets us answer questions like "would fewer people die with or without this change?"


On the other hand, people know who’s been saved by an ambulance. We’ll never know who was saved by the existence of autonomous vehicles.

I’m just pointing out there’s a PR problem that may prove tough to manage.


Striving for 0 accidents and only accepting 0 accidents are two very different things...


Assume for a moment that vaccines could actually cause autism at a small rate.

Your argument applies there. "Yes your son became autistic, but someone else's son somewhere else didn't get measles and die."

I understand there are human emotions involved, but my opinion is that significantly reducing death count is still a better outcome. If we can't accept that noticeably better systems will still have faults, and that those faults will result in a different, but smaller number of deaths, then we're content asking for more deaths than necessary.


You don’t need to assume. With any medical procedure, there is risk involved. Including routine vaccination. That’s why there is a dedicated program to both report and compensate people who suffer adverse reactions from vaccines. See https://www.hrsa.gov/vaccine-compensation/index.html


> It is certainly okay when the rate at which these things happen is lower than the rate at which non-self-driving cars have accidents.

Eh, Tesla likes to point to this statistic every time the safety of Autopilot comes into question, but it is an apples-to-oranges comparison because AP is only used on highways where traffic incidents are already far less common per mile driven.

Detailed data is hard to come by, but more nuanced studies have been far from conclusive in favor of the "Autopilot is safer than human drivers" thesis.


> it is an apples-to-oranges comparison because AP is only used on highways where traffic incidents are already far less common per mile driven.

True, and that kind of flawed argument bugs me. However, there is also data that showed that accident rates went down significantly on the same cars after Tesla first enabled autopilot via a software update. This is much more compelling to me; if the same people driving the same cars in the same areas became less likely to get into an accident as soon as autopilot became available, then I don't see how you can really argue that the feature is not beneficial to safety.


This sounds interesting. Are you able to remember where you might have read this?


I assume he's referring to the NHTSA study which was released on the last day of the Obama administration and purported to show a 40% reduction in crash rates. It's still the highest search result on HN for "Tesla Autopilot."

That study was later shown to have employed sketchy data and egregiously flawed methodology. A deeper dive showed that the crash rates may have actually gone up with Autopilot. HN discussion here: https://news.ycombinator.com/item?id=19127613


Thanks for the link. (No idea why your comment has been down voted, certainly wasn't me).


Happens on every Tesla thread.


The way I remember it, Tesla keeps saying something like 'Autopilot is way safer than all-other-road-vehicles-combined'.

What's the point of comparing Tesla Autopilot-related crashes to crashes involving at least one motorcycle?

I think it's obvious the answer is: deception.


That's one possible answer. If they're trying to make their car seem safer than other cars, it's a deceptive statistic to use.

But when the argument is "using the vehicle this way is a lot safer than the minimum requirements to be on the road", it's a relevant and fair statistic.


I'm not convinced that's the right metric though.

A more fair comparison would be: other drivers of similar demography driving other vehicles of similar age and specifications.


Why, though?

I could buy a car with 3 star crash ratings, and people would be fine with that. I could get in a taxi with a mediocre driver, and people would be fine with that.

Why does the comparison have to be to me driving, in a car like this?

When I'm shopping for a car, it makes sense.

But for public policy discussions, what matters is how the safety ranks against the general population.


The question is whether "Autopilot" is safer than "No Autopilot" ceteris paribus, because if it isn't, then a trivial regulation to make the roads safer with zero downside is to just ban Autopilot.

You can't determine whether Autopilot makes driving safer by comparing miles driven only on highways by Teslas to miles driven across the whole vehicle fleet.


> then a trivial regulation to make the roads safer with zero downside is to just ban Autopilot

That assumes almost everyone using Autopilot would replace it with manual driving in the same car. Some of them might get cheaper cars or take Uber and end up less safe.

And even then, there's lots of stuff you can do to a car that makes it less safe that we don't ban, as long as it's still sufficiently safe.

If anything, increase safety standards. If you want to do something that won't cost extra money then do something to ban oversized cars at the same time...


I get where you’re coming from, and I agree that the needs of public safety policy and self-interest don’t always align.

What I mean to say is, when considering the safety data of any vehicle, it is insufficient to know that the vehicle performs better than the whole fleet.

It’s also necessary to know whether any particular vehicle outperforms others in its class.

Each of those on its own is necessary but not sufficient.


More data is almost always better, but I disagree on what's sufficient. If a specific use case ranks moderately high vs. the entire US fleet, then that single piece of data is sufficient.

The driver that's above average but could be more above average isn't where we need to be fixing things. Not when there are millions of drivers that are either hyper-aggressive or senile still on the roads.


Okay, yeah, that’s fair enough.


If you create a system that by design will cause the driver to lose focus, you are responsible for the driver losing focus.

Saying “we told the users that they must pay attention while they are not driving”, or “we told them that the system called ‘Autopilot’ is not actually able to pilot the car by itself”, or “we advertised the car’s self-driving ability but had documentation saying it doesn’t actually have that ability” doesn’t change that: you are still responsible for drivers of that system behaving exactly as you would predict.

There have been numerous studies showing that people stop paying attention when they are given menial, non-focus-dependent tasks. Saying that the only safe way to use Autopilot is to not be a human is fundamentally the same as saying “the brakes in the car only work if you press them continuously for 5 s or apply 300 lbs of force”.


My experience with autopilot isn't that I lose focus as much as I focus on something else. Instead of focusing on keeping in my lane, not going too fast, and if the car ahead of me is braking, I'm focusing on the road ahead -- is there construction coming up? Is there debris in the road? Is there a wall ahead that gets too close to the road? Is there anything weird up ahead that needs attention? You learn what AP can and can't handle and you keep an eye out for anything out of the ordinary, which on long road trips is significantly less fatiguing and probably safer.


> as much as I focus on something else.

Yes, so people ending up focusing on things like cellphones and email.


There's obviously going to be some of that from people in every sort of car. The hope is that over all, people with AP are safer. I personally feel like I am.


If we want to be fair to consumers and bystanders, Tesla needs their product to safely stop the reckless behavior (not simply warn the user). If it can't do that, it needs to disable AutoPilot. AutoPilot was marketed in a way which led to Tesla selling their product to people with unwittingly dangerous uses in mind.

It is unacceptable for Tesla to enable this reckless driving. Tesla has the ability to affect the scale of this problem most readily and relying on individual consumers is not in the best interests of public health.


Why can't I say literally the exact same thing about all cars?

Let's hold people responsible for their actions, not objects. It's one thing to say "autopilot makes it hard to focus", it's another thing to say "autopilot made me pick up the phone while driving, and lie about it".


Tesla's Auto Pilot website makes claims that other auto manufacturers don't make like:

"With the new Tesla Vision cameras, sensors and computing power, your Tesla will navigate tighter, more complex roads."

There is a lot implied in the short phrase "navigate tighter, more complex roads". It also takes for granted that Tesla's capabilities were sufficient in the first place.

"All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."

Why make potentially misleading and untested future-looking statements when consumer safety should be the focus of a system like this? https://www.tesla.com/autopilot


This might sound like a good idea in principle — in effect, don’t allow people to use the system unless they are more attentive than even most drivers are normally.

The problem is that the natural outcome is that people simply stop using the system and end up less safe because of it, causing more accidents and possibly costing lives.

In the interest of public health, the system has to be better than the average driver, and also not annoyingly unusable.

So there is a balancing act. Tesla will already disable AP if you keep your hands off the wheel for too long. It even locks you out for the rest of the drive as a “penalty box” so you can’t just turn it back on.

As I drive down the highway on AP, the number of cars drifting in their lanes, crossing the lines, changing lanes without signaling, not maintaining proper speed, driving aggressively, driving for an exit at the last minute, panic braking, etc. is actually somewhat mind-blowing. It’s amazing there aren’t more accidents.

Versus the Model 3 which maintains speed and centers in its lane more precisely than I even do manually, for mile after mile.

What’s also remarkable is the constant progression of new features, better feature detection, more complex navigation and route planning which has happened over just the last 12 months. Tesla AP now in 2019 is significantly improved from what it looked like even July 2018, and v10 will just continue the progression.

Tesla safety stats are hard to compare to the general public’s, but what I can say definitively is that a Tesla on AP on the highway is a far more consistent and predictable driver than any human. It’s better to be in the Tesla on AP on the highway, but I imagine it’s also better to be driving behind or beside that Tesla in the same situation, because it’s not driving distracted like almost everyone else on the road.


>Sadly, we can't 100% trust these automated driving systems yet so folks need to stay attentive behind the wheel.

And companies selling the systems need to stop calling them "Autopilot".

>The more accidents like this will cause lawmakers to create laws that can potentially slow down automated driving development.

You'll actually hear this from people working on self-driving at most companies. That they have deliberately been conservative about their roll out and claims because they don't want to destroy the concept of self-driving before they can achieve it. Tesla has done a great job of damaging the reputation of self-driving cars for everyone.


What is the point of a self-driving system where you need to stay attentive behind the wheel?

The physical act of turning a wheel isn't the hard part of driving. Paying attention is.


> The physical act of turning a wheel isn't the hard part of driving. Paying attention is.

As an avid user of Tesla Autopilot I'll disagree with this. On a long road trip, monitoring my surroundings and making sure everything is generally ok is far less fatiguing than making constant microadjustments to steering, speed and following distance.

Even while paying close attention to the ride, long trips (especially in stop-and-go traffic) become significantly less stressful and exhausting.


This isn’t a matter of whether you think you’re paying attention.

It’s whether you actually are - and that’s very different from what you perceive. For example, I would be curious to see how the eye movements of regular Autopilot users differ when they are using Autopilot vs. when they aren’t.

Based on all prior studies of attentiveness I suspect you’d find dramatic reduction in active scanning, etc.

Part of the reason for the many curves in I-280 on the SF peninsula was apparently to ensure drivers had to actually do something while driving. I’m tempted to investigate whether that’s myth or fact, but the nature of the internet means I’ll probably just find that I-280 gave me cancer :)


I've noticed the same thing driving my Chrysler Pacifica, which has lane keeping and adaptive cruise (with FCW). But it's not called "autopilot", and it deactivates if it detects that my hands are off the wheel.


Model S and X owner here. Autopilot will nag you after so many seconds (depending on vehicle speed and autopilot path planning confidence) if it doesn’t detect steering wheel torque, and if you ignore the nags, it brings the vehicle to a stop safely with the hazards on.

Car and Driver tested AEB in several vehicles (Tesla, Toyota, Subaru) and they are all pretty equally bad at it. Figuring out if matter in front of you, while traveling at speed, poses a life critical hazard is hard! It’s better than no AEB, but don’t rely on it entirely or you’re going to end up maimed or dead. As the driver, you are still the final responsible party.


How do you reconcile this with the claims in the accident report that the hands-off alert intervals are measured in minutes for the accident scenario? Quoted here with link to full source:

https://twitter.com/vogon/status/1169358625220882432

Visual indicator after 2 minutes of no detected driver-applied torque to the steering wheel (apparently because they only detect torque - i.e. turning/steering motions - and not pressure?)


>hands-off alert intervals are measured in minutes for the accident scenario

Since January 2018 there have been software updates that reduce the time that the car will allow you to not signal to it that you are paying attention. You now have to apply a little turning force to the wheel much more often than before.


I can’t speak to their vehicle, but we run the latest software revisions on both of ours (both 2018 builds, Autopilot hardware version 2.5), and the nags have never been more than 10-15 seconds apart if I’m not applying sufficient torque. I have never seen a nag take a minute to appear (even in rush-hour traffic with no curves and slow speeds).

The safety system is robust IMHO. If I push the accelerator pedal while autopilot is running, immediate nag. Running at 89 mph (max speed before Autopilot locks you out for the duration of the drive due to too high of a speed)? Nags every 5-10 seconds.

It’s not perfect by any means, but it also expects a responsible, aware driver behind the wheel.
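
For what it's worth, the escalation behavior feels roughly like this from the driver's seat (a sketch with made-up thresholds and names, not Tesla's actual values or code):

    import time

    # Made-up thresholds loosely matching the behavior described above;
    # NOT Tesla's actual logic or numbers.
    NAG_INTERVAL_S   = 15    # nag if no steering-wheel torque for this long
    NAGS_BEFORE_STOP = 3     # ignored nags before forcing a stop
    MAX_AP_SPEED_MPH = 89    # above this, lock Autopilot out for the drive

    def attention_tick(torque_detected, speed_mph, state):
        """One tick of a hypothetical attention-escalation loop."""
        now = time.monotonic()
        if speed_mph > MAX_AP_SPEED_MPH:
            return "lockout_for_rest_of_drive"
        if torque_detected:
            state["last_torque"], state["nags"] = now, 0
            return "ok"
        if now - state["last_torque"] > NAG_INTERVAL_S:
            state["nags"] += 1
            state["last_torque"] = now   # restart the clock after each nag
            if state["nags"] >= NAGS_BEFORE_STOP:
                return "slow_to_stop_with_hazards"
            return "visual_then_audible_nag"
        return "ok"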


> is far less fatiguing than making constant microadjustments to steering, speed and following distance.

Adaptive cruise control and lane assist, available in every modern car under the sun, will make microadjustments to steering, speed, and following distance all day long.

What exactly does autopilot bring to the table?


I also wonder whether “dumber” systems are easier to predict. I’d feel much more comfortable knowing what the car won’t do (change lanes on me, e.g.).

(Update: to be clear, I’m not inherently opposed to a car changing lanes if I need it to, but it seems easier to deal with a potential emergency, and easier to persuade myself to pay more attention, if I know definitively it won’t do something that sophisticated.)


That's all Autopilot is--Tesla's branding for adaptive cruise and lane keeping. It's just as useful as competitors' versions.


Did the sun play a role in this? This picture from the report shows a low sun directly ahead. (Southbound 405 is going SSE in Culver City. 8:40AM accident)

https://imgur.com/a/OaHNqdQ

Assuming it was taken ~1 hour after the accident, the sun would have been lower and farther to the left at the time of the crash. The slightly askew fire truck might have been basically facing the sun, with its prominent rear features in as much shadow as possible. Obviously there were other factors involved (yes, ultimately the driver's fault), but do Tesla's cameras have the dynamic range to look directly into a low sun and still see details in shadows? Originally I thought this was a high-speed accident and am surprised to learn it happened at 20-30 MPH.


If Tesla is unable to brake for a stationary fire truck with 3+ seconds of visibility, why should I be confident to pay $6k for “Full Self Driving”? Recognizing a stopped fire truck (effectively a wall) in front of me is a prerequisite for me to invest further. A part of me believes that the public is being intentionally misled on the capabilities, and Tesla’s lack of transparency, coupled with their financial incentive to obscure, makes me more cynical.

What Tesla needs to do is to classify every known accident, and then recreate the accident (video please) followed by a recreation with their fixes. If the scenario can’t be prevented in software, it needs to be disclosed to the buyer.

Would Tesla crash into that same fire truck again with the newest software? I suspect yes. I even suspect there are a bunch of engineers sitting around Tesla confident that crashing into that fire truck was the right thing to do.

Heck, I think Tesla should be required to have an asterisk next to every self driving claim, with an asterisk for every known scenario where their product has failed, including links to the crash reports, including full video evidence. The public should have a right to know exactly where/when/why the system failed as long as those failures aren’t prevented in software from happening again.


>why should I be confident to pay $6k for “Full Self Driving”

No one should be. Creating a fully self-driving car probably requires creating artificial general intelligence. It will almost certainly not exist for many more decades, if it ever exists. Elon Musk is a con artist.


Why should anyone buy a feature for $6k that basically gets them nothing until a future point when they could buy it anyways?


It will be more expensive later.


The thing is, it almost certainly still won't exist later.


Report...

Highway Accident Brief: Rear-End Collision Between a Car Operating with Advanced Driver Assistance Systems and a Stationary Fire Truck

https://www.ntsb.gov/investigations/AccidentReports/Pages/HA...


Thanks for the link. Interestingly, the NTSB's page appears unable to scroll at all with javascript disabled, but I was still able to view all the content by resizing the viewport(?) in both Firefox and Chrome. A very odd design decision...


This shifting blame to the cloud is a bad trend. It's the driver's fault.


... but the driver was the car...


Not legally. Has Autopilot got a driver’s license?

The fact that software was in control at the time is no more relevant than the guy who asks his twelve year old son in the passenger seat to reach over and steer while the driver rifles through stuff on the back seat.

Responsibility for an accident doesn’t ever rest with whatever entity is steering the car, it rests with the licensed driver. (Which could be software, when it gets good enough.) Anything else is madness.


This report is about determining which features are inherently dangerous. It is very relevant that Autopilot can't detect stationary vehicles.


A common marketing trick is to steal a word by naming your product after it. Suddenly you have an autopilot when it's really just "Autopilot". Or better yet, "Tesla Autopilot".


Yes, a respectable company like Chrysler wouldn't use such an inaccurate term.

https://i2.wp.com/www.curbsideclassic.com/wp-content/uploads...



heh. You know it's a pre-programmed computer on wheels. It has no self.


I worry much more about public overreaction to rare incidents slowing and overregulating self driving technology than I do about the rare crash here and there. Overall statistical safety is what matters, and statistics goes out the window when the public emotionally fixates on specific mishaps and demands regulation that slows and cripples overall-safe technology.

Everyone believes, without reason, that he is a good driver and will always beat those janky autopilots. HN threads about autonomous vehicles are always full of unrepresentative anecdotes and angry moral denunciations of technology. It's sad.


Overall statistical safety just now for Tesla autopilot is probably worse than plain human driving. You can reasonably argue though that allowing research now will save lives in the future.


NTSB Report here: https://ntsb.gov/investigations/AccidentReports/Reports/HAB1... (found it here: https://www.theverge.com/2019/9/4/20849499/tesla-autopilot-c...)

Copying: When the last lead vehicle changed lanes—3 to 4 seconds before the crash—revealing the fire truck on the path of the Tesla, the system was unable to immediately detect the hazard and accelerated the Tesla toward the stationary truck. By the time the system detected the stationary vehicle and gave the driver a collision warning—0.49 second before impact —the collision was imminent and the warning was too late, particularly for an inattentive driver. The AEB system did not activate. Had the driver been attending to the driving task, he could have taken evasive action to avoid or mitigate the collision.

So there you have it: "particularly for an inattentive driver".

Not really an expert in the field, but I think a major problem with self-driving in general is handing over from an autopilot to a human. This is something that cannot be easily solved just by throwing smarter technology at the problem. Even if autopilot technology manages to be reliable 99.9% of the time, this won't be good enough, because people will invariably think they can rely on it 100% of the time and fall asleep or get distracted. So this 0.1% can be potentially catastrophic, because you cannot go from being asleep to full alertness in 0.5 seconds. Besides, even if one actually wants to maintain alertness, it may be harder to be alert and not doing anything than being alert and doing something (driving).
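
A rough back-of-the-envelope calculation shows why half a second is hopeless (the speed, reaction time and braking rate here are assumed, not taken from the report):

    # Assumed figures: ~30 mph impact speed, ~1.5 s perception-reaction
    # time, hard braking at ~7 m/s^2. Only the 0.49 s is from the report.
    speed        = 30 * 0.44704   # ~13.4 m/s
    warning_lead = 0.49           # s, warning lead time cited in the report
    reaction     = 1.5            # s, typical alert driver (assumed)
    decel        = 7.0            # m/s^2 (assumed)

    print(speed * warning_lead)      # ~6.6 m covered during the warning alone
    print(speed * reaction)          # ~20 m just to begin braking
    print(speed**2 / (2 * decel))    # ~13 m to stop once braking starts

    # An attentive driver needs ~33 m to react and stop from 30 mph; a
    # warning issued half a second out covers ~6.6 m. For a driver not
    # watching the road at all, it is effectively no warning.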

So unless autopilots can truly handle anything thrown at them, this will be a real issue. Although there is much willingness for people to convince others, and to be convinced, that this kind of autopilot is very close to being a reality, I suspect it is not.


There was an interview with George Hotz from comma.ai where he talked about this.

His fix is a very attentive driver-monitoring setup. So if you are not paying attention, it will bark at you. IIRC he was also talking about how this driver-monitoring setup is worth something in itself for trucking companies: fewer damages.


I think we should look at this probabilistically. Yes, autopilot should be improved to prevent this, but no system is perfect. Certainly not human drivers. IMO the correct comparison is whether Teslas are on average more likely to hit firetrucks while on autopilot than normal vehicles while being driven by humans.

This data most likely won't be forthcoming, but I think it's important to keep in mind that we should be looking at accidents fleet-wide and not in isolation. Autonomously driven cars will have accidents. More importantly, their failure modes will be different from those of human drivers. However, as soon as they are even a small margin safer than human drivers, their adoption will save lives. Besides, the systems will continue to get better. Human drivers have proven stubbornly difficult to improve.

We ought not reject autonomous driving technology only because it sometimes gets in accidents that appear to be strange or arbitrary to us. I consider the carnage perpetrated by human drivers to be arbitrary, but most people just take death on the roads as a fact of life.


Hey Elon, why don't you build a test course and fill it with all the things your cars have failed to avoid? Why hasn't there been an OTA update for handling large non-car obstacles?


Even with Autopilot, for everyone I know who owns one, it's their little baby... they don't dare risk it at all. The people doing stuff like this are either really stupid, or really rich... or both. I have never seen anyone I know dare to drive recklessly on Autopilot.


I was looking to get a Leaf or a Cadillac in a few years, once their autopilot features show up on used vehicles.

Isn't the point that it's... Autopilot?


> The vehicle’s design “permitted the driver to disengage from the driving task”

Which vehicle's design does not permit a driver to disengage from the driving task?


When those vehicles rename "cruise control" to "autopilot" we can probably have a discussion.

Tesla did this to themselves. They had a million options on what to call the feature and they chose one that describes it as driving itself. It's as bad or worse than ISPs with "unlimited" plans that are limited. Words have meaning, and tossing them around like they don't is dangerous.


Cadillac Super Cruise monitors your eyes to ensure you are looking at the road. It's not perfect, but it's a lot better than just requiring you to put pressure on the steering wheel.


Autopilot. By design it encourages drivers to stop paying any attention - the fact that it needs a “signal the driver” mechanism to try to maintain driver focus indicates that the core design produces this behavior.

If the core design produces that behavior in a large enough portion of the user base to require such a feature, that’s a fairly good indication that the core design is causing a problem.


Every car has a measure of "autopilot" called "wheel alignment".

The car was built before AltaVista was eclipsed by Google, yet it goes dead straight on the freeway while you rummage through the glove compartment for your second-favorite CD.

Maybe cars should have wacky alignment that pulls randomly left and right; that would teach drivers to keep their hands on the wheel and pay attention at all times.


You are answering the opposite of the question.

All cars allow you to stop paying attention.

I think the idea behind Autopilot is that it reduces your workload (like cruise control, automatic wipers, automatic headlights, or any other labor-saving device).


Tools like autopilot encourage lack of attention.

The problem isn't "you don't have to pay attention to X"; it is "you don't have to pay attention to anything during normal operation". It is human nature, if not outright physiology, that it is incredibly difficult to keep someone engaged with, and paying attention to, a task they are not actually doing.

The problem with halfway solutions like Autopilot isn't that they control the car some of the time; it's that they can, essentially without warning, stop controlling it.

Self driving cars will happen. But the current model is essentially "do enough of the driving to maximize the likelihood that the human 'driver' is not going to pay attention, but don't do enough to make their lack of attention safe".

My statement isn't "cars allow you not to pay attention", it is "systems like Autopilot actively encourage a lack of attention". That's backed up by the need for alert systems that trigger on obvious signs that the driver isn't paying attention, like "no longer holding the wheel". If the system didn't, in general usage, result in a lack of attention, such a system would not be necessary.


So are you saying Autopilot does or doesn’t allow you to disengage?

Because you said it didn’t then argued it did...


The Tesla damage control astro-turfing is out in full force.

Between the surveillance state they've built and their complete disregard for safety, Silicon Valley will eventually be forced by the public at large to atone for this. Make sure you're on the right side of history.


Historically the right side has generally been technology making people's lives better. At least in material terms such as life expectancy, diseases cured, and so on. They still moan, though.



