Hacker News
Tesla sued in wrongful death lawsuit that alleges Autopilot caused crash (techcrunch.com)
216 points by mindgam3 on May 1, 2019 | 466 comments



Tesla's blog post: https://www.tesla.com/en_GB/blog/update-last-week’s-accident

> the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider

Looks like the sensors failed to see a concrete divider in nice sunny weather and the car slammed into it at 70 mph. The driver was obviously overconfident in the system's ability to self-drive, probably busy looking at his phone, and ignored warnings to put his hands on the steering wheel.

> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

These stats don't help when you read that the guy had two kids who will now grow up fatherless for the rest of their lives. Humans killing humans is a very different thing from machines killing humans, even if the fatality rates are 10X lower. Companies need to aggressively enforce both hands on the steering wheel until self-driving is really, really, really good.


>In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

This is a dishonest comparison. You're comparing the Tesla to an "average" that includes 1995 Corollas that will happily let a telephone pole into the cabin and 1999 Dodge Dakotas that will readily turn into a pancake if you put them on their roof.

You're also comparing across economic classes and demographic groups. Your average Tesla owner is not going to be working 3rd shift or making a 100mi commute and consequently has a much lower chance of starting a small logging operation after falling asleep at the wheel or harvesting venison at high speed. Your average Tesla owner is also likely of an age where they're too old to be driving dangerously fast for conditions all the time but not yet old enough to be getting bad at driving. Basically the mean/mode Tesla owner is a wealthy-ish 30-something to 50-something man and that's a really safe demographic compared to the national average.

If you want to make a fair comparison, compare Tesla's models to other sedans of similar age and price.


I'm paraphrasing from the last time this was posted, but Tesla's words need to be read very carefully. What they say is:

> the driver’s hands were not detected on the wheel for six seconds prior to the collision

What Tesla does not say is that the six seconds the driver's hands were not on the wheel immediately preceded the crash. As worded, it could be any six seconds before the crash.

Admittedly this is a very dark reading; however, it is supported by their further claim that the driver 'had multiple warnings', which in fact came fifteen minutes earlier, for an unrelated event.

This is the kind of sketchiness that makes people wary of Tesla. Funny thing is, they make cool tech; they don't need to do any of this, nor things like discounting the 'gas savings' from the sticker price. I genuinely don't understand where their apparent need for pushing the definition of truth to its breaking point is coming from.

Had they not openly misrepresented their Autopilot's capabilities, they wouldn't have to lie in this bed of their own making now.


They are still misrepresenting their self-driving capabilities in their marketing materials:

https://www.tesla.com/en_GB/autopilot?redirect=no

In the beginning of this video there's the caption "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."

If the lawyers in this lawsuit are any good they will have a field day with this.


That is pre-release full-self-driving software under test.

It is definitely NOT a feature that has been released to the public.

The traffic-aware cruise control and self-steering features Tesla has released to the public require your hands to be on the wheel. If you take your hands off the wheel you will get a small warning, then a larger warning, and finally a huge alert.

By the way, if you show a pattern of ignoring alerts it will refuse to drive for you anymore.


That's only a comment on the situation in this video. I'm absolutely not an expert on this topic, but I guess there are places where it's true that the car can drive itself from one location to another. But on the same page they wrote: "Current Autopilot features require active driver supervision and do not make the vehicle autonomous."


That's not what I read in this article.

> Huang’s hands were detected on the steering wheel only 34 seconds during the last minute before impact.

If during the last minute before impact the hands were missing for 26 seconds, then it makes no sense to describe that as any six seconds. The six seconds can only be the consecutive seconds immediately before the impact, right?


You are claiming that the accuracy of Tesla's language is outrageously deceptive, using an outrageously made-up example of deception: "six seconds prior to the collision" = "any six seconds before the crash"

"This is the kind of sketchiness that makes people wary of Tesla."

It's the kind of sketchiness that makes me wary of the stuff people post about Tesla. If the driver had multiple warnings - about anything - any car company would mention that; there is nothing exceptionally sketchy about Tesla doing so. And that is all you based your incredible interpretation of "six seconds prior to" on. It's just an entirely made-up accusation.


According to Tesla,

> the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road [1]

Tesla has made a habit of drawing conclusions about investigations that have just begun. It is sketchy, and people have noticed.

[1] https://www.businessinsider.com/tesla-says-fatal-model-x-aut...


I don't understand. What other theory are you proposing than that he wasn't paying attention?


According to the preliminary report from the NTSB,

> At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected. [1]

This means the vehicle gave the driver less time to respond. I'd argue the manufacturer bears some responsibility.

[1] https://ntsb.gov/investigations/AccidentReports/Reports/HWY1...


And how does that, or the vaguely introduced link, justify accusations of outrageous rhetorical trickery?


> I genuinely don't understand where their apparent need for pushing the definition of truth to its breaking point is coming from.

One theory is Musk has been misrepresenting the company's products for years in order to keep the valuation high, and he's now dug the hole so deep he can't find a way out.


> I genuinely don't understand

This can only make sense if you ignore all the weird, incoherent, inchoate, abusive, and absurd things that come out of Elon Musk.


I'm not even a Musk fan, but have you got refs for these? The worst I've seen is smoking a joint and misleading about having an investor they didn't have. I believe his opinion that Teslas only need cameras because that's all humans use to drive is incorrect but honest. I don't think he's mean; I think he's overly, conveniently idealistic to a point which needs to be corrected.


> [...] misleading about having an investor they didn't have [...]
It was a lot worse than that:

1. TSLA told the SEC Musk's personal Twitter fell under their purview.

2. SEC: OK, thanks!

3. Musk proceeded to talk shit on Twitter.

4. The SEC got upset; TSLA agreed to have all of Musk's investor-related musings reviewed before posting.

5. Musk proceeded to share investment-related musings on Twitter again without going through that process.

6. The SEC tightens the screws.

There's mistakes, and then there's dangerously infantile and reckless behavior from a CEO and the board.

It's not even about the specifics of what he said. How hard is it to have one Twitter account for your random musings and another one for investment commentary?

Or, hire a full-time small team of lawyers whose only job is to be constantly VNC'd into your phone and shut it down if you're about to violate the SEC's terms about your use of Twitter.

If only there had been some prior warning signs, I don't know, maybe the board having some actionable thing to work with earlier. Say, like Musk calling a worldwide hero "pedo guy" on Twitter a year earlier, which might suggest that maybe his impulsive use of social media should be reined in?


I guess you missed the bits where he called a rescue diver a pedophile, probably his worst. The guy has no control over what he says. https://www.bloomberg.com/news/articles/2018-07-16/musk-labe...


The problem with the no hands-on-steering-wheel detection claim Tesla keeps bringing up is that it's quite fallible and not as conclusive as they insist. A light touch (like one might expect while driving a freeway with cruise control and lane-keep assist, for example) may not provide enough torque for the sensor to detect it. Plenty of people have complained about it in the past.

https://forums.teslarati.com/threads/hands-on-the-wheel-dete...

https://teslamotorsclub.com/tmc/threads/hands-on-steering-wh...

https://forums.tesla.com/forum/forums/tesla-autopilot-not-se...


> The problem with the no hands-on-steering-wheel detection claim Tesla keeps bringing up

I don't get why it matters. Why is Tesla's responsibility greater or lesser based on the position of the driver's hands? The driver could have had one hand on the wheel while mousing around on a laptop with the other. Or he could have been intensely focused on the road ahead with his hands resting on his legs, inches away from the wheel.

I'm neither defending nor attacking Tesla here, because I don't think I have sufficient information to have an opinion either way. But I'm pretty sure that learning the answer to this particular question wouldn't have any bearing on any opinion I might eventually form.


It's a dead man's switch. If the dead man's switch is not working, it's a system malfunction.


That might be technically true, but it's irrelevant. Nowhere does Tesla ever describe it to the customer as a dead man's switch.

And it wasn't malfunctioning. He wasn't a dead man. He was very much alive, understood the warnings, understood that the car expected him to keep his hands on the wheel and keep paying attention to the road. He understood the warnings so well that he was capable of avoiding them with such proficiency that he did not receive one for 15 minutes.


It's the driver's responsibility to drive the car and pay attention to the road.


By representing the car as coming with an auto driver, they implicitly take on that responsibility.


By naming it "Autopilot" they take on that responsibility.


Imagine a driver driving down a parkway at speed and thinking "I'm on a parkway, so I should shift the car into Park."

That's how dumb that argument is.

If a driver ignores the requirements of their driver license AND the vehicle's clearly worded warnings, then the driver is entirely responsible. Claiming that the driver is not at fault because they interpreted the word "autopilot" to mean "I can disregard my responsibility as a driver and ignore the vehicle's very clear warnings" is absurd.


Any idea why torque is used, as opposed to reliable methods like capacitive sensing? I think you're absolutely right about torque being minimal on straightaways.


One guess is that capacitive would be problematic if the driver is wearing gloves.


Tesla's hand detection failing to trigger doesn't indicate that the driver didn't have their hands on the wheel. It only detects whether you are applying torque to the wheel in one direction.

Also, the Tesla fatality stat is them playing with numbers. The majority of fatalities happen where Autopilot wouldn't be used: only 15% of fatalities happen on freeways and highways. I bet that number goes to single digits with pedestrian deaths.

https://www.iihs.org/iihs/topics/t/roadway-and-environment/f...


I think the bigger issue is that Tesla would rather lie than accept responsibility and work to improve the problem. I would never trust a company like this with my life.


Tesla is improving Autopilot all the time.


Except when it regresses. See 'Tesla hit truck broadside'.


>In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

I didn't know Tesla sold motorbikes too.

Why do they still do this? What's the point if not trying to mislead the public?


> Why do they still do this? What's the point if not trying to mislead the public?

You're answering your own question there. They're doing it not with outright evil intent ("let's lie about the safety so that people kill themselves! Mwahahah") but for PR and stock reasons, yet it ends up being the same result: they want people to see Autopilot as "it drives for you" if you're a car buyer or an investor, but they also want people to see it as "it's just a helper, it doesn't drive, you need to drive" if you're a regulatory agency, a judge, or a customer that got into a crash.

As for the stats comparison, removing motorbikes is a big one, sure, but you could go one step further and take into account only the miles on freeways/highways and compare again, because if the US is anything like here in France, most accidents happen within 15 km of the driver's home (probably because the driver feels "safer" knowing the road), and that's also when Autopilot is least likely to be on. Meaning they're comparing themselves against a lot of miles where they're not competing.

And even further than that: no matter how safe or unsafe you are, if all your crashes end with "the driver should have been handling the wheel but wasn't", it doesn't mean all those drivers did it wrong. For me it's more a case of "your disengagement/safety system is incapable of handling a case that causes fatalities yet is common enough that all your crash reports point at it".
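
To make the denominator point concrete, here's a rough sketch using the figures quoted above plus clearly hypothetical road-mix shares (the 15% fatality share is the figure cited earlier in this thread; the 25% mileage share is a made-up placeholder, not real data):

    # Illustrative only: how the headline ratio depends on the denominator.
    miles_per_fatality_all_vehicles = 86_000_000       # figure quoted by Tesla
    miles_per_fatality_tesla_autopilot_hw = 320_000_000  # figure quoted by Tesla

    headline_ratio = miles_per_fatality_tesla_autopilot_hw / miles_per_fatality_all_vehicles
    print(f"headline ratio: {headline_ratio:.1f}x")   # ~3.7x, as Tesla states

    # Hypothetical: suppose only 15% of national fatalities happen on the kinds
    # of roads where Autopilot is used, while those roads carry 25% of the miles.
    hypothetical_highway_share_of_fatalities = 0.15   # the 15% figure cited earlier in the thread
    hypothetical_highway_share_of_miles = 0.25        # made-up placeholder

    national_highway_miles_per_fatality = (
        miles_per_fatality_all_vehicles
        * hypothetical_highway_share_of_miles
        / hypothetical_highway_share_of_fatalities
    )
    print(f"national, highway-only: one fatality per {national_highway_miles_per_fatality:,.0f} miles")

    adjusted_ratio = miles_per_fatality_tesla_autopilot_hw / national_highway_miles_per_fatality
    print(f"adjusted ratio: {adjusted_ratio:.1f}x")   # noticeably smaller than the headline 3.7x

Swap in whatever shares you believe; the point is only that restricting both sides to comparable miles moves the number a lot.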


It's a terrible comparison, that's for sure. The figure they quote (1 per 86 million miles) is for all vehicles (including bicycles). Only 37% of those deaths came from passenger vehicles.

Additionally, the Tesla cohort is really modern high-safety passenger vehicles IMO. A < 5 year old passenger vehicle with stability control, etc could easily be statistically as safe as a Tesla.


There are in fact many models of vehicles with zero deaths.


> most accidents happen within 15 km of the driver's home (probably because the driver feels 'safer' knowing the road)

A 15km radius from my house would nearly cover all but the most far-flung areas of the medium-sized city that I live in. The vast majority of my driving would be within that radius, so it's not surprising the majority of my driving accidents would be within that radius.

Just for reference, a disk with a radius of 15km would have an area of 700km². You could compare that with the area of Chicago (590km²) or New York City (780km²).


> most accidents happen within 15 km of the driver's home

You’ll often read this, yes, but I read a fascinating breakdown that called BS on the whole thing.


Would you be so kind as to point me in the general direction of such bullshit?


Apologies if I'm wrong. Can't find anything right now (no time to search further) except some clarifications that accidents closer to home are often with parked cars, not often fatal, at slower speeds, and that possibly the biggest reason is simply because the majority of our driving is close to home.

It's that last item that I'm focussing on.

In other words, are we statistically more likely to crash near home? Yes. Are we more likely to have an accident simply because we are close to home? Not likely. If two thirds of accidents (for example) are within 5 miles of home, what percent of your driving is within 5 miles? If it's also two thirds, then it's a useless statistic that is thrown around for FUD purposes.

I can't back it up right now, though.
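
To make the arithmetic concrete, here's a minimal sketch reusing the made-up two-thirds figures from above (purely illustrative, not real data):

    # Illustrative only: exposure-adjusted comparison with made-up numbers.
    share_of_accidents_near_home = 2 / 3   # the "two thirds" example above
    share_of_driving_near_home   = 2 / 3   # hypothetical exposure share

    # If accidents and driving are split the same way, per-mile risk is the same:
    print(share_of_accidents_near_home / share_of_driving_near_home)  # 1.0

    # Only if the driving share were smaller would the same accident share
    # imply a genuinely elevated per-mile risk near home:
    print(share_of_accidents_near_home / 0.5)  # ~1.33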


That makes intuitive sense, and when I reasoned about your comment earlier that's the general direction my thinking went.

I live pretty much exactly one mile from my work, and the two nearest shopping centres are less than half a mile beyond that.

I'd definitely be interested in seeing some heat maps representing this data. It's easy to believe that the majority of people live close to at least one of the places they most often travel to, whether that is a workplace or other activity, and I think you're right that this is an important metric within which to consider such statistics.


>Why do they still do this?

This is the game-as-played, vs the game-as-we-wish-it-were-played. If you don't want a player to engage in this, don't accept it in any of the players, like the plaintiff.


> The driver was obviously overconfident

I wouldn't assume that. Videos of this exact scenario show that the time available to react is on the order of a few seconds.

Everything Tesla says here exaggerates driver culpability, like saying he received “multiple warnings” (15 minutes ago and completely unrelated). Don’t let that hypnotize you into thinking he was being irresponsible here. There may have been other factors at play such as getting blinded by the sun during those few critical seconds of Autopilot error.


If you look at the videos here of a Tesla going for a barrier, there seem to be about 3 seconds between seeing that it's going the wrong way and potential impact. That's enough time if you're expecting it, but not long if it's a surprise: https://np.reddit.com/r/teslamotors/comments/b36x27/its_back... (hn https://news.ycombinator.com/item?id=19443925)


The short time frame to react to unexpected events is why, if you're a driver, your first response to any unexpected stimuli has to be to decelerate while you evaluate the situation. In turn, this is why tailgating is such a dangerous behavior.


A big issue is that this can surprise a driver since an update can change the car's behavior.


It stands to reason that not only can an update change the car's behaviour, but that an update is likely to change the car's behaviour.


So, Tesla's sensors failed to detect a concrete barrier and failed to detect the driver's hands on the wheel, but we're supposed to assume that the first, which was clearly a manufacturing defect, was not responsible for the crash because of the second, which, we are to presume, must have accurately reflected reality.

(Since California uses comparative fault in these kinds of cases, legally this would seem to at best reduce but not actually eliminate Tesla's liability, even were a jury to be convinced by it.)

> If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

Which I suspect says nothing about the vehicles (much less autopilot hardware specifically, and even less the autopilot function), and everything about the age demographics of Tesla owners, because age is a significant factor in accident rates, and there are probably disproportionately few teenagers and other very young drivers driving Teslas.

https://aaafoundation.org/rates-motor-vehicle-crashes-injuri...


>> In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.

HN's Contrarian Skeptics Brigade can correct me on this (I repent!), but I think this is not very different than saying:

"10 out of all 100 people get this disease. We tested this hand-picked group of 10 people and found that only 1 of them had it. This small group is ten times more healthy than the general population!"

The number of Tesla cars with Autopilot in circulation must certainly be a small fraction of all vehicles on US roads (including bicycles, etc.). It makes sense that, if you look for an event in a smaller population, you should expect to see it happening at a different rate than in a larger population, for the same reason that if you toss a coin ten times and count the proportion of heads it may well differ from 0.5, but if you toss it one million times you're much more likely to see it landing heads about half of the time. More observations allow you to better estimate the true distribution of events.
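
A quick way to see this for yourself (a minimal simulation sketch, purely illustrative, not based on any real crash data):

    import random

    random.seed(0)

    def observed_heads_fraction(n_tosses):
        # Toss a fair coin n_tosses times; report the observed share of heads.
        heads = sum(random.random() < 0.5 for _ in range(n_tosses))
        return heads / n_tosses

    for n in (10, 100, 10_000, 1_000_000):
        print(n, observed_heads_fraction(n))
    # Small samples wander far from 0.5; large samples hug it. The same logic
    # applies to estimating a rare per-mile fatality rate from a comparatively
    # small number of Autopilot miles.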


I think that particular statistical issue is not that relevant. There are others: if you compare Teslas to other luxury cars, they have more fatalities, for example. The Tesla fatality data is likely incomplete. Autopilot tends to get used on freeways, which have much lower fatality rates than town driving. And I'm sure there are more. (Here's an article pointing out flaws and suggesting Tesla's death rate is about 3x that of comparable luxury cars: https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...)

On the other hand I think people mostly know the score. I'm sure the driver in this crash knew he should have been looking and autopilot was not 100% reliable. I enjoy riding motorcycles which have a stupid high death rate compared with cars. It's a free world to some extent.


There are indeed statistical issues with the comparison, but I'm not sure it's the one you're pointing out. There's probably enough Tesla miles being driven that we shouldn't worry about the random noise around the mean.

One immediate problem with real world stuff like this is selection. How do we know that Tesla is safer because it is built more safely than normal cars, rather than that perhaps people who are more safety conscious buy a Tesla? After all it's a more expensive car, the kind of thing you might only be able to afford once you're a bit older, the same time that you might have kids to worry about. But older people are also not the type to be racing around fast and furious style. Maybe there's an imbalance in the man/woman distribution?

It's a real can of worms to unpick these kinds of things. You find it all the time in social sciences.

Maybe you can find stats for Teslas without Autopilot, or other cars with an autopilot, and other cars without one. And stats for how Tesla drivers are distributed according to age and sex?


Genuine question.

Did you mean to ignore the fact that Tesla knows the score?

Tesla knows all of those statistics about its drivers, and knows how those numbers compare to other published data.

And they chose to write up the data in the manner they did.

In which configuration of all possible worlds is this ok?


Funnily enough I hadn't actually considered it. I guess because I was considering it from the outside.


Thank you for admitting that.

In light of this new context, has your thinking around this subject changed?


Well at the time I wasn't really making any moral judgements, just thinking of it as a statistical problem.

Of course Tesla and any other firm that releases stats to prove a point need to do so honestly. And they would have made the exact same considerations that I listed. So yeah, they should have released more representative stats.

I'm sure the PR people don't know anything about stats though, plus they would want to pick whatever they think makes the company look good.


I do think the way they use the statistic is probably misleading and at least non-trivial to compare, but FWIW the ratio of in-vehicle versus non-vehicle casualties is about 10:1 in the US, and the statistic is for motor vehicles, so bicycles would either not be included (if they fell off the road) or would be part of the non-vehicle casualties alongside pedestrians.


Weeeelllll actuallly...

10/100 is the same percentage as 1/10, so the groups would have the same mean.

Your point is fair though, but it would be more appropriate to make the example 0/10 in the small group, when there was a ~1/10 chance of that happening (based on the larger group's distribution), so it's not super significant.


> Looks like the sensors failed to see a concrete divider in nice sunny weather and the car slammed into it at 70 mph. The driver was obviously overconfident in the system's ability to self-drive, probably busy looking at his phone, and ignored warnings to put his hands on the steering wheel.

Why would you trust the hands-on-the-wheel sensors? There was a sensor problem, after all. No reason to trust the manufacturer when they claim these other sensors did work properly.


I agree with your conclusion, but the compassion angle isn't helpful in discerning whether or not it is truly safer, and every fatality is a human death that can be just as impactful to others. This guy is no different to the 3.7x as many people per mile who have died in other vehicles, so where is the concern toward those manufacturers? What are they doing wrong that their fatality rate is so much higher?

If turning off automated features in a Tesla would lead to more deaths, would that still be the right thing to do? At some point in automation's progress this will become the Trolley Problem thought experiment, with one side being "Five people die but it was their own fault" and the other side being "One person dies but a machine did it", and we will have to decide which is morally correct.


The moral issues will get swamped by practical ones pretty quickly.

Driving while checking a phone is illegal in most jurisdictions I'm aware of. Humans driving cars could eventually be seen in a similar light - there is no defense for choosing to drive yourself if a machine that is statistically better than you is available to do the job. It would literally be endangering other road users. And it isn't clear why a rational driver would choose to drive themselves anyway, other options being available.

The only blockers are (1) how good are computers really? and (2) how much extra does it cost?


Lowering the overall odds of a crash may not be enough if the few accidents happen in circumstances that a human driver would have absolutely no trouble with.


Agreed, but a number of these divider accidents happened at the exact same spots where human drivers had previously crashed. So humans do have trouble with it.

I would also like to point out that dividers like these are dangerous in and of themselves. There should never be knife-like intrusions into the driving space on a high-speed road of any kind. They are also quite unnecessary; a few simple left/right signs that bend over when you drive over them, followed by basically grass, would have been a much better outcome. That being said, it is still tragic.


This exit is a left-hand exit for an HOV-only ramp that originates at a different freeway. The interchange is in the middle of a built-out suburban area; there's no room for a grassy divider.

From experience driving around it, humans get dangerously close to the divider due to trying to change lanes much too late -- either into or out of the exit lane, often with a significant amount of braking. Durable cones separating the lanes might be an option, but would add significant maintenance expense.

The Tesla in question collided with the obstacle while its automation was attempting to maintain a lane, and because the automation detected that the car ahead had moved out of the lane, it also accelerated into the divider.

I seriously doubt many (or any) human drivers would misinterpret the position of the lane in this manner at this exit, especially given the conditions (clear day, sun high avoiding glare) despite the poor striping in the area.


> The driver was obviously overconfident in the system's ability to self-drive, probably busy looking at his phone, and ignored warnings to put his hands on the steering wheel.

and based on personal experience, he was probably overconfident because the person who sold him that car was overconfident.

I remember my first Tesla Autopilot test drive on a freeway... I was a bit nervous, constantly stepping in when I thought the car was tailgating or not braking fast enough, and the salesman had a very nonchalant attitude, as if I was being needlessly anxious about it.


> These stats don't help when you read that the guy had two kids who will now grow up fatherless for the rest of their lives.

PR-wise, yes. But it could've been 7.4 kids who would grow up fatherless for the rest of their lives.


> The driver was obviously overconfident in the system's ability to self-drive

That thing is called "Autopilot". If its purpose is not to self-drive a car, maybe it should be renamed.


It could be called assisted driving and people would still use it like an autopilot.

I had an Opel company car for two years, full of tech (lane assist, auto brake, sign detection, etc.). After a year and a half I realized how much I relied on this stuff (hint: way, way too much!). Our brains are wired to do as much as possible with as little attention as possible...


That might be the case, but Tesla is still pushing for people to see it as an autopilot when it obviously isn't.


> Humans killing humans is a very different thing from machines killing humans, even if the fatality rates are 10X lower.

So it is better for 10 humans to die at the hands of other humans than to have one machine-caused death? In what moral framework is that true?


It's a public acceptance issue, not a moral issue.


> ignored warnings to put his hands on the steering wheel

If there were any. One of the claims in the lawsuit is "failure to warn", and the preliminary report of the NTSB says "the alerts were made more than 15 minutes before the crash".


The idea that the driver didn't know he was supposed to have his hands on the wheel while using autopilot seems ridiculous. Anyone who has used autopilot knows that failure to keep the hands on the wheel results in a warning.

We do know he received warnings. And we can strongly infer that he knew how to actively avoid receiving warnings. Therefore I can't possibly fathom how Tesla could be responsible for a "failure to warn".


You're assuming "failed to detect hands" means that his hands were not on the wheel, which is 100% false.

Tesla's hand detection is extremely rudimentary and requires the driver to apply torque to the wheel in order to be "detected". You could be grasping the wheel hard enough to turn coal into diamonds and it STILL wouldn't detect your hands unless you move the wheel slightly.


Tesla has hammered on this "hands not detected on wheel for 6 seconds" point, but in my experience that doesn't mean anything.

Tesla's "hand on wheel" detection is terrible. It basically only works if you have one hand on the wheel at the 2 or 10 o'clock position; you have to be slightly tugging on the wheel. But not TOO much or it disengages.

I would usually drive, on long straight sections of the Interstate, with my hand at the 5 o'clock position, resting on my thigh and grasping the wheel. Very little steering input is necessary. But the Tesla always detects this as "hands not on wheel". Ditto if I have both my hands on the wheel, because the pulling force evens out between the left and right side of the wheel.

I basically never drive without my hands on the wheel, but Tesla kind of always detects "hands not on wheel". Sometimes I notice it before it sounds the audio alert, but sometimes I don't.
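
For anyone wondering how a firm, steady grip can still read as "hands not on wheel", here's a rough sketch of torque-threshold detection (the threshold, timeout and sampling are invented for illustration; this is not Tesla's actual implementation):

    # Hypothetical torque-based hand detection, for illustration only.
    DETECTION_THRESHOLD_NM = 0.3   # assumed minimum net torque to register "hands on"
    TIMEOUT_S = 6.0                # assumed seconds of "no torque" before a warning

    def hands_detected(net_torque_nm: float) -> bool:
        # Only the *net* torque on the column is measured, so two hands pulling
        # evenly, or a relaxed grip at 5 o'clock, can cancel out or fall below
        # the threshold even though hands are physically on the wheel.
        return abs(net_torque_nm) > DETECTION_THRESHOLD_NM

    def should_warn(torque_samples_nm, sample_period_s=0.1):
        # Warn if no recent sample exceeded the threshold for TIMEOUT_S seconds.
        quiet_time = 0.0
        for torque in torque_samples_nm:
            quiet_time = 0.0 if hands_detected(torque) else quiet_time + sample_period_s
            if quiet_time >= TIMEOUT_S:
                return True
        return False

    # A firm but steady grip produces ~0 net torque and still triggers a warning:
    print(should_warn([0.05] * 100))   # True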


That statistic is so so so misleading... It is akin to blatantly lying.


Even if true, the miles accrued for other vehicles are over a period of 10 years (when infrastructure was less safe), while Tesla's self-driving is recent and hence not statistically significant.


Yes, and for any type of car (e.g. if you only included two types of Mercedes or, say, Lexus, then you might get something totally different...).

Cannot say that I am shocked, though.


Regarding the second paragraph from Tesla, the interesting numbers would be fatalities per mile driven without Autopilot engaged vs. per mile driven with Autopilot engaged. Just being equipped with Autopilot doesn't tell you anything in that regard.


> the interesting numbers would be fatalities per mile driven without Autopilot engaged vs. per mile driven with Autopilot engaged

Exactly. I imagine hell will freeze over before we see such data. Tesla did not even provide complete data to NHTSA [1]

[1] https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-...


I have to wonder what the fatality rate is among the Tesla demographic: those who have relatively new luxury cars.


“Equipped with” is not the same as “while using”. What’s that stat? They probably know it.


If you disconnect it just before slamming into a wall at 80 mph the driver was at fault. //Tesla

I really want one, but trust erodes at record speed with these kinds of cherry-picked and misleading claims.


The statistics argument is one of the worst things about Tesla. It's borderline immoral and most probably a PR blunder.


> Companies need to aggressively enforce both hands on the steering wheel until self-driving is really, really, really good.

Hands on the wheel is such a stupid metric. Two hands on the wheel whilst watching a film on an iPad doesn't do a damned thing.

What matters is whether I'm paying attention, a much harder to measure metric. Am I looking at the road, judging distances, hazards, etc.

I say this every time autopilots come up - I think it is impossible for a human to give up responsibility for reacting to 99.99% of input and still retain attention for the remaining 0.01%. I certainly don't believe I can do it, and don't intend to use a car that ever expects it.


Chrysler/Cadillac has eye tracking - seems like exactly the thing you want. While it is in automatic driving mode, it tracks your eyes, and if you are not looking at the road for an extended period of time, warnings are issued.


And they still don't work and are still driving at barriers!

https://np.reddit.com/r/teslamotors/comments/b36x27/its_back...


> Looks like the sensors failed to see a concrete divider in nice sunny weather and the car slammed into it at 70 mph.

This doesn’t sound right.

Last time I looked into this, Tesla's software intentionally ignores stationary objects when the vehicle is traveling at speed.

Has this changed?


The other fatalities and subsequent orphans, of which there are vastly more, don't get ratings and don't offend a bevy of long standing sponsors.


It's unfair that Tesla would get sued for every death, but wouldn't get a comparable reward for every death it prevented.


But it's perhaps not unfair to sue them for misleading advertising that leads to people getting killed.

"The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."

creates very different expectations from

"While the car is driving itself, the person in the driver's seat is keeping his hands on the steering wheel and his attention on the road at all times."


Doesn't Tesla push the second scenario at the moment?


At https://www.tesla.com/en_GB/autopilot?redirect=no, one of those quotes is presented as a full-screen caption at the beginning of the video near the top of the page.

The other is found after scrolling down several screens.

Which would you consider the more prominent message?


(Just to clarify, the second message isn't actually present in the form of my suggested rephrasing; but the text that "Current Autopilot features require active driver supervision" is a long way down the page.)


They literally tell you to put your hands on the wheel and pay attention every time you don't, and sometimes even when you do. So yes, they push the idea that awareness is required.


They need to be aggressively fined until they stop using the term "Autopilot", which is highly disingenuous.


Was adding the "kid" part integral to the reply? Made me cringe a bit.


I would like to add here that Tesla does indeed do sensor fusion in their cars. Their Autopilot combines radar and ultrasound with vision to decide where to drive. Commentators bringing up LIDAR are jumping the gun by assuming that this scenario is something those sensors, in combination or individually, could not have detected. The problem here is more likely a bug in their software than a simple lack of additional sensors (LIDAR). And this touches on deeper issues that could have profound ramifications for the autonomous driving industry and the broader industry in general.

At least, in my eyes, the big problem with the autonomous car industry isn't the sensor suites they are deploying (or not), but the over-reliance on neural networks. They are black boxes with failure modes that can't be adequately mapped. See: http://www.evolvingai.org/fooling

What if the neural net or the system used to detect obstacles didn't see it because the precise configuration of the data fooled it? And if that's the case then what's next? How do we decide when it is okay for safety-critical systems to be opaque? How do we deal with autonomous driving if the conclusion comes out to be a "no" for this case? How should broader society deal with a yes? And who decides all of this in the first place?

Possibilities like this scare me far more than the lack of LIDAR because replicating a bug like this would be next to impossible. We don't know what we don't know, and we can't explore and understand the system to suss out what we don't know.

Edit: Fleshed out the idea with more questions.


Your comment actually emphasizes why having a lidar is better.

Cameras need neural networks to detect objects. Lidars, on the other hand, provide distances to all the surrounding objects. The neural networks processing the point cloud may not be able to tell what an object is, but because of the distances available in the point cloud data, you will know there is something ahead.

Fusing radar with camera does help, but radars are noisy and unreliable.
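
To illustrate the geometry point, here's a minimal sketch with an assumed coordinate frame and made-up corridor dimensions (not any vendor's actual pipeline):

    # Illustrative only: with a point cloud you get distances "for free".
    # Points are (x, y, z) in metres; assumed frame: x forward, y left, z up.
    def closest_point_in_corridor(points, corridor_half_width=1.5, max_height=3.0):
        # Return the distance to the nearest return inside a lane-width corridor
        # ahead of the vehicle, regardless of what the object actually is.
        distances = [
            x for x, y, z in points
            if x > 0 and abs(y) <= corridor_half_width and 0.2 < z < max_height
        ]
        return min(distances) if distances else None

    # A return 40 m straight ahead is flagged even if no classifier recognises it:
    print(closest_point_in_corridor([(40.0, 0.3, 1.0), (80.0, -5.0, 1.2)]))  # 40.0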


Lidar is also noisy, unfortunately. The input is high-dimensional enough that classical tracking methods don't work well (as they do for simpler radar systems), because segmentation in lidar is hard.


Lidar is active (so lower res than vision) and uses slightly longer wavelengths (again, lower res). Visible light uses passive solar illumination by day, is power-efficient, provides far more data, and is a problem you need to solve for reliable autonomy independent of lidar.


> is a problem you need to solve for reliable autonomy independent of lidar

Not for L4. A level 4 vehicle is not expected to operate under all conditions, including weather and location.


Radar seems to work pretty damn well for the adaptive cruise control and autonomous emergency braking installed in hundreds of models of cars made over the past 5+ years.

If autonomous systems need more redundancy, the solution is to keep adding more cameras.

The other "sensor" which I think is important is data captured by cars in that fleet. This is sort of what's known as "HD maps", but fully automated and based purely on sensor-derived data. So if you're driving down a highway at night in the rain, your car would know the successful paths traversed a few hours ago when it was dry and sunny.


> autonomous emergency braking

I've just gotten rid of a car that did that and that was one of the contributing factors. That system was so totally unreliable that it almost caused two accidents in 20K km when otherwise nothing would have happened.

In both cases there were obstacles on the side of the road (bridge supports/railing and a pretty sturdy billboard frame) that gave a return to the radar and caused it to slam the brakes on full. On the bridge, that caused the car to skid and I narrowly avoided the posts on the far side; in the second instance the person behind me was totally surprised (as was I).

That sort of tech is really not yet ready for deployment as far as I'm concerned; it's a liability instead of extra safety. I contacted the manufacturer and they saw nothing wrong with the car. The system could not be disabled either.


Sadly, not all ACC and CAS deployments are created equal. Tesla uses a 77 GHz radar: the far-range radar has 0.1° of azimuth accuracy at a beam spread of 6°, and 0.2° at 9°; the near-range radar resolves to between 0.3° and 5°. They can pick out objects at a resolution of 40 cm with the long-range radar and down to 5 cm at short range. http://www.compotrade.ru/i/pdf/ARS404-21_ARS408-21_en_V1.03....

Not all manufacturers use such highly specced radar; as you can see in this comparison sheet, there are cheaper options to pick from that don't offer anywhere close to this angular accuracy and resolution: https://autonomoustuff.com/wp-content/uploads/2018/02/RADAR-...

The manufacturer is probably using 24 GHz radar, which is legacy technology and is being discontinued due to lower accuracy. There are software tweaks available to reduce false positives on legacy tech, but most manufacturers don't deploy them.

See: https://e2e.ti.com/blogs_/b/behind_the_wheel/archive/2017/10...


There might be some bad implementations out there, but that is a problem with the implementation, not the technology.

I've driven extensively with AEB in a 2015 Jeep Cherokee and it has been borderline faultless. And that's surprising because I have very low regard for anything made by Fiat/Chrysler, but whatever (presumably third party) tech they integrated, it has been rock solid. Over tens of thousands of kilometres it has never, ever had a true false positive. And the radar cruise has never, ever had a false negative.

Very occasionally the AEB has "freaked out" unnecessarily; usually when I'm driving right into someone who is in the process of turning off the road. I know that they'll be out of my way in time for me to go past, so technically it's a false positive, but even then I'd give it to the car. I am driving dangerously. I am gambling that the turning driver doesn't brake suddenly before they complete their turn.

(I've also done a few thousand kilometers in a 2019 Subaru Forester with Eyesight—a purely vision-based system—and its adaptive cruise is even better than on the Jeep. It's less anxious on the brakes in response to movements by the cars ahead, and you can configure how eagerly it accelerates back up to set speed when the road clears.)


One thing to remember here is that with RADAR it is, I have read, very hard to tell a soda can from a car from an overhead highway sign as you are cresting a hill. So they will often filter out anything that is stopped.

I read this a few months ago and it blew my mind.

Then I read about that LIDAR sensor burning a camera sensor and thought "how safe can that be?!?"


> with RADAR it is, I have read, very hard to tell a soda can from a car from an overhead highway sign

Yes, read about phantom braking, for example here,

https://www.reddit.com/r/teslamotors/comments/b5yx1o/welp_th...


Don't need to read about it. Last year I was on a 2400 mile trip. On a long, deserted stretch of road, suddenly, out of nowhere my Tesla slammed on the brakes. I had to throw away my underwear. :-)


That LIDAR sensor you talk about destroyed camera sensors precisely because it is safe for human eyes. Conventional lidar uses wavelengths that can harm human eyes, but the power output is restricted to a much lower value to prevent damage to humans and cameras.


> Possibilities like this scare me far more than the lack of LIDAR because replicating a bug like this would be next to impossible. We don't know what we don't know, and we can't explore and understand the system to suss out what we don't know.

I don't think this is true. According to the Tesla live stream a few days ago, the cameras are always gathering data. Tesla cars could have a black box (like planes), and they could just replay everything that happened up to M minutes before the accident, and retrain the neural networks to avoid it later. According to the live stream, they automated this process with the "Data Engine".

But anyway, this is definitely not an excuse to beta test the system with actual customers' lives, IMHO.


"What if the neural net or the system used to detect obstacles didn't see it because the precise configuration of the data fooled it?"

I think the answer is that all state must be represented as a probability over states, and the controller must be set up to be robust under this ambiguity of state. Then the problem becomes one of making sure that probability is well calibrated so that you get the appropriate control response. The whole software stack for autonomous driving is also a black box; that is just as big an issue. If some parts are sophisticated but are being fed into others that are naive, we have no idea. Every sensor has issues; I wouldn't be so quick to worry about the neural networks in particular.
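
As a toy illustration of "robust under ambiguity of state" (the costs and probability below are invented for illustration; a real stack would be far more involved):

    # Illustrative only: decide whether to brake given an uncertain obstacle estimate.
    # The cost numbers are made up to show the asymmetry, not tuned values.
    COST_FALSE_BRAKE = 1.0          # nuisance braking
    COST_MISSED_OBSTACLE = 1000.0   # hitting something

    def should_brake(p_obstacle: float) -> bool:
        # Brake whenever the expected cost of ignoring the estimate exceeds
        # the cost of a false brake.
        expected_cost_if_ignored = p_obstacle * COST_MISSED_OBSTACLE
        return expected_cost_if_ignored > COST_FALSE_BRAKE

    # With this asymmetry, even a roughly calibrated 0.2% obstacle probability
    # is enough to trigger a defensive response:
    print(should_brake(0.002))  # True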


This is why we use a redundant geometric approach (with LiDAR) as a fallback to NNs in our perception stack. NNs can have false negatives, which is unacceptable, but dealing with false positives is more manageable and safer.


Does lidar have anything to do with the redundancy?


Since a lidar uses a discrete frequency band and its own energy source, it's much easier to process the signal for depth information (using the lag between sending and receiving the return pulse(s) and the wavelength). Arguably such a system might provide a better basis for a backup system based on modelling obstacles and their distances from the vehicle.
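
For the round-trip-time part, the basic depth calculation is tiny (a minimal sketch; the microsecond figure is just an example):

    # Time-of-flight depth from a single lidar return (illustrative).
    SPEED_OF_LIGHT_M_S = 299_792_458

    def range_from_round_trip(delay_s: float) -> float:
        # The pulse travels to the target and back, so divide by two.
        return SPEED_OF_LIGHT_M_S * delay_s / 2

    # A hypothetical 1 microsecond round trip corresponds to ~150 m:
    print(range_from_round_trip(1e-6))  # ~149.9 m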


While we may get to the point where sensors and a form of AI can reliably drive a car, we're so far from it that we don't even know if we're heading towards the goal post.

What we need, if we want to solve the partial self-driving problem right now, is not fully autonomous systems. We could start by agreeing on a way for cars to sense the road - think of the way robotic lawn mowers work, where you have to bury cables. Then implement that on highways, ring roads and other long stretches of road.

Then we can agree on a way for cars to communicate and exchange relevant information.

Then we can equip cars with hardware like Teslas have, and then use it to collect data and compare the "auto pilot" with the known working system of following a signal in the road. We still need LIDAR to reliably stop for other cars, but that is a mostly solved problem today.


The problem we have right now is that:

1. laying cables wouldn't work because there are a great many historic towns and cities (e.g. Amsterdam), and digging up cobbled streets to lay cables for self-driving vehicles simply isn't going to happen when those cities have so many bigger problems for autonomous cars, such as trams, cycle paths, pedestrian zones and tourists who will just walk out in front of everyone in an unpredictable haze.

2. you have districts that are so underfunded that the roads aren't even maintained in their current state.

3. So let's say you scrap redeveloping the roads and put radio sensors on lamp posts - then you have the problem that many rural areas, even in richer countries like the UK, don't even have street lamps. And what about countries like Australia and America that can have long stretches of tens or even hundreds of miles without a single house, let alone street lights to put sensors on.

I agree with you that normally this kind of problem requires a technical approach, but the problem we have is that redeveloping our roads would be such a monumental overhaul that it would actually be a bigger job than developing the AI to handle our existing infrastructure. Plus redeveloping our roads only solves part of the problem - you still have the issue of other cars on the road, pedestrians, cyclists, unexpected hazards like fallen trees, etc. You could fix other vehicles on the road by allowing all vehicles to communicate with each other - like a mesh network - but then you have the problem of doing that securely (so you couldn't just "virtually ram" another car off the road by telling it there's an obstruction that doesn't exist), and you again have the problem that it would be more expensive and take longer to retrofit every single vehicle with that tech than it would be to write AI. And you still wouldn't have addressed the problem of kids, animals and fallen trees.

AI might seem like a naff workaround and maybe if / when we colonise a new planet then we might be able to build roads with sensors inside tubes that restrict access (I guess a little like an underground rail network) but for the time being AI is our best bet for autonomous vehicles.


I specifically said that it should be done on high-volume, maintained roads, such as highways and ring roads. It doesn't have to happen overnight, but it could happen gradually. It would serve other purposes too, like guidance for snow plows. It's an 80/20 solution - cover 80% of the travel for 20% of the cost. Solving the problem on highways and high-volume roads without cyclists is significantly easier, and would still give a tremendous benefit.

Regarding laying the cables in existing roads. We're already pretty good at "shooting" cables under existing roads with pneumatic solutions, and I'm confident that we could do so for rural roads at a modest price. But that's a different talk.

If we want to develop an "AI" for this, it's not going to be ready in 10 years, nor 20 - we do not have the processing power or the understanding of the domain to do it at this point, and we're moving rather slowly. Some of the more practical problems, such as secure communication, are 100% solved, and reliable car, pedestrian and cyclist detection are problems that are fairly well understood at this point, and we certainly can make solutions that rival even alert and good drivers.

Things like this move slowly - it's not going to be retrofitted, but by agreeing on a standard, new vehicles can implement it gradually.


> I specifically said that it should be done on high-volume, maintained roads, such as highways and ring roads. It doesn't have to happen overnight, but it could happen gradually. It would serve other purposes too, like guidance for snow plows. It's an 80/20 solution - cover 80% of the travel for 20% of the cost. Solving the problem on highways and high-volume roads without cyclists is significantly easier, and would still give a tremendous benefit.

I've travelled by road a fair bit around different countries and I assure you that a lot of high volume maintained roads still fall into the categories I outlined. Moreover, a lot of high volume roads aren't maintained. I honestly don't think you would even get remotely close to covering 80% of the roads for 20% of the cost. Maybe if you isolate specific districts like Silicon Valley then your figure might be accurate. But for "normal" towns in Europe that wouldn't be the case and, as has been discussed elsewhere on HN, having some roads updated and others not would actually be more dangerous than having none redeveloped because humans would demonstrate the same complacency across all the roads once they're used to autonomy on the new roads.

Also highways aren't the problem - they're significantly easier to determine in code and see fewer deaths per year. It's the smaller streets in towns that are the problem. Both from the perspective of statistical deaths and also from the perspective of the complexity of AI required to navigate them. They're what really need guide wires the most.

However going back to highways: even on motorways you still have the problem of motorcyclists weaving in and out of traffic. So I'd argue that you haven't entirely eliminated the "cyclists" problem - you just now have faster and even deadlier ones.

I should say that I do agree with you that there would be tremendous benefit if it were feasible. What I disagree with is just how feasible it is.

> Regarding laying the cables in existing roads. We're already pretty good at "shooting" cables under existing roads with pneumatic solutions, and I'm confident that we could do so for rural roads at a modest price. But that's a different talk.

Shooting the cables is easy if you have pavements with cables already installed. If you don't, then you need to dig up the road to lay them. And that's still only half the story. There's a whole load of costs involved in the planning stage; scheduling the work around busy periods (which sometimes means paying for night shift work) and organising the road closures. And then there's the maintenance of that infrastructure after the work has happened. Remember that a lot of busy streets aren't even maintained currently. And it's doing that work sympathetically around historical locations (where applicable).

There's significantly more cost and work involved than I feel you're giving credit for.

> If we want to develop an "AI" for this, it's not going to be ready in 10 years, nor 20 - we do not have the processing power or the understanding of the domain to do it at this point, and we're moving rather slowly.

I completely agree that we're decades from a solution, but I disagree that we're moving rather slowly. The stuff that is in consumer vehicles today - even some entry-level cars - would have been considered sci-fi just 10 years ago. But even if we are 50 years away, I can't see cables being laid in twice that time. I mean, half the towns in England and America don't even have broadband, let alone fibre, and you have consumers begging to pay for it. Who's going to foot the bill for redeveloping all our roads? Musk certainly won't.

> Things like this move slowly - it's not going to be retrofitted, but by agreeing on a standard, new vehicles can implement it gradually.

The problem is you need it implemented on every vehicle for it to work. It's like the anti-vaxxers' argument that "I don't need to get vaccinated when everyone else is", but it actually doesn't take a large percentage of unvaccinated people for an outbreak to still occur. Equally, you only need one vehicle without a mesh network installed to cause an accident that could kill several people - including those in autonomous vehicles. If you don't have total coverage then the mesh network isn't going to be reliable.


> I've travelled by road a fair bit around different countries and I assure you that a lot of high volume maintained roads still fall into the categories I outlined. Moreover, a lot of high volume roads aren't maintained. I honestly don't think you would even get remotely close to covering 80% of the roads for 20% of the cost.

I did not say 80% of the road, but 80% of the travel. Most of the kms driven are on highways, ring roads, and main roads into larger cities.

> having some roads updated and others not would actually be more dangerous than having none redeveloped because humans would demonstrate the same complacency across all the roads once they're used to autonomy on the new roads.

The car will pull over if you don't take over. I fail to see the issue in that. I'm advocating for a system that will autonomously drive where the infrastructure allows, not everywhere. If the system can handle driving on the highway for me for 2 hours, I'll be significantly more fit for driving the last 20 minutes.

> Shooting the cables is easy if you have pavements with cables already installed. If you don't, then you need to dig up the road to lay them

No, you do not. They can even remove a small patch of tarmac, "shoot" an entire sewer line a long way, and remove a patch of tarmac at the destination. That's how they get optical fibers under roads and brick paving without digging them up.

> There's a whole load of costs involved in the planning stage; scheduling the work around busy periods (which sometimes means paying for night shift work) and organising the road closures

I never said it wasn't extra work. A cost-benefit analysis is needed, but the current AI approach will either cost many lives, or we agree on a better, more expensive way.

> If you don't have total coverage then the mesh network isn't going to be reliable.

Again, a 100% solution is never happening. But by legislation we can increase adoption slowly but steadily, until and past the point where we see the many benefits. In the EU all new cars must have DRLs, AEB, TPMS, ESP, ABS, seat belts, etc. - this would just be another thing new cars must have to be allowed on the market. It should not be up to the manufacturers to agree, because they all want their own solution, not an interoperable one.


> I did not say 80% of the road, but 80% of the travel. Most of the kms driven are on highways, ring roads, and main roads into larger cities.

I don't think your experiences are typical for most people the world over. For example, in the UK we don't have that many motorways, so most commutes by car are along busy "normal" roads or dual carriageways (which are a little like motorways but with frequent junctions like roundabouts that have complex social interactions, and thus might need AI to navigate safely anyway). But even there, most people in the UK don't spend 80% of their drive on dual carriageways - let alone motorways. Instead we have "main roads" which join towns together.

Commuter traffic in the towns in the Netherlands where I've been was more focused on public transport and bicycles than on cars - though that's not to say they don't also have heavy motor-vehicle traffic as well.

I remember when I travelled around America thinking how different attitudes to transport were compared to the UK and many European countries. I think that's in part because of how much farther apart places are in the US. But I'm now drifting into speculation. My point is that your observation about travel behaviours isn't globally representative.

> The car will pull over if you don't take over. I fail to see the issue in that. I'm advocating for a system that will autonomously drive where the infrastructure allows, not everywhere. If the system can handle driving at the highway for me for 2 hours, I'll be significantly more fit for driving the last 20 minutes.

Again, motorway driving is the easy part. It's where there is less to observe, low risk of people stepping out in front of you etc. It's where there are fewer road deaths too. And it's also the easiest part of driving to automate. So motorway driving isn't where we should be concentrating our efforts.

> No you do not. They can even remove a small patch of tarmac, "shoot" and entire sewer line a long way and remove patch of tarmac at the destination. That's how they get optical fibers under roads and brick paving without digging it up.

That's not how they do things here :) But in any case I did say that's still only a small part of the overall cost and orchestration needed in deploying cables.

> the current AI approach will either cost many lives

Will it though? That sounds like baseless speculation to me. And who's to say your solution wouldn't also have glitches that result in a fatality? Ultimately you're still depending on software to make decisions, and even robotic lawn mowers aren't without their own odd hiccup.

> Again, a 100% solution is never happening.

I know. That was my point. It's not going to happen and partial coverage isn't good enough.

> But by legislation we can increase adoption slowly, but steadily, until and past the point where we see the many benefits.

50% coverage isn't enough. 90% coverage isn't enough. 99% coverage isn't even enough, but at least you're starting to get close. And how long will that take? Lots of people are still driving 20-year-old cars every day. So let's say 10 years to design and develop, 5 years to standardise the specification, 5 years of bureaucracy to get it legislated, then another 5 years for the legislation to become effective (because governments like to give a warning). Then you need to wait 20 to 30 years until most people have bought a new car. That's 55 years at a minimum, and you'd still not cover the thousands of people who like to drive classic cars, so you might as well double that figure. By which point AI has long since caught up with the ability to sense other cars on the road (because we've already mostly solved that specific part of the problem anyway), and at a fraction of the cost.

I've already said that if we were to build infrastructure from scratch then your approach is definitely the way we should progress. But with the way our road infrastructures run at the moment - the vehicles on them, the hazards that can occur, the cost of maintaining the infrastructure and how that's even funded - your solutions are simply not feasible for the real world.

AI is one of those occasions where worse is better. I completely agree with you that your solutions are better - technically speaking - at solving specific problems related to driverless cars, but they don't solve all the problems, they're more expensive to implement, can't be deployed everywhere, and yet would still take longer to roll out than our current tack of developing AI. When you actually look at each problem in detail, your proposals fall far short of being workable.

These things aren't built without considerable thought put into them, and I seem to recall that early prototypes of driverless cars did work on the principle of having smart cars that communicated with each other, and smart roads too - essentially everything you've been advocating. So it's not like you're suggesting anything that hasn't already been tried. But every lecture and documentary I've watched on this subject comes back to the same final conclusion: for self-driving cars to become a thing, they need to build the car to drive like a human would, because changing the roads and the cars on them is actually a harder problem to solve.


I'm from Denmark, and my experience ranges across all the Nordic countries, Germany, Spain, France and Italy, plus a bit in other countries, but not enough to count as real experience.

We could turn this system around. It would reliably cover the vast majority of semi transport if it just worked on major highways in Europe: drive the truck to the highway on-ramp, jump out, and let it drive 1000 km before someone picks it up and drives it "the last kilometer".

I think your timeline is overly pessimistic. 55 years ago was 1964 - a lot has happened in that time frame, and we put a man on the moon and a rover on Mars in far less time. It can happen significantly faster. If we decided to start developing now, I believe that within 5-10 years we would get near a specification and legislation. That leaves changing the cars, and that only has to be done gradually. We know so little about this problem domain that it's best done gradually and carefully.

I think we can agree to disagree.

I'm not convinced that weak AI will be anything but an enhancement to the human driver.

It's my belief that we don't even have enough of a grasp of the AI domain to do this yet. Perhaps we will get it, perhaps we will not. Nothing that has happened in the AI space is new from a theoretical point of view. Neural networks were discovered in 1943; we just recently got the computational power to use them. But they don't scale and they're a black-box approach. And they're easily fooled (see recent reports of a simple sticker on the road making a Tesla drive into the wrong lane, against traffic).

The strong AI field hasn't seen any real progress or breakthrough in a long time, and people still argue whether it's even possible. To put it into numbers: a worm, whose brain we have not successfully modeled yet, has around 300 neurons. A human has 10^11, each of which has far greater connective complexity. Strong AI is not coming in the next 100 years, if at all.


> What if the neural net or the system used to detect obstacles didn't see it because the precise configuration of the data fooled it? And if that's the case then what's next? How do we decide when it is okay for safety-critical systems to be opaque? How do we deal with autonomous driving if the conclusion comes out to be a "no" for this case? How should broader society deal with a yes? And who decides all of this in the first place?

All good questions, but I will note that the exact same questions apply equally well or better to human drivers, yet people don't seem to be as concerned.


People are very concerned about unsafe human driving and the liability for the massive number of deaths and damages it causes. It's the reason we have insurance and a big reason why we have self-driving aspirations at all.


The sensor technology is a big factor here. You can use LIDAR for perception without needing a neural net. On the other hand with radar + ultrasound + vision you'll need a neural net to try to detect things like obstacles, since the raw data from those sensors doesn't directly tell you whether there is an obstacle in the way.


Everything you said is very wrong. LIDAR produces line of sight depth information. Stereo cameras produce line of sight depth information. Radar produces line of sight depth information. Ultrasound produces line of sight depth information. The only differences are how, how fast, and how detailed.

The biggest difference between LIDAR and stereo vision is that LIDAR doesn't depend on illumination because it uses its own.


> Stereo cameras produce line of sight depth information

That is wrong. Stereo cameras provide data which, after significant processing, can result in line-of-sight depth information. With radar (including LIDAR) and ultrasound, the depth information is essentially "free" since very little processing is required to extract the depth information (i.e., the information returned is the depth).

So the biggest difference between stereovision and everything else is that stereovision requires additional processing to extract the depth information.


Aren't the algorithms for extracting depth from stereo imagery already solved and thus also essentially "free"?


No, it's like this:

Radar, LIDAR, ultrasound: returning packets have X value, where X is run through a single algorithm to return depth. In the case of ultrasound and radar, the strength of the return packet is generally directly proportional to depth (algorithmically). LIDAR systems might require an additional processing step depending on the type of LIDAR system used, since many systems use shifted-arrays or rotating frequencies.

Stereovision: returns two matrices of data (i.e., each image). Each matrix is run through edge-detection and object-detection algorithms, and the resultant data is then run through more algorithms to match and compare objects between the two images. Finally, once objects have been matched up, a parallax algorithm determines their 3d positioning. While this can all be done in hardware (i.e., ASICS), it's still multiple additional levels of processing that must be done before you can even get the data to the decision-making part of the system. Stereovision is thus always inherently laggier than a radar-based system when it comes to determining depth.


"the strength of the return packet is generally directly proportional to depth (algorithmically"

Err, no it isn't. You send a tx pulse, correlate it with the rx pulse, and use the time of flight and the wave speed to get a range. You need precise information about where the tx pulse was sent to turn that range into a 3D point relative to the sensor. The correlation and orientation steps are algorithmic and fundamentally statistical. For ultrasound and radar you can put a code in the pulse to make matching them up easier (the pulse lengths are typically bigger than the resolution you require). All algorithms. All can and do go wrong.
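To make the correlate-and-range step concrete, here is a minimal sketch in Python with a simulated ultrasound pulse; the sample rate, pulse shape, noise level and target range are all made up for illustration, and a real front end would do this in dedicated DSP hardware.

    import numpy as np

    fs = 1e6      # sample rate: 1 MHz (assumed)
    c = 343.0     # wave speed in m/s (speed of sound, for ultrasound)

    # Transmitted pulse: a short windowed 40 kHz burst
    t = np.arange(0, 1e-4, 1 / fs)
    tx = np.sin(2 * np.pi * 40e3 * t) * np.hanning(t.size)

    # Simulated receive buffer: an attenuated echo after a round-trip delay, plus noise
    true_range_m = 2.5
    delay = int(round((2 * true_range_m / c) * fs))
    rx = np.zeros(20000)
    rx[delay:delay + tx.size] += 0.3 * tx
    rx += 0.01 * np.random.randn(rx.size)

    # Correlate rx against tx and take the lag with the strongest match
    lag = int(np.argmax(np.correlate(rx, tx, mode="valid")))

    # Range = wave speed * round-trip time / 2
    print(f"estimated range: {c * (lag / fs) / 2:.2f} m")   # ~2.50 m

Every one of those steps (the correlation, the pulse coding, the beam orientation) is an algorithm that can go wrong.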

Old-school stereo matching looks at cross-correlation (or cross-entropy) of image patches. If the cameras are rectified (translated with no rotation) then you just scan across the row of pixels to find the patch with the best match, and how far across it is (the disparity) is inversely proportional to the depth. Feature correspondence (with SIFT-like features, very low level) might be used for getting relative rotation and translation info between the cameras for calibration and as a depth prior. People are doing the same sort of thing with NNs to get the depth map.
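For what it's worth, that classic patch-matching pipeline is only a few lines with OpenCV's block matcher. This is a sketch only: the image files, focal length, baseline and matcher parameters below are placeholders rather than values from any real rig.

    import cv2
    import numpy as np

    # Placeholder inputs: a rectified left/right pair from a calibrated rig
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching: for each pixel, scan along the same row of the other
    # image for the best-matching patch. No object detection is involved.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth is inversely proportional to disparity: Z = f * B / d
    focal_px = 700.0     # focal length in pixels (assumed)
    baseline_m = 0.12    # camera separation in metres (assumed)
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]

That's still several passes over two full images before any depth exists, which is the earlier point about stereo being laggier than a time-of-flight return.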


> Each matrix is run through edge-detection and object-detection algorithms, and the resultant data is then run through more algorithms to match and compare objects between the two images. Finally, once objects have been matched up, a parallax algorithm determines their 3d positioning.

Where is your information coming from, and can you point me to a stereo algorithm that uses object detection?


Right, almost all stereo depth calculation is matching without object detection. Maybe doing both simultaneously counts? ;-) https://arxiv.org/abs/1902.09738


Sure it’s laggier but with dedicated silicon that’s a non-issue.

Fact is vision is more than adequate. And lidar is literally blind.


The problem is not lag, it's unpredictability. For example the white side of a truck trailer may have no distinguishing features, so in some situations the parallax-based depth algorithm returns incorrect information.


Sure, and also that cameras only work if what they're looking at is sufficiently illuminated. And yet humans still drive based exclusively on stereopsis. The particular problem you mention is really a camera fidelity issue not a stereo issue, just like humans with poor eye function are also not allowed to drive.


Depth estimation is not a solved problem. Why do you think SDC companies pay so much to buy Lidar if depth estimation was solved?


We need to define solved. Used in the Musk sense, it is solved because it's possible.

Used in the academic sense, it's not as reliable as depth from LiDAR.


This is the crux of the problem. Elon Musk and Tesla are happy to deliberately use words with misleading meanings: Autopilot, Full Self-Driving, "solved", safer than a human driver, etc.


There's nothing wrong with using words. The real key is to understand carefully the context in which the words were used.

The "misleading" accusation is just a projection people like you are imagining after cherry picking words without paying attention to the context.

The words you are missing are things like: "will be," "in the future," "it is possible," "assuming regulatory approval," etc.


Because they bet wrong. It happens.


[flagged]


‘Did that Tesla that smashed into the side of a semi truck bet correctly?’

That’s needlessly antagonistic and not cool.


Antagonistic to whom?

Tesla is using the public as guinea pigs. They can defend themselves with data. Customers/investors should ask them to share more than they do, which is a mere few sentences [1] [2].

[1] https://www.tesla.com/VehicleSafetyReport

[2] https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s


Sure, but if you’ve got the processing it’s pretty easy.


That's a very common perception, but it's not true! Kalman filters have for years been used to get usable data out of these sensors, and multiple studies have shown that ultrasound can be used standalone in low-speed scenarios.

Design and experimental study of an ultrasonic sensor system for lateral collision avoidance at low speeds - https://ieeexplore.ieee.org/abstract/document/1336460

Environmental perception and multi-sensor data fusion for off-road autonomous vehicles - https://ieeexplore.ieee.org/abstract/document/1520113 (this one uses laser range finders not LIDAR with some vision)

"Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding" - https://ieeexplore.ieee.org/abstract/document/7485214

Real-time SLAM has also been done with vision + inertial navigation for UAVs; A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM - https://ieeexplore.ieee.org/abstract/document/6906892 - https://www.research-collection.ethz.ch/bitstream/handle/20....

Some people have proposed laser range finders with ultrasound (sonar);

'This article addresses the problem of performing SLAM under reduced visibility conditions by proposing a sensor fusion layer which takes advantage from complementary characteristics between a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to be resilient in scenarios where visibility cannot be assumed at all times.'

A Sensor Fusion Layer to Cope with Reduced Visibility in SLAM - https://link.springer.com/article/10.1007/s10846-015-0180-8 - http://eprints.lincoln.ac.uk/16820/7/__ddat02_staffhome_jpar...

There are enough datapoints between vision, ultrasound and radar for old school algorithms to do real time SLAM and make cost maps. So the problem and solution definitely isn't MOAR SENSARS.
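To give a flavour of the Kalman filtering step in the simplest possible case, here is a 1-D sketch that fuses noisy ultrasound range readings into a smoothed range and closing-speed estimate. The constant-velocity model, noise values and timestep are made up; a real system tracks a far richer state than this.

    import numpy as np

    dt = 0.05                        # 20 Hz updates (assumed)
    F = np.array([[1, dt], [0, 1]])  # state transition for [range, range_rate]
    H = np.array([[1.0, 0.0]])       # we only measure range
    Q = np.diag([0.01, 0.1])         # process noise (assumed)
    R = np.array([[0.25]])           # ultrasound measurement noise (assumed)

    x = np.array([[5.0], [0.0]])     # initial guess: 5 m away, not closing
    P = np.eye(2)

    def kf_step(x, P, z):
        # Predict forward one timestep
        x, P = F @ x, F @ P @ F.T + Q
        # Update with the new range measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    # Feed in a few noisy readings of an obstacle slowly closing on the sensor
    for z in [4.9, 4.82, 4.71, 4.58, 4.52]:
        x, P = kf_step(x, P, np.array([[z]]))
    print("range %.2f m, closing at %.2f m/s" % (x[0, 0], -x[1, 0]))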


Kalman filters over what state space? Localization isn't really the challenging aspect of self-driving cars.


I'm not sure what you mean, but SLAM includes a strong mapping component with edge and environmental geometry detection. The unobserved variables here are the future locations of the boundaries of vehicles and obstacles (we can't predict where a vehicle will move, and the same goes for an obstacle).


The autopilot failed to detect a stationary obstacle.


A KF is one particular online method for estimating the state: the vehicle's location and the locations of things in the environment (the M bit in SLAM).


I think the 'how do we decide' comes when we can show autonomous driving is significantly safer than humans.

I suspect the problem will then be that computers avoid many accidents humans cause, but make mistakes that seem obvious to humans. So even though the overall risk is lower, it becomes very hard to accept that the computer did something wrong versus admitting a person was speeding and stuffed up, killing themselves.

To allow this tech to exist we might need legislation that autonomous car companies can't be sued unless there is clear malpractice in their systems, versus some level of 'accidents happen'. In the same way a wife doesn't typically sue her husband who caused an accident that killed himself.


The problem with Tesla's "Autopilot" is that it's marketed heavily as just that: autopilot. It's good enough to work that way 99% of the time, too, which is a recipe for disaster: things that work 99% of the time but have the potential to fail catastrophically often get glossed over by humans because it's hard to keep paying attention to something that rarely fails.


That's all very true, but one interesting thing is that the driver in this case knew that at this location the autopilot was 100% likely to fail. I believe I read he had even reported it several times to Tesla Service and his family.

The lane markers in this area, based on video I watched of it years ago, are pretty bad, IIRC. Like, a human could have problems following the markers bad. As evidence to support this I'll say that there had been an accident in this exact area days (again, IIRC) before that destroyed the "crumple zone" in this area. :-)

It's going to be extremely interesting to see how the courts rule in a case where "I knew the Tesla on autopilot would drive in to the barrier in this area, and I turned on autopilot in this area and it drove into the barrier."


The problem is:

- Autopilot doesn’t work in that area. Driver reports it to Tesla.

- Tesla fixes the issue in a software update.

- Driver sees the improvement, and happily relies on autopilot.

- After a while, a new software update introduces a regression.

- Driver doesn’t find out about the regression until it’s too late (and is not around to testify about it anymore).


The driver has only been not around to testify three times in the history of Tesla, and only once since 2016 (despite increasing capabilities and far more cars on the road). The instance here is the one time. [1]

Regressions undoubtedly happen, but clearly in the vast, vast majority of cases, no deaths are involved. It's not "the problem," it's "the exception."

[1] https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal...


Is this what happened in this case, or is this speculation? Just curious, because it's a valid point, but I hadn't heard that it got better but then reverted. Source?


Except he won’t argue that because he is dead. Which, in itself, is the best counter argument to your conclusion that he knew what he was doing and allowed it to happen intentionally. Unless you’re also theorizing that he chose to commit suicide using one of the most creative methods he could, while making Tesla take the blame. It’s not impossible I suppose, but it doesn’t seem like the most reasonable conclusion.


Except there are multiple reports from the driver that Autopilot was problematic in that location.


I imagine that a court or some regulatory agency could order these kinds of reports to be automated and used to shut down Autopilot, or make it unavailable, on some routes.

It would be interesting to design the UX for this. Depending on the wording, it could even make them more liable if accidents happen outside of roads with known issues.


Why does it even need to be reported? Tesla should be able to detect where Autopilot is regularly deactivated or does something wrong (driver takes over with sharp input on the brakes or steering wheel), and use that to deactivate it pre-emptively.
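A minimal sketch of what that aggregation could look like, purely as illustration (the record format, segment IDs and thresholds are hypothetical, not anything Tesla actually exposes):

    from collections import defaultdict

    # Hypothetical disengagement records: (road_segment_id, was_sharp_takeover)
    reports = [
        ("US-101_km42", True), ("US-101_km42", True), ("US-101_km42", False),
        ("I-280_km12", False), ("US-101_km42", True),
    ]

    counts = defaultdict(lambda: [0, 0])    # segment -> [sharp takeovers, total]
    for segment, sharp in reports:
        counts[segment][1] += 1
        if sharp:
            counts[segment][0] += 1

    # Flag segments where sharp manual takeovers are frequent enough that the
    # system should refuse to engage there until the location is reviewed.
    MIN_REPORTS, MAX_SHARP_RATE = 3, 0.5
    blocked = [seg for seg, (sharp, total) in counts.items()
               if total >= MIN_REPORTS and sharp / total > MAX_SHARP_RATE]
    print(blocked)    # -> ['US-101_km42']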


I think they do the first part of your suggestion (they send reports from all disengagements, and detailed reports from some), but not the last part.

Sounds like a sensible idea, particularly in an area where they've seen a near collision. The difficulty might be that conditions change all the time (barriers are moved or removed), and that makes it difficult to make this kind of determination on a case by case basis.


Indeed, I meant that the car should report the manual takeovers to Tesla HQ so the information can be shared. In addition, you should be able to add context to it as a human.


Well, at least Tesla drivers can and do openly report. When I had issues with phantom braking etc. on my BMW while driving on the Autobahn, I never reported them since there was no sensible channel.

You just learned to live with the limitations and used it where it worked under supervision.


Agreed, it's not impossible but also seems unlikely. It seems, as I was writing my previous message, like the plot for some crazy courtroom drama show: He couldn't go on living but needed to support his family!

Seems unlikely, but I just don't understand why he'd be using Autopilot in this area given his past with it.

When I first got Autopilot in my Tesla, I was playing with it on the city streets, and it decided to swerve into the divided, concrete median at the end of the intersection, deciding that was the lane. I grabbed control of it, no problem, but 2 years later I still am extra vigilant around any lane dividers. Mostly though, I don't use Autopilot in such restricted areas.


>Like, a human could have problems following the markers bad.

The black-and-white stripes on the highway divider were good, though [1]. They are hard for humans to miss.

That's the fundamental problem: the signage optimized for being glaring to humans is sometimes invisible to self-driving cars.

Instead of trying to teach cars to read signs not meant for them, we should install signs that are easy for cars to understand.

[1]https://www.mercurynews.com/2018/03/28/tesla-claims-highway-...

(this also shows the lack of crumple zone, but clear signage).


In these kinds of civil liability cases there doesn't have to be a blanket assigning of 100% fault. The court can in fact decide that Tesla, the DOT and the driver all contributed in some proportion and assign damages accordingly.


This is exactly why Waymo and Cruise and such believe autonomy Level 3 or so is utterly dangerous and have intentionally skipped it entirely.


I wonder if AP is really 2x safer as Musk claims. Highway driving is already significantly safer per mile than local roads,

> The grim statistics provided by the National Highway Traffic Safety Administration also show that drivers on rural roads die at a rate 2.5 times higher per mile traveled than on urban highways. Urban drivers travel twice as many miles but suffer close to half the fatal accidents. [1]

Could this explain away Musk's AP safety claims, given that it's mostly used on highways?

[1] https://www.npr.org/2009/11/29/120716625/the-deadliest-roads...
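As a back-of-the-envelope illustration of the concern, using invented per-mile rates that keep roughly the 2.5x ratio quoted above (these are not Tesla's or NHTSA's actual figures), road mix alone can manufacture a large chunk of an apparent safety advantage:

    # Hypothetical fatality rates per 100M miles, illustration only
    highway_rate = 0.6
    other_rate = 1.5     # ~2.5x the highway rate, per the ratio above

    # Fleet-average drivers split miles across road types...
    avg = 0.5 * highway_rate + 0.5 * other_rate      # 1.05

    # ...while Autopilot miles are (hypothetically) mostly highway miles
    ap = 0.9 * highway_rate + 0.1 * other_rate       # 0.69

    print(avg / ap)    # ~1.5x "safer" purely from where the miles happen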


I would assume that Musk's claim was comparing manually driven Teslas on the highway with AP Teslas on the highway. But I don't think there's a way to verify.


He may have been using data from the flawed NHTSA study,

https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-...


This 2.5x death rate is highly correlated with how quickly medics can arrive at the scene of the accident.


That may be. The point here is AP is mostly used on highways and that we are unable to verify Musk's claims without data from Tesla.


Do you think Tesla's insistence that LIDAR is useless has a part in this case? To me it's telling that everyone in the ASD game _except_ Tesla is using LIDAR.


Has Tesla actually asserted that LIDAR is “useless”? I know Musk has argued that LIDAR is not necessary, but “not necessary” and “useless” are very different things.


Let's ask Elon - https://youtu.be/Ucp0TTmvqOE?t=6081

"Anyone who relies on LiDAR is doomed. Doomed. It's like having a whole bunch of expensive appendices. One is bad, and now you want to put in a whole bunch of them. That's ridiculous."


It's probably a bad idea if your plan is to sell self-driving cars to consumers. Waymo can afford a substantially higher cost premium on their self-driving cars because they are looking at providing a taxi service, not a car that people can buy.


Tell that to Elon 5 years ago. I agree they'd have been better off focusing on luxury/sports EVs.


Yes. Musk said that “Lidar is a fool’s errand” and "Anyone relying on lidar is doomed" at Tesla Autonomy PR event recently. See: https://techcrunch.com/2019/04/22/anyone-relying-on-lidar-is...


It’s almost certain they are wrong.


Care to elaborate why?


Unlikely. The only comments he's provided here are one sentence sound bites that channel Musk's purest essence.

Companies using LIDAR? "They bet wrong".

People describing how stereo cameras don't inherently see 3D? "Baloney. Based on eyesight".

Companies that have spent years and billions developing and testing autonomy? "Almost certain they are wrong".


Here's a nice video: https://answerswithjoe.com/lidar-vs-computer-vision-waymo-be...

It's a little more compelling on the data side but also goes into Lidar vs vision.

It might be controversial at this point to assert that vision is better. But I don't think it's controversial any longer to assert that vision is sufficient.

Given that driving and roads are currently set up for vision and Lidar will never be able to "see", it seems very reasonable to think vision could ultimately be better.


Or none of it will work.


I and many others believe that it is very unlikely.


Well, you forgot my lengthiest scribe:

"Sure it’s laggier but with dedicated silicon that’s a non-issue.

Fact is vision is more than adequate. And lidar is literally blind."

There's no evidence that I have seen that suggests vision-based autonomy will not be sufficient. Tesla is not the only one using the approach. Lidar has its own problem with cost and that it literally cannot see.


Telsa's bet, and I think it's a really fair one, is that if humans only need two eyes to navigate as well as we do, a car with >2 eyes could eventually be as good or better than humans.

On the other hand, humans do not have lidar systems in their head, and so lidar is the real unproven technology. It's also clear that lidar is especially lacking in situations which require seeing longer distances (like say, driving!)

To be fair, lidar is winning in terms of problems vs. miles driven, but I believe their advancement will plateau far sooner (and perhaps permanently) than computer vision based systems.


Sure, humans can drive with two eyes and a brain. But do Tesla's cameras have the amazing abilities of the human eye, and do his "FSD" chips have the power of the human brain?

We'll find out in a few years...


Humans can even drive with one eye and a brain.


I'm not a pilot, but I have had flying lessons. Airplane autopilots are extremely simple. They can basically fly you in a straight line, at a fixed altitude. Tesla's autopilot is actually way more advanced than those found in airplanes.


I think you're forgetting (more than a few) important functions that an airliner autopilot can do. Off the top of my head there's auto-throttle, auto-trim, and of course, if the runway is equipped, land the plane.

Also, if the pilots input a heading change, the plane will execute a 1G turn using all control surfaces so it's almost imperceptible to the passengers.

I think you're oversimplifying airliner autopilot.. by a lot.


>and of course, if the runway is equipped, land the plane.

There is certainly no such thing as an auto-land system in which the pilots are not required to be ready to take full control of the aircraft at a moment's notice. It is Level 2 in the strictest sense.


My Tesla (or at least the one from my colleague I occasionally get to drive) blaring out "MINIMUMS" would definitely get my attention quicker than the soft chimes they have now when autopilot disengages.


I think that the comparison of an aircraft autopilot to a car autopilot is just apples to oranges. They both operate in completely different environments, with different amounts and types of noise, are designed at a high level to do totally different things (yes they both "steer", but the problems they solve have no overlap), and are functionally completely different. I see this comparison a lot, and it doesn't add anything to the conversation, other than pointless bickering.


"autopilot" covers everything from a wing leveler to what you're describing. GP's description is entirely accurate for a general aviation airplane.


It's accurate but misleading when the training for general aviation costs a significant fraction of the car in question in 1 on 1 training alone. That Tesla is using a term easily misinterpreted by anyone who hasn't read an avionics manual stinks real bad.


Autopilots run the gamut from simple wing-levelers to fully automated departures, en route navigation, arrival procedures, landings, and runway rollouts. You'd be amazed what even a light piston single-engine can pack. Just today I flew a light single with an autopilot that can do climbs, descents, level-offs, en route nav, and precision approaches (lateral nav + glideslope) down to 200' AGL. That leaves maybe two minutes of the entire flight that require hand-flying, in a box the size of a small car stereo. Pretty good stuff.


A commercial airliner autopilot (the kind people think of when you say autopilot) can perform evasive maneuvers and land the plane.


Landing a plane is fairly trivial considering that the autopilot simply guides the aircraft down a radio beam, cuts the throttle and raises the nose a bit when the on-board radio altimeter crosses below 50 feet. The first fully automated landing in revenue service predates the first microprocessor, which should give an idea of the tech used.

The difficult part is decision-making in non-standard situations like Sully landing on the Hudson with his A320.


Can you say more about evasive maneuvers? TCAS provides only advisory information, but doesn't control a plane, I thought.


Don't you need a special landing system for the system to automatically land the plane?


With a recent software update, Tesla cars now perform evasive maneuvers when necessary even when autopilot is turned off.

There is a growing body of videos from dashcams (a feature enabled by another software update) on youtube showing Teslas avoiding collisions that would have been caused by other drivers being negligent.


What are they evading?


Other airplanes. Also the ground.

But the simple autopilots (think an old Cessna) don't even try to avoid the ground for you. If you set it and forget it, you may find yourself dead on a mountainside.


You can absolutely crash a modern plane by entering altitude 0. https://www.bbc.com/news/world-europe-32063587


There are runways below sea level.

And low pressure zones.


Other airplanes.



What do you conclude about Tesla autopilot from this?


By that argument, every car with cruise control (i.e. almost all of them) has autopilot.

Clearly, with their naming of their system as autopilot rather than cruise control, Tesla is trying to imply that autopilot means something more. Namely, it means the driver / pilot can offload the work of routine driving / flying to the computer, only intervening at certain times (e.g. off-highway or during takeoff / landing) or in the event of failures / extreme conditions.


Do you understand the difference between going in a straight line and going at a consistent speed?

Cruise control only enables the latter.


Given that the answer to your "question" is obviously going to be yes, why did you choose that tone? Do you think there's a virtue to being dismissive and rude? Does it accomplish some worthwhile goal?


You're probably right. I wasn't trying to be dismissive, I just naturally matched OP's energy without giving it much thought. But with a little forethought I could have presented my argument with a more neutral tone.


So Tesla autopilot is superior to cruise control because it keeps you going in a straight line, and that's totally a feature car consumers need?


To go straight in a car, just release the steering wheel.


Straight isn’t everything. Autopilot keeps you in your lane even as the road curves. With caveats. It’s beta after all.


It its "in beta" as you say, keep it on private roads until out of beta. Sincerely, everyone else.


No way. Even in beta, I’d much rather put up with roads full of autopilot Teslas than a road full of human drivers. Have you seen how people drive?


In beta it’s probably still better than most humans, so that stance makes no sense. We’d do better to get human drivers off the roads.


It's not about what the autopilot does, but what the driver/pilot has to do while it's on. In a plane, you don't need to be ready to take over instantly at any moment with autopilot on, right?


I think the pilot absolutely has to be ready to take over at any moment. I know that at least on general aviation aircraft the autopilot can automatically disconnect for any number of reasons, including various weather conditions. When that happens the pilot has to immediately resume hand-flying.


That's what happens when you mislead your customers into believing your self-driving technology is far more advanced than it actually is. I hope Tesla loses this case and is forced to change their bullshit marketing before more people lose their lives.


> According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location.

The idea that "bullshit marketing" is the problem here seems short-sighted.


But they actually didn't. They specifically tell you it is NOT a fully autonomous autopilot and that it's more of an advanced cruise control.

They also specifically tell you to KEEP YOUR EYES ON THE ROAD. If you fail to do so, you're asking for trouble.

It is unfortunate the guy had kids who will now grow up without their father, but judging from the information I've read in the Tesla blog post, it is clear that he was in the wrong here, at least partially, for not paying attention while driving.


So why call it 'autopilot'? Seems like bullshit marketing to me. The first association we make when we hear that term is hands-off, not 'advanced cruise control'. They're specifically using that terminology to sell their products and now they're paying the price.


In an airplane, autopilot still requires the pilot to pay attention.


In an emergency, an airplane pilot often has time to pull out the operating handbook, flip through to the emergency checklist for their particular problem, and follow the instructions. Even for time-critical emergencies, it's recommended to read through the checklist afterwards to ensure you didn't forget anything [1].

By contrast, it would be very unsafe to be reading the Tesla owner's manual while driving. The level of attention required is much higher.

[1]: http://www.tc.gc.ca/eng/civilaviation/publications/tp11575-e...


Most people are not pilots, so they don't know that.


> They specifically tell you it is NOT a fully autonomous autopilot and that's it's more of an advanced cruise control.

Not here they don't:

https://www.tesla.com/en_GB/autopilot?redirect=no

Look at the caption in the beginning of the first video. They say the driver is only there for legal reasons and not doing anything.


"Current Autopilot features require active driver supervision and do not make the vehicle autonomous."

From your link.


There’s clearly contradictory information going on, so I can see why a customer may be confused.


If I take the perspective of someone who isn't informed about the technology, I can see how they could think that the technology is safe and Tesla is putting excessive warnings because their lawyers want them to. I think superfluous warnings on so many products has trained people to take warnings less seriously.


Tesla needs to get their story straight. They're trying to talk out of both sides of their mouth, using whichever statement is convenient in different places.


My understanding is the video is a demo of what their research technology can do as a one off demo, not recommended procedure.


They tell you those things in safety notes, but then the marketing people at Tesla call it "Autopilot", a name deliberately designed to imply hands-off operation.

To quote https://en.wikipedia.org/wiki/Autopilot

"An autopilot is a system used to control the trajectory of an aircraft without constant 'hands-on' control by a human operator being required."

Should users pay attention at all times, of course. Does Tesla provide mixed messages to users about the efficacy of the system.... I'd say there is evidence to support that.


Not only does the car provide repeated warnings, we can pretty clearly infer that—like anyone who has used Autopilot—he had become reasonably proficient at avoiding the warnings. So not only was he warned, he was actively working to avoid those warnings.

(Edit: to those voting me down, please contribute your objections to the discussion.)


Well, the warnings were unrelated and came 15 minutes before the crash, for one. So that invalidates the 'ignored warnings' point in this case. I assume the real chain of events will be analyzed now. And the fact that Autopilot is named the way it is for, presumably, marketing reasons doesn't help.


Exactly, no warnings for 15 minutes. Which means the driver HAD received warnings, understood the warnings, and learned how to avoid the warnings with such proficiency that he did not receive one for 15 minutes. It is not possible to avoid these warnings without being acutely aware of what the car was warning him not to do.

Furthermore, as a licensed car driver, he should know that driving is his responsibility. Nothing a car manufacturer says can diminish that responsibility. Even if Tesla's feature descriptions were ambiguous or insufficiently clear, that doesn't override the responsibility of a licensed car driver to remain in command of the vehicle at all times.

It's not Tesla's fault if a driver ignores the requirements of their driver license AND the vehicle's clearly worded warnings. Claiming that the driver is not at fault because they interpreted the word "autopilot" to mean "I can disregard my responsibility as a driver and ignore the vehicle's very clear warnings" is absurd.


The guy was an Apple engineer, so most probably he was also technologically literate enough to know the difference (despite whatever Tesla's marketing material claims).


> but judging from the information I've read in the Tesla blog post

Lol


If the Tesla had a proximity sensor that slammed on the brakes in the last 2-3 meters, would he have survived? I ask because it's a common luxury car feature and would be easy for Tesla to implement. IIRC this crash and the semi-truck crash had no evidence of the autopilot hitting the brakes, according to the NTSB.

Even if his velocity was only reduced by a fraction, the energy involved in the collision would have been reduced by even more than that fraction.
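To put a rough number on that (a quick back-of-the-envelope calculation, not crash-test data, and the vehicle mass is an assumption): because kinetic energy scales with the square of speed, shedding even 20 mph before impact removes nearly half the crash energy.

    def kinetic_energy_joules(mass_kg, speed_mph):
        mps = speed_mph * 0.44704        # mph -> m/s
        return 0.5 * mass_kg * mps ** 2

    mass = 2500                               # roughly a Model X, in kg (assumed)
    before = kinetic_energy_joules(mass, 70)
    after = kinetic_energy_joules(mass, 50)   # if braking shed 20 mph
    print(f"energy removed: {1 - after / before:.0%}")   # ~49%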

Edit: So it turns out the Model X does have automatic emergency braking, but the preliminary report says that the Tesla actually increased its speed in the 3 seconds leading up to the crash. Sounds like a major software bug to me.

Here's a review of the Model S AES compared to other cars; in particular, the Tesla AES trigger can't handle it when a lead car moves out of the way, which is what the NTSB report says happened: https://www.caranddriver.com/features/a24511826/safety-featu...

Here's the NTSB preliminary report: https://www.ntsb.gov/investigations/AccidentReports/Reports/...


His car increased speed because the adaptive cruise was set to a higher speed than his car was going, and for 4 seconds before impact there were no cars in front of him.

The adaptive cruise increased speed to his set point when his car left the flow of traffic. The AEB will not trigger on a stationary object at highway speeds. I do not believe there exists any AEB that will emergency brake at highway speed, except perhaps for a pedestrian-shaped object.

He hit a crash attenuator which had not been reset (so basically direct impact with concrete) after it had been hit by another car 11 days prior.

Resetting the attenuators is a simple task, but apparently this particular one is hit fairly frequently.


I don't understand that the attenuator wasn't reset. In my country the lane will stay closed until all safety feature are operational again, including replacing the guardrail.


Yea so in America, transportation departments are either underfunded, overworked and other times delayed due to contracting laws. So instead of some guy coming out to replace the attenuator, they need to by law put out a contract to hire some guy and go through a months long process to approve it. If they don't, some random guy might sue them for not putting it out to bid, etc, etc. It's just never ending stupid.

Where I live, there's a guardrail that is so heavily bent out of shape from constant crashes that it needs some serious replacement because it'll eventually give and allow cars to fly into the opposite lanes. The problem is there was already a contract to replace 10 miles of guardrail issued a year ago and the contractor has not reached this part of the road yet. The DoT can't just go replace it themselves early.


>crash attenuator

>this particular one is hit fairly frequently.

Interesting way to build your road infrastructure


One way or another highways have portions where there has to be a solid, flat, vertical surface on the edges of barriers, exit ramps, etc. It's similar in many ways to the hairy ball theorem.

https://en.wikipedia.org/wiki/Hairy_ball_theorem

You can't just round over the edge of the barrier, because that still leaves a portion where a car could collide perpendicular to it. You can't just slope it down to the ground, unless you want an impromptu Dukes of Hazzard reenactment. You can't have just an unguarded edge because that would just be a death sentence if you hit it at any appreciable speed. The solution is to place those crash attenuators so that it slows the vehicle down over a small distance rather than just over the length of the crush portion of the car.

https://megarentalsinc.com/sites/default/files/feature-image...


Not a big fan of bay area roads, but this particular junction sees crashes because of a high frequency of law breakers. There is only so much you can do against a large number of people that are super-happy to break the law because the fine/insurance rate is probably not a big deal.


With the number of people who drive drunk or sleepy or while texting, crashes are inevitable.

Unfortunately tesla autopilot drives like an impaired human.


> If the Tesla had a proximity sensor that slammed on the breaks in the last 2-3 meters, would he have survived? I ask because it's a common luxury car feature and would be easy for Tesla to implement.

My car (a Nissan) has this feature, and it comes with a warning that it works best with obstacles that are moving. (ie. other cars)

Although it does work with stationary objects, it warns that if you're moving over ~35mph, it may not detect stationary obstacles, as it has trouble differentiating between them and objects on the side of the road.


AAA tests found 40% of AEB systems failed to stop a collision at 30mph and 60% at 45mph. Granted they're not causing them but those numbers still aren't great and get worse the faster you're traveling.

https://www.computerworld.com/article/3111407/aaa-automatic-...


At some speed stopping the collision becomes difficult/impossible.

But in the meantime I'd vastly prefer colliding after significant breaking than not.


Absolutely, and most AEB systems can't or don't do that. Everyone wants a safe AEB that functions correctly in all situations but we're just not there yet.

https://arstechnica.com/cars/2018/06/why-emergency-braking-s...

I think the real question is did Tesla mislead customers into believing their system was more capable than it actually was, thus resulting in them relying on a system when they shouldn't have?


And colliding after significant braking would be even better.


Most AEBs do not brake at high speed because reacting to a false positive can actually cause accidents. This makes sense, as the time needed to respond to a situation demands detecting objects at greater range, which requires more than just higher-resolution sensors, because as you look further ahead of the vehicle you have to contextualize what you're seeing.

Tesla's system is attempting to do that contextualization, whereas most other manufacturers' systems are simple feedback loops.
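For a sense of what those simple feedback loops look like, here is a toy time-to-collision trigger; every threshold and the stationary-target filter are invented for illustration, but it shows why such systems refuse to brake for stationary returns at highway speed.

    def should_emergency_brake(ego_speed_mps, range_m, closing_speed_mps,
                               ttc_threshold_s=1.5, ignore_stationary_above_mps=20.0):
        # Toy time-to-collision AEB trigger; all numbers are invented.
        target_ground_speed = ego_speed_mps - closing_speed_mps
        # Radar-based systems commonly discard returns that are not moving
        # relative to the ground once the car is going fast, because overhead
        # signs, barriers and roadside clutter produce the same kind of return.
        if abs(target_ground_speed) < 0.5 and ego_speed_mps > ignore_stationary_above_mps:
            return False
        if closing_speed_mps <= 0:
            return False
        return (range_m / closing_speed_mps) < ttc_threshold_s

    # Stationary barrier 40 m ahead at ~70 mph (31 m/s): filtered, no braking
    print(should_emergency_brake(31.0, 40.0, 31.0))    # False
    # Same barrier at ~30 mph (13 m/s) and 15 m range: triggers, TTC ~1.2 s
    print(should_emergency_brake(13.0, 15.0, 13.0))    # True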


> If the Tesla had a proximity sensor that slammed on the brakes in the last 2-3 meters, would he have survived? I ask because it's a common luxury car feature and would be easy for Tesla to implement. IIRC this crash and the semitruck crash had no evidence of the autopilot hitting the brakes according to the NTSB.

It's required by law for all new cars from 2020 I believe


> it's a common luxury car feature

Almost all luxury cars implement this feature the same way that Tesla does, with radar that ignores stationary objects. I'm not aware of another company that implements the steer towards obstacles part of the problem, though.


This is simply not correct. In fact, the link in the comment you just responded to contradicts you: https://www.caranddriver.com/features/a24511826/safety-featu...

It's actually pretty frustrating to have written a little summary of what likely happened WITH SOURCES, that a lead car changed lanes and confused the AES system, only to have people ignore it.

The only thing worse than a pedant is an incorrect pedant.


None of the tests in that reference were done at highway speeds. AIUI, handling stationary objects with radar is easier at slower speeds.

Super interesting to see that Subaru is doing emergency braking with cameras, though. Seems I'm a couple of years out of the loop.


Another interesting, and very unexpected, luxury feature is Pre-Safe Sound by Mercedes. In case of an imminent accident, a loud noise is played through the speakers of the car to cause a reaction in the ear which prepares and thereby protects it from the noise of an accident. It is a detail feature but an interesting one.


Gosh, that's one I wouldn't have thought of. As an occasional tinnitus sufferer it seems a good idea. I wonder which bit of the ear moves to protect it.


No auto-cruise-control system will suddenly brake if it comes across a stationary object at highway speeds. The risk that a false positive ends up causing a pile-up on the freeway is too great.


What? I have three cars with AEB/adaptive cruise control and I've experienced two of them doing exactly what you describe and saved me a bunch of damage.


Here's a TV show trying them at 55mph and 60mph, though with a slow moving object. The Volvo still works quite well at 60. https://www.youtube.com/watch?v=PzHM6PVTjXo&feature=youtu.be...


they do have that, but as you might imagine, the emergency brakes are only there to mitigate damage in case of a crash, not to prevent them


Yes but in this case it didn't trigger.


These AEB features almost universally do not trigger on stationary objects when the car is moving more than 20 or 30mph.


https://youtu.be/kuxundRB1zM

Good AEB systems trigger at speed. They might not avoid the crash entirely, but they try to mitigate it.

Tesla is quite behind the state of the art here.


Teslas and other manufacturers will trigger at speed against a stationary vehicle detected by the adaptive cruise control system.

Teslas are beginning to trigger more at speed on perceived non-vehicle obstacles (added in a recent OTA update) but that is not without false positives which some have deemed the Tesla Brake Check.

I’ve experienced the errant braking, typically when a lane is partially obstructed but you are moving to avoid the obstruction; that is when it will trigger. I’m not a fan of this behavior. You can override it with a quick tap on the accelerator, but it jostles you.


Here is the Tesla's Model S video as well: https://www.youtube.com/watch?v=_5aFZJxuJGQ


Yeah, I don't understand why there are so many posts in this thread claiming AEB can't do this kind of thing when there's so much video evidence going around.

But here's the point: a vision system detects objects it knows how to detect, while lidar systems detect obstacles. As such, it doesn't surprise me a bit that a Tesla trained to recognize cars from one aspect will plow into a truck rotated 90 degrees, as when, for example, it crosses an intersection.


If you bothered to click on the Car And Driver link, you would have seen that these systems (particularly Tesla's system) work at least up to 50mph with stationary objects.


In the video, that’s 50kph (30 mph). The most advanced Volvo system decelerated but still crashes pretty hard at 80kph (50mph) when approaching a stationary vehicle decoy.

Note well, if the target was an inflatable rock or an inflatable jersey barrier parallel with the direction of travel, the results would be much, much worse.

In those other cars the ACC is picking up a car-like object on the sensors. A non-car-like object is designed to be ignored.


Again, you have failed to understand what I said. The CAR AND DRIVER link is this one: https://www.caranddriver.com/features/a24511826/safety-featu...

And it shows a Tesla's performance among other cars, not a Volvo's.

And anyways, that Volvo video does show AES invoking at 80km/h and 130 km/h, reducing the speed of the collision but not avoiding it.

Maybe you're right about the car vs rock distinction but so far your track record is pretty spotty. Provide a source for your claims.


I read the C&D article when it came out last year. Pretty sure it was linked from HN actually. They ran some tests with a few cars driving into inflatable vehicle decoys. As I recall they found the systems would fail as often as not.

I was speaking in terms of this crash, which was not a primary collision with another vehicle, when I said that AEB will not activate for stationary objects at speed.

The ACC will activate for a stationary vehicle at speed, but will not save you entirely at even modest speed. At 70mph even ACC can drive right into a stationary vehicle.

These systems often work by sharing the ACC sensors and lock onto and track vehicles. This is why C&D discusses specifically the case of a lead vehicle swerving to reveal a stopped vehicle, and how some systems fail to acquire the stopped vehicle in time to provide any brake input at all;

> Volvo's owner's manuals outline a target-switching problem for adaptive cruise control (ACC), the convenience feature that relies on the same sensors as AEB.

(C&D did not test drive a Volvo for that test, but they did review the Volvo system and comment on it)

It would not be fair to take the C&D testing with inflatable vehicle decoys, and use that to claim Tesla is behind by not activating AEB into a narrow profile partial forward obstruction (parallel jersey barrier).

Partial forward obstruction of a non-car object is the absolute worst case for false positives, because this roughly translates into “any signal at all on the forward sensor”.

I’ll try to find sample images of what the sensor data looks like under normal operating conditions and when looking at narrow profile forward obstructions.

This is a very tricky edge case. A bird can fly in front of you. Water can splash up from the car in front of you. An empty trash bag floating up in the air after being run over by a car in front of you.

I did not mean to imply ACC or even AEB will not slow down a car heading for a rear end collision with another stopped car. I was speaking about this event specifically. I apologize for not being more clear in my 1 sentence original reply.


A more nuanced dissection of whose fault it was or could be[0].

"Huang was apparently fooled many times by Autopilot. In fact, he reportedly experienced the exact same system failure that led to his fatal crash at the same location on at least seven occasions." ...........

"Huang knew that Autopilot could not be relied on in the circumstances where he was commuting along the 101 freeway in Mountainview, California. Yet he persisted in both using the system and ignoring the alerts that the system apparently gave him to put his hands on the wheel and take full control." ..........

"Elon Musk and Tesla should be held to account for the way they have rolled out and promoted Autopilot. But users like Walter Huang are probably not the poster children for this accountability."

[0]https://www.forbes.com/sites/samabuelsamid/2019/05/01/the-pr...


Or maybe, as I mentioned in a previous comment [1]:

    - Mr. Huang knew about the bad spot
    - A Tesla OTA solved the issue for that particular spot
    - He got used to the car behaving properly in that location
    - A Tesla OTA introduces a regression for that particular spot
    - Mr. Huang dies.
[1] https://news.ycombinator.com/item?id=17141784


Yep, or it didn't happen all the time and he perceived it was fixed after reporting it. We do not have all the facts.


> Yet he persisted in both using the system and ignoring the alerts that the system apparently gave him to put his hands on the wheel and take full control.

"Hands not detected" does NOT mean "hands not on wheel".

Tesla relies on driver input (torque) to "detect" hands, which means you could have both hands on the wheel but still get the warning to place your hands on the wheel.

It seems like most people are assuming it's some kind of magical capacitive system that can detect the slightest hint of contact, which couldn't be further from the truth.


I agree, but I'm also not sure that it is a particularly important distinction. He was repeatedly warned that attention is required with this system and yet clearly was not paying attention at the time of the crash.


> clearly was not paying attention at the time of the crash.

This is the fundamental point that I'm disagreeing with though. As far as I know, Tesla has a very crude system for determining driver attentiveness. They rely ONLY on steering wheel torque, which is an extremely unreliable indicator. They don't do any sort of head or eye tracking. So when Tesla claims a driver wasn't paying attention, all it really means is "we're not sure if their hands were on the wheel".

My car uses the same method, and I constantly get warnings to place my hands on the wheel even when they're both already on the wheel. It's possible to have your hands on the wheel without applying any torque, and that's where this system fails.
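To illustrate just how crude torque-based detection is, here is a toy version of the check (the threshold and sampling window are invented): a relaxed, steady grip that doesn't twist the wheel looks exactly like no hands at all.

    def hands_detected(torque_samples_nm, threshold_nm=0.3):
        # "Hands detected" only if the driver applied at least a small twisting
        # force to the wheel at some point during the sampling window.
        return any(abs(t) > threshold_nm for t in torque_samples_nm)

    print(hands_detected([0.05, 0.10, 0.02, 0.08]))   # False -> nag to "place hands on wheel"
    print(hands_detected([0.05, 0.60, 0.02]))         # True  -> a light tug is enough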


When a driver unintentionally crashes into a solid object for no particular reason, no one needs vast knowledge to realize that the driver wasn't paying attention at that time.


I think it's deeply irresponsible to create a device which can drive in 90 or 95% of scenarios. It's just a way to fool humans into killing themselves.


If the system is even a few percent better than humans then it would save thousands of lives when widely deployed.

Musk has made this argument many times. Basically don't let perfect be the enemy of better.

But if the vendor is legally liable in each case then this software is untenable.


Tesla (and/or Musk) keep saying it's safe to let the car drive, and then when you let the car drive and it kills you, they say you're holding it wrong.

If you're actually trying to increase safety, the right things to build are computers to supervise the human driver, and not systems that require a human to supervise the computer driver.

If the human drifts out of the lane, nudge the car back in and beep, but don't prevent the human from intentionally going over the lines.

If the human appears to be driving into an obstacle, beep and apply the brakes, but allow the human to override.

If the human isn't paying attention, beep, then turn on the hazard lights and slow down / try to find a safe place to pull over.


A computer supervising the driver is what got Boeing into trouble with MCAS, resulting in the two plane crashes.

Tesla also doesn't say that it's safe to let the car drive. There are ample warnings and messages and feedback that say that you have to keep your eyes on the road and hands on the wheel at all times.

There are likely more non-AP Tesla/non-Tesla auto accidents and fatalities than there have been AP related ones. I feel safer riding in a Tesla with AP engaged and the driver having his eyes on the road and hands on the wheel.


What got Boeing into problems is management overpromising something engineering couldn't deliver, just like Tesla is doing with "autopilot."

Boeing promised a brand-new and improved jet that was cheaper because it packed in a bunch of improvements without having to go through certification on a new airframe or pilot retraining. Engineering tried their best, but whether through negligence, "just following orders," or bureaucracy, they failed catastrophically, as any engineer will tell you would happen with a sensor that is not redundant (in a 3+ vote configuration) yet can override pilot control inputs based on faulty readings.


> Tesla also doesn't say that it's safe to let the car drive. There are ample warnings and messages and feedback that say that you have to keep your eyes on the road and hands on the wheel at all times.

Tesla had to be "encouraged" to make "at all times" mean something stricter than "every 15 minutes," which is all it amounted to just a couple of years ago: touching the steering wheel four times an hour.


They don’t say let the car drive. Not yet.


Humans can be 30+% better than "humans" by 1. not using a phone and 2. not being drunk or stoned.

A few percent better than "humans" is quite bad relative to a semi-responsible human driver.


>A few percent better than “humans” is quite bad

Yes it should be (imho) at least an order of magnitude better than humans before we consider it OK to not pay attention.

Not a few percentage points better as you say.

However, to overcome media drama, it probably will need to be not just one, but more like three or four orders of magnitude safer than humans, because of the “oh look, a Tesla had an accident” effect.


Getting more than 50 percent better than humans gets hard, since frequently the accidents are the fault of the other party.


Getting much better than the average human is achieved by a lot of humans. Deaths per mile are about 30x lower for bus/coach compared to cars, about 4x lower for vans and about 50x higher for motorcycles. There's a lot of variation. https://www.statista.com/statistics/300601/average-number-of...

If you assume vans have about the same crash protection as cars and some van drivers are idiots then I'd guess you could get 10x better than the average human just through driving skill/technique.


> Deaths per mile are about 30x lower for bus/coach compared to cars

Per passenger mile. So if buses carry 30 people on average, the per-vehicle accident rate could be the same; typically only one passenger dies.

That also makes sense with the vehicle characteristics. It's much harder to die by being ejected from the vehicle or by intrusions into the passenger compartment in a bus. Also, heavier, longer bus = more gentle deceleration.

Your statement might still be true (and I suspect it broadly is), but it just isn't backed up by the cited data.
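To make the passenger-mile distinction concrete, here is a hedged back-of-the-envelope sketch in Python. The 30x ratio is taken from the statistic cited above; the baseline rate and the occupancy figures are assumptions for illustration only.

    # Converting per-passenger-mile death rates into per-vehicle-mile rates.
    car_deaths_per_bn_passenger_miles = 3.0                                      # arbitrary illustrative baseline
    bus_deaths_per_bn_passenger_miles = car_deaths_per_bn_passenger_miles / 30   # the cited 30x gap

    avg_car_occupancy = 1.5   # assumed
    avg_bus_occupancy = 30    # assumed

    # deaths per vehicle mile = deaths per passenger mile * average occupancy
    car_deaths_per_bn_vehicle_miles = car_deaths_per_bn_passenger_miles * avg_car_occupancy  # 4.5
    bus_deaths_per_bn_vehicle_miles = bus_deaths_per_bn_passenger_miles * avg_bus_occupancy  # 3.0

    # Per vehicle mile, the gap shrinks from 30x to about 1.5x under these assumptions,
    # which is the parent's point: the per-passenger figure alone doesn't show that
    # buses crash less often, only that fewer occupants die per mile travelled.
    print(car_deaths_per_bn_vehicle_miles / bus_deaths_per_bn_vehicle_miles)  # 1.5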


Not sure I follow. I would think the rating of how much better they are would be applied only to the non-culpable driver? But you are saying the metric should taint that driver with the fact that they didn't manage to avoid an accident caused by someone else? I sort of get it but not sure if it makes sense... can you say more?

Or maybe I'm overthinking it and you're just talking about how hard it is to get the statistics (number of accidents) down overall?


You're overthinking. Fault is a legal construct to help decide liability. What we really care about is if we're dead or not. Insurance is there to take care of the rest.

Drivers under the influence are less likely to die in a given accident, because most drugs either relax the muscles or delay reflexes enough to keep the body relaxed during the impact, preventing a lot of potential injury, so it's an important nuance to consider. If the people most likely to drive drunk are the least likely to adopt self-driving cars, there is an absolute floor on how much safer those cars can be.


We have absolutely no evidence that AP is safer than human drivers because Tesla (tellingly) refuses to release information comparing AP miles to comparable miles driven by other luxury vehicles. Their refusal to release that information combined with sky-high insurance rates for Teslas indicates that they are (perhaps significantly) less safe than other modern luxury vehicles driven by humans.


The legal problem is: which humans? It's one thing to say that on average, the system will save lives compared to an average human. But perhaps the human in the car is above average. Is the car safer for them? Perhaps not. How can they discern the relative safety?

The easiest way to deal with the issue is to be safer than, say, 99% of drivers. I don't think Tesla can make that claim at present.


A few percent better than humans at what?

If it's better at avoiding harm, across the board, in all situations where humans currently operate vehicles, then yes, that would be an improvement. But Tesla is nowhere near being able to do that.

Tesla seems to be trying to claim that it's sufficient for them to be a few percent better at, for example, keeping the car in its lane, or maintaining speed, or spotting obstacles. That's not an improvement. Driving is not a contest to see who can identify the most obstacles or maintain speed or lane alignment the best. People want to get where they're going safely.


> If the system is even a few percent better than humans

I don't think this can be true for a 90 or 95% solution. I think too many people would think the system can drive for them, because they're not Tesla enthusiasts and they're not reading all the caveats about what the aids can and can't do, and they'll gradually start to rely on autopilot until the day it lets them down, possibly with a very negative outcome.


A minor quibble: it's irresponsible to bring it to market or otherwise operate it without additional safety measures. Creating it isn't directly a problem.


Maybe they could just update the safety messages. "Remember to keep your hands on the wheel and your eyes on the road" is too bland.

How about, "I'm doing my best, but I still might swerve into a concrete barrier any second..."?

Or, "I wonder what would happen if a deer jumped into the road right now"?

Maybe "Don't tell me the odds"? "Was that your left or my left?" I dunno, help me out here.


While I'm not quite sure about Tesla's responsibility, I do think CA DOT has its part in this tragic accident. Had the attenuator been replaced right after the previous accident, it could have saved the driver's life.

Usually I don't complain much about the gov, but just look at the construction mess they've created on 101, it's been like that for more than 4 years!


IIRC, the barrier was damaged in an accident that happened 11 days prior [0]. I remember driving past the area of the crash a Sunday or two after the fatal Tesla wreck (I vaguely remember the time period because I was picking up a visiting friend in early April). Traffic had slowed to a crawl for miles, and it appeared that a crew was doing work on that barrier; I assume they were fixing or replacing it.

My point is, it seems premature to blame CA DOT for its role. Do you have evidence that the delay in replacing that particular barrier was slower than the normal time window? Because it seems likely that fixing that barrier is a construction job that can only be done on a weekend, because it's impractical to jam the 101 during a weekday commute.

[0] https://abc7news.com/automotive/exclusive-i-team-investigate...


There should have been no delay. The attenuator is designed as a critical part of highway safety infrastructure, as the death in this situation highlights, and it should have been immediately reconstructed. Hopefully this lawsuit will lead to that protocol being implemented and adhered to.


Is there a law or regulation that mandates this? The article says Caltrans claims a storm caused the delay:

> "Once our Maintenance team has been notified, the Department's goal is to repair or replace damaged guardrail or crash attenuators within seven days or five business days, depending on weather. These are guidelines that our Maintenance staff follow. However, as in this case, storms can delay the fix."

The delay may or may not have been justified (I guess we'll find out if the family decides to sue the state). But an actual regulation is important, because not only would it provide explicit criteria to show Caltrans is in the wrong, it would've (or at least should've) meant funding and procedures are in place. There's always a tradeoff between cost and safety. If the state of California doesn't provide the funding and staffing it would require to replace this barrier with "no delay", then it seems difficult to fault Caltrans for negligence.


I'm not sure of the specifics - I would argue it should be mandated; at minimum they could have put barrels full of sand or water - whatever they use - as a stopgap.


> Usually I don't complain much about the gov, but just look at the construction mess they've created on 101, it's been like that for more than 4 years!

If they finish construction then they stop getting paid.


So don't blame the company that's claiming their car can "autopilot" under all conditions.

Instead blame the road. Interesting take.


This is a bizarrely inaccurate mischaracterization of what Tesla has ever said about autopilot.


Yes, never mind the video emphasizing that "the driver is doing nothing," which has been on their tesla.com/autopilot page since 2016, or the claim that all vehicles come with hardware for full autonomy.

There's nothing bizarre about threeseed's characterization.

https://www.tesla.com/autopilot


That is a demo of full self driving, which is a perpetually coming soon feature. You even need to pay for it separately.

The actual section on what autopilot can do and what it requires of the driver is pretty clear:

> Autopilot advanced safety and convenience features are designed to assist you with the most burdensome parts of driving. Autopilot introduces new features and improves existing functionality to make your Tesla safer and more capable over time.

> Your Tesla will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near, self-park when near a parking spot and be summoned to and from your garage.

> Current Autopilot features require active driver supervision and do not make the vehicle autonomous.

As a Model 3 owner, it does do those things quite accurately. Certainly more so than the Mobileye-based system on our Volvo, which can't lane-keep and stops abruptly.

I find it convenient and more relaxing when driving, but I heed their warning (which I also agreed to when enabling autopilot in car) and pay attention, being prepared to take over at all times.


Except the company never claimed that.


Tesla is playing with lives and should stop offering Autopilot.

Anything less than L4 autonomous driving is completely reckless. Calling it "Autopilot" when it's an L2 system should be criminal.

Yes, it's an extreme stance, but we're going to have a really hard time getting to true autonomous driving when companies are playing around with people's lives. You can say "well, they know the risks," but it's not a closed system: there are others on the road who will also die because of Autopilot mistakes.


We are going to have a really hard time getting to true autonomous driving if they don’t do this.

That’s how the system will learn how to deal with real life scenarios.


You're okay training an incomplete self driving machine learning model on the general population?


I actually help it every day during my commute. They are not stupid, and this isn’t as unsafe as people who have no actual experience with it claim it to be.


People have died. So it seems pretty unsafe to me.


People have died in cars without autopilot. So what do you suggest we do with regular cars?


> Anything less than L4 autonomous driving is completely reckless.

"L4 autonomous driving is completely wreck-less"?


I hear people say “self driving cars don’t have to be perfect, they just need to be safer than humans.” Here is a good example of why that might be harder to achieve than expected. Apparently the car had gotten confused at that exact location repeatedly before. That’s what self driving cars are going to do—if there is something “weird” it’s likely that every self driving car (at least, every one from the same manufacturer) that encounters the weird scenario will run into a problem. That could result in catastrophic failure modes at scale.

What would have happened if every car on that road had been an identical Tesla? How many crashes would have happened? How long would it have taken Tesla to issue a fix? How many miles of perfect driving would be required to make up for the cluster of crash events due to that one anomaly?


I have a "lane sensing" Honda that errors out on the same highway exit during my commute home.

Every time I take this same exit, my car warns me that lane sensing had an error, that collision detection has encountered an error, and that a few other systems have also stopped working.


To play devil's advocate:

Assuming self-driving cars are connected to each other, either directly or via the internet, if one car crashes, or even just makes a minor mistake, the knowledge of how to avoid that can be transmitted to every other car on the planet.


Seems unlikely that the knowledge of how to avoid making a mistake could be compiled without human analysis. The technology to do that autonomously doesn’t exist. It would almost require artificial general intelligence, and if we had that self driving would be trivial.


The "human analysis" could be completed within 30 seconds and beamed out to all cars in the area.

No reason Tesla can't have operatives on duty 24/7 to assist and tweak/train the fleet of cars.


Automatic feature selection is the area of research you’re looking for.


Just as good information can be propagated, so can bad. So this had better be a robust algorithm and not one which can either be induced into something or simply learn “wrong”.


This is very close to what is actually happening, though not in real time.


Based on what evidence?


Based on what Tesla presented in their autonomy day event (https://www.youtube.com/watch?v=Ucp0TTmvqOE) but it's also what they've been saying publicly for a while now... in a nutshell, they ask the cars to send selected summary data back when interventions happen and in certain other cases.


Ah thank you. I had only ever heard that it was a beta feature, something they planned to implement in the future.


Tesla demonstrated a system doing exactly this at their autonomy event. The video is worth watching whatever you think of Tesla.


This feels more dangerous, not less.


You get similar issues with humans though. The crossroads outside the house I lived in had restricted visibility, and some drivers would notice it late and crash, about one every couple of weeks for years and years. Eventually they replaced it with mini roundabouts, which fixed the problem.

Also, you get those epic motorway pileups in France when it gets foggy and 150+ cars pile in. Just to say we have odd failure modes too.


So, in a traditional embedded system, discovery can reveal the source code to help determine what went wrong (e.g. divide by zero error in x-ray machine). What are the lawyers going to get when they start looking into the Tesla's autopilot software?


Likely they will get a gigantic sea of trained deep learning weights that no one understands.


That really isn't going to play well with a judge or jury. I can see the poor programmer up on the stand not being able to say what it meant or how the car made its decisions.


That is, unfortunately, the problem with neural nets in general.

They are amazing logical constructs, but there is a fundamental opaqueness to them: absent enough neural network mass to convincingly simulate a human, we can't apply the same methods of formal verification and behavioral inference that we could to other, more specific machine implementations.

No one can explain just why certain weights work at certain situations and not others. They just do.

Whether someone in the justice system is comfortable effectively legislating from the bench, by creating precedent that holds companies liable for NN-based behavior there is no hard-and-fast way to proof-test against in the first place, is another question, however.


It’s almost as though you have to treat them like human drivers. We cannot formally verify 16 year olds, either, or sufficiently introspect accurate reasons for their behavior. Instead, we require them to pass a test, we apply actuarial cost models, etc.


16-year-olds can talk and try to explain themselves. Humans try very hard to make themselves understand. Neural nets can't (at least, not yet).


You can guess what the Tesla NN would say if they built an explanatory speech system in. Something like "Oops, sorry - I thought that white line was a lane and didn't recognise the barrier." Not all that helpful here, really.


There's no fundamental law that requires neural networks to be hard to understand. In fact, debugging and interpreting neural networks is a very active area of research and is getting easier every day. In twenty years I would not be surprised if the tools are so good, due to economic forces, that it's easier to understand why a neural network made a decision than it is to understand why some complex hand-written conditionals program reached a certain result.
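As a small illustration of what "interpreting" can mean in practice, here is a toy input-gradient (saliency) check on a hand-rolled two-layer network in plain numpy. It is only a sketch under made-up weights: it does not explain the network, but it does show which inputs a particular output was most sensitive to, which is one of the basic building blocks of current interpretability tooling.

    import numpy as np

    # Toy 2-layer network: y = w2 . tanh(W1 x + b1) + b2, with random weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    w2, b2 = rng.normal(size=8), 0.0

    def forward(x):
        h = np.tanh(W1 @ x + b1)
        return w2 @ h + b2, h

    def input_gradient(x):
        # dy/dx = W1^T ((1 - h^2) * w2), by the chain rule through tanh.
        _, h = forward(x)
        return W1.T @ ((1.0 - h ** 2) * w2)

    x = np.array([0.2, -1.3, 0.7, 0.05])
    grad = input_gradient(x)
    # The inputs with the largest |gradient| are the ones this particular
    # output was most sensitive to; a crude but common saliency signal.
    print(sorted(enumerate(np.abs(grad)), key=lambda p: -p[1]))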


That still doesn't solve the problem.

Nodes X, Y, and Z reaching action potential tells us nada useful about the NN.

It's not a question of not being able to run the code in debug; we can absolutely do that.

It's a question of the outcome only being dependent on a seemingly random set of numbers which no one can really reason from. That's the issue. With teenagers, we at least have the ability to structure incentives such that they continually improve their driving behavior, and most importantly, they are actually capable of learning after you cut them loose with the car. No hardware upgrades required.

The car on the other hand? Not so much.

W.r.t another poster's suggestion of the employment of actuarial models: I consider the employment of insurance to be a less than satisfying marshaling of our economic time, and a backdoor social control mechanism that still just makes me fidget. But that's just me.


> It's not a question of not being able to run the code in debug; we can absolutely do that.

Don't think of debugging/stepping through neural network code using the same tools as stepping through procedural code. In the future you'll have better visual tools that show you a lot of information about the state of the network at once, rather than just the contents of a few registers that you see with a modern debugger.


> No one can explain just why certain weights work at certain situations and not others. They just do.

I think the problem will come when some lawyer latches on to that as a way of saying the auto maker is putting a product it cannot prove or explain on the road, but still claiming it’s safe.


Liability is about incentives. Strict liability moves the externalities of unsafe cars onto the manufacturer.


To an untrained juror, even regular computer code is a black box whose behaviour is entirely unpredictable.

The only difference between a neural net and regular logic is that the latter has someone who claims to understand it.


This person died because of Tesla's cocky marketing, which leads people to believe that "Auto Pilot" is just that, and which does very little to discourage this interpretation even though it is life-threatening. In this context, their blaming it on the accident victim is a 100% asshole move.

I am life-long Musk's fan, but Tesla trying their hardest to weasel out of any responsibility here is incredibly damaging to their reputation. The future will come, trying to accelerate its arrival at all costs is reckless.


I generally agree with you on the Tesla marketing issues, but it's not entirely as black and white: The driver reported issues at that very location multiple times, and complained about AP not working reliably. The driver definitely was aware of flaws and issues with AP, worse yet, was aware of issues with AP at the location of the crash. Tesla's marketing may have contributed and may generally give a false sense of functionality/security, but Huang already knew it wasn't true. Yet still, at the location he knew AP had problems, he failed to pay attention to the road.


From the article:

> According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location.

How can one possibly claim cocky marketing is the problem in this case?


Autopilot is just autopilot. There have been lots of plane crashes with autopilot involved.

https://www.youtube.com/watch?v=FLrIXptqZxw


Tesla made some very big claims in their recent autonomy day event- basically, they claimed that they are years ahead of competitors, while operating on "hard" mode (no lidar). And yet, a number of participants in the short autonomy day demo rides claimed that the support driver had to disengage the autopilot.

Has Tesla provided any evidence that they are in fact far beyond competitors?


> Has Tesla provided any evidence that they are in fact far beyond competitors?

Evidence showing they are ahead in terms of customers? For now, maybe. In terms of safety? No.

According to some fans, they sell a product you control. You're allowed to use it anywhere, therefore, they're ahead.

According to Tesla, they designed a chip that's best-in-class and they share a small amount of safety data quarterly [1]. Musk will not share any more data [2] because that would allow people to "turn a positive into a negative".

[1] https://www.tesla.com/VehicleSafetyReport

[2] https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s


Thanks for the links. Their "Vehicle Safety Report" is literally 3 sentences! I expected such a report to have much more detailed information in it.


I'm not even sure what evidence could exist to show Tesla is years ahead. Who knows how much Waymo, Cruise et al will advance next year. You can't fully see the future here.


English is not my first language. I am confused by the blatant use of the term "Auto Pilot". Does it not suggest more automation than is currently feasible? Why not intelligent assist? Why is tesla/musk getting a pass here?


I think a lot of people want to be pedantic, and point out that an aeroplane's autopilot can't do all of it itself, and you therefore still need pilots to manage an airliner.

However, that's because a lot of people on HN understand how a plane's autopilot works; most people do not. They assume it is what they imagine a plane's autopilot to be, since a plane could technically take off, fly, and land itself by wire, but only under the best of conditions.

At least, that's my theory.


No, you are exactly right. “On autopilot” is English vernacular for “without thinking or paying attention”. It completely sends the wrong message, and the lawyers will have a field day with this in court.


Autopilot in an airplane is similar. It can do a bit and make the trip easier, but it's not going to dodge a rock that happens to fall from the sky. It's a pretty simple system that really just relies on instrumentation. It's not using any fancy algorithms to make bold maneuvers. I might wager that Tesla's is doing more.


Then maybe using the Autopilot or a Tesla vehicle should be subject to rigorous training and repetitive simulator sessions every 6 months to ensure that the drivers are using it correctly and are aware of its current capacity.

If it’s like an airplane, should be regulated like an airplane.


Yeah but most people are not aware of that. To them it’s a button you push that flies the plane. This is why the marketing around this term is so deceptive.


Do they not have Rumble strips in the US?

https://en.wikipedia.org/wiki/Rumble_strip

I have been in situations in my youth where I was commuting or travelling tired, and nothing alerts you like the loud rumbling sound. I'm sure the autopilot could detect it and bring the car to a stop if the driver hasn't been alerted by the in-car warning system or the rumble paint...


We do have them, though they're not required by law and are not always present.


I know the argument is always that autopilot is not intended by Tesla to be abused (never mind the CEO on national television abusing it).

However, there's what you tell humans, and then there's pragmatism regarding human nature. We should consider the latter.

It's possible to be both academically correct on the warnings/instructions given and also practically wrong on human psychology.


Your reasoning is that people are stupid and therefore you should manipulate them 'for their own good' because they have no self-agency and self-responsibility.

Needless to say, people who think like this make terrible dictators when given the chance.


You could easily apply this logic to say "we should revoke laws that make it illegal to drive without a seat belt", or to dozens of other examples.

Humans are notorious for thinking "I'm in a real hurry, I'll leave the safety system off because I likely won't get hurt". We know that laws, rules and protocols with enforcement and negative consequences are required in order to make humans act safely at scale.


It's a good thing I did not propose that I become Supreme Dictator Strawman.


It seems like a (relatively) trivial addition to the Autopilot system would be to allow the driver to tell the car when it has made a mistake, and if Autopilot(s) consistently make mistakes in the same area, then Autopilot should give a specific warning, or force a disengagement, when it detects that it is approaching that area.

I can't imagine that this isn't already a thing or that someone at Tesla hasn't come up with this... am I missing something here?
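A minimal sketch of what such a geofenced "known trouble spot" check could look like. Every name, threshold, and coordinate below is hypothetical; nothing here reflects how Tesla actually implements (or doesn't implement) this.

    import math

    # Hypothetical driver-reported trouble spots: (lat, lon, number of reports).
    TROUBLE_SPOTS = [
        (37.4116, -122.0759, 12),   # example coordinates, not the actual crash site
    ]
    REPORT_THRESHOLD = 5    # assumed: minimum reports before the spot is acted on
    WARN_RADIUS_M = 300     # assumed: start warning this far away

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two lat/lon points.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def check_trouble_spot(lat, lon):
        # Return 'warn' when approaching a frequently reported spot; a real system
        # might instead force a handover well before reaching it.
        for s_lat, s_lon, reports in TROUBLE_SPOTS:
            if reports >= REPORT_THRESHOLD and haversine_m(lat, lon, s_lat, s_lon) < WARN_RADIUS_M:
                return "warn"
        return None

    print(check_trouble_spot(37.4120, -122.0755))  # -> 'warn'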


Tesla engineers talked about this in great depth during the recent streaming event. Manual disengagement of Autopilot is the mechanism used to trigger an upload of the sensor data. It is automatically uploaded to Tesla servers and used to train the driving model.


A Twitter user who rooted his Tesla claims that autopilot disengagement reports are very small (< 1 KB) and do not contain any actual sensor data, just things like GPS coordinates, speed, heading, etc.: https://twitter.com/greentheonly/status/1096322810694287361

Not the kind of information that you use to train computer vision algorithms. I found his claims to be an interesting read.
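For a sense of scale, here is a hypothetical sketch of what a metadata-only disengagement record of the kind described (GPS, speed, heading, no imagery) might look like. The field names are invented for illustration; the point is only that such a record easily fits under 1 KB, while even a single camera frame would not.

    import json, time

    # Hypothetical disengagement snapshot: metadata only, no camera frames.
    report = {
        "event": "ap_disengage",
        "ts": int(time.time()),
        "lat": 37.4116,
        "lon": -122.0759,
        "speed_mps": 31.3,
        "heading_deg": 164.0,
        "sw_version": "2019.8.5",               # made-up version string
        "trigger": "driver_steering_override",  # made-up trigger name
    }

    payload = json.dumps(report).encode()
    print(len(payload), "bytes")  # roughly 200 bytes, comfortably under 1 KB;
                                  # a single camera frame would be hundreds of KB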


Definitely interesting but I wouldn’t trust a rooted system to behave the same as a non-rooted system.

Nor is there any guarantee that the snapshot in time when he did his research is representative of everything they do.

And such logging behavior can vary by car, by location, and by software version.

I tend to think it is in Tesla’s interest to collect the most useful data, efficiently, and make good tradeoffs when doing so. They say they are gathering data on interventions that will help improve their neural nets. I believe them.


You might want to read the whole Twitter thread again.

The long section where he describes “campaigns” is Tesla's mechanism for capturing high quality data based on certain triggers, and including pictures.


However, the rest of the thread explains that Teslas are "triggered" to capture imagery data (in rapid succession, almost like a video) in a variety of situations, like being close to pedestrians/cyclists, not just disengagements.


You are not missing anything. They have that and gave details in their last Autonomy Day event.


The crux of the issue will be that autonomous capabilities will/have made cars overall safer. Tesla will be able to cite data and circumstances where the safety features saved lives, and likely more times than lives were lost.

The problem is that the lives that are lost when the safety features fail are different lives than would have been lost without it at all. The families of those killed in this manner will have their day in court.


> Tesla will be able to cite data and circumstances where the safety features saved lives, and likely more times than lives were lost.

It would be great if Tesla would share such data, however their safety report is only a couple sentences [1] and Musk just recently refused to share more safety data because it would allow people to "turn a positive into a negative" [2]

[1] https://www.tesla.com/VehicleSafetyReport

[2] https://www.youtube.com/watch?v=HqvatzjHGyk&t=47m17s


I don’t think juries are known for valuing statistics over gut feeling, and something tells me technophobia will win here.


Can we just make a law that the driver is responsible/liable for the car they drive, regardless of how they drive it? I mean, brakes failing while you are actively driving is one thing, but to completely surrender control, that is a choice.


But that's what Elon promises. He is careful never to publicly say it outright but his PR machinery wants you to believe this car can do things it literally cannot.


I really like a lot of what Tesla and Musk do, but their PR campaigns are absolutely repugnant. Like in this case where they try to make the NTSB out to be the bad guy when in reality they go completely against accepted NTSB standards in an attempt to control the narrative. It's awful.


If you surrender control to a doctor to perform a surgery, and the doctor decides to stab you in the heart, why aren't you responsible for being stabbed, since you chose to 'completely surrender control'?


Tesla is certainly on the knife's edge... they must be extremely confident


or delusional ...


How do you feel as a Tesla owner knowing that after your death, Tesla will publicly post that it was your fault, based on data from your vehicle?


Why does this TC article need to mention he was an Apple engineer? Just trying to fill space?


"Move fast and break things"

So it has to be for rapid technological progress to be made.


That news about this case is being spread everywhere irks me a bit, because people aren't considering the actual circumstances of the accident. Excerpt from the article:

"According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so."

This deserves a WTF. He understood autopilot makes errors, complained to his wife several times [0] that the car usually makes errors in that exact spot, and yet wasn't paying enough attention on a clear day with ideal driving conditions to commute safely.

[0] "Family members say he complained about his Tesla veering into the same barrier at the exact location of the crash and that he brought his car into the dealership several times to report a problem with the autopilot function." from https://sanfrancisco.cbslocal.com/2019/05/01/family-driver-d...


That's an excerpt from a statement made by Tesla that is quoted in the article. Tesla was reprimanded by the NTSB for making statements like this during the investigation process.


The article I linked in the footnote has quotes that are arguably more WTF than the Tesla statement. It quotes the wife describing how she had told him many times that the car was swerving in that spot.


Well yeah but he’s not the one suing. The family lost him and probably isn’t thinking rationally. Or, maybe they are thinking rationally. Lawyers probably took the suit on contingency right?


That's fair enough, but the problem I have is the way this event is being portrayed in news and social media. The implication everywhere is that autopilot just unexpectedly swerved into a wall and there was nothing the driver could have done. It actually was expected, and any reasonable driver could have easily prevented it, because it had almost happened so many times in the past.


> any reasonable driver could have easily prevented it,

Not really. The crash attenuator was missing because someone else crashed into it. That highway exit is ridiculous. As for faults, there is obviously fault on both sides. The driver should have been more alert and the "autonomous system" should have detected the obstacle and slowed down the vehicle before hitting the guard.


Both sides? Shouldn't there be some responsibility for the organization tasked with making sure roads have dividers that people don't crash against multiple times in a month?


What would you suggest Caltrans do instead, other than replace the attenuator more often?

I can think of various other things that Caltrans can place there for people to crash into, but at some point there will be the start of a rigid wall.


The repeatable problem suggests a lapse in the current layout. The fact that a new crash happened within two weeks suggests that replacing the attenuator should have happened faster than that.

That being said, I'm only pointing out that in this case there can be extra variables to account for beyond the car manufacturer and the driver. In the same way that speeding, running stop signs, and pedestrians not paying attention when crossing an intersection can all result in a collision, there is always also a bit of missing urban planning that could have helped avoid the situation in the first place. I believe the same would be true of this case.


This is the issue with wireless updates on any vehicle.

What if tomorrow Autopilot is updated and it begins occurring 2 miles down the road?


On the flip side, Tesla fixed this specific issue (and apparently unfixed it again) with over the air updates. If this was another manufacturer the cars would need to be recalled and it would likely be years before a fix was rolled out to all vehicles and therefore lives would continue to be in danger. So it isn't like over the air updates are inherently bad. The reality of the situation is that there almost certainly needs to be some level of government oversight over this type of thing.


I won't argue against over-the-air updates for everything, but when your 'autopilot' system can be updated any day for any reason, I would argue that it failing and killing you shouldn't be as excusable.


It is worth noting that Teslas don't update "any day for any reason". The updates are very rarely more frequent than once every several weeks and have to be manually triggered and approved by the owner.


>It actually was expected, and any reasonable driver could have easily prevented it, because it had almost happened so many times in the past.

Not true if it is the driver's first time on that road, at that time of day, on that software version.


These articles take off because the general (non-Tesla-owning) public believes "autopilot" is full autonomy and Tesla is covering something up.

If this was branded as a "driving assistant", people wouldn't bat an eye when the driver failed to maintain control.


As we've heard from other Tesla drivers on HN in the past: Tesla continues to deliver OTA updates, which occasionally causes people to think that previous issues have been fixed, especially if they haven't popped up again recently. Either they continue to avoid using Autopilot because they can't take the chance of finding out whether Tesla ever bothered fixing the problem, or they give it a test, think it's working fine now, and are lulled into a sense of security... right until it doesn't work again.


As a Tesla driver, I look out the windows when operating my motor vehicle. If my car crashes on AutoPilot or otherwise, it’s my fault.

When Tesla tells me I can stop looking while the car drives, instead of how they currently do it where it constantly reminds me to be alert, then I might change my position.

Until then it is abundantly clear to every single Tesla driver who exactly is responsible for controlling the vehicle. The car reminds the driver of this when activating AutoPilot and persistently while under way.

The name of the feature could be MagicCarpet, AutoDrive, AutoCruise, SmartPilot, Pilot, Jeeves, I don’t care what. The car says in big bold letters every time you activate the feature to be alert and ready to take over.

I also expect the performance of the Beta AutoPilot software to change with each update. Navigate on Autopilot didn’t exist when I first got my Tesla. Blind Spot warning functionality either. Adaptive cruise and merging behavior have both significantly improved.

Last week a car in the lane to the left of me put on its blinker and I could have sworn my Tesla slowed down to let them in. I did not notice the car make any rightward motion to enter my lane before my car slowed slightly. It was not a jerky deceleration but just a smooth increase in the distance just enough to maintain the follow distance with the new car in place. The car changed lanes in front of me and we kept driving along. I’ve never seen that before, and I wasn’t aware Tesla could do that based on optical recognition of the blinker alone, but I’m pretty sure that’s what happened. It could be the Tesla saw the car moving to enter the lane before I did, but it had certainly not crossed the dashed line before the deceleration.

Drivers flirt with danger when they allow themselves to drive distracted and stop paying full attention to the road, with or without an AutoPilot capability.

Two days ago I was driving without AutoPilot and was on a phone call on speakerphone (Bluetooth through the car speakers). It was a fairly contentious call, and even though I was hands-free and looking ahead, it was definitely the first thing on my mind. I was approaching a left-lane-must-exit in the middle lane of a three-lane highway when, at the last minute, a car in the left lane swerved in front of me to avoid the exit.

I probably had enough time to slow down without causing an issue behind me, but I hadn’t checked my rear-view in some time due to being distracted by the call. So instead I jerked to the right and moved partially into the right lane. A car in the right lane ~3 car lengths behind me then had to hit their brakes hard enough for their tires to squeal.

After reviewing the dashcam video (Tesla records if you plug in a drive) I was unhappy with my choice there. I should have slowed quickly and stayed in lane. If I wasn’t on that call I would have made a better choice.

But it's worth mentioning that if AutoPilot had been engaged, it would have made the better choice in that situation as well (since AutoPilot will, in fact, never swerve to avoid an obstacle).


> instead of how they currently do it where it constantly reminds me to be alert, then I might change my position.

So you're a new Tesla owner then?

Because a few years ago, it'd only tell you to put your hands on the wheel once every 15 _minutes_...


Relatively new, I bought a TM3 last July.


> As a Tesla driver, I look out the windows when operating my motor vehicle. If my car crashes on AutoPilot or otherwise, it’s my fault.

Legally, both you and Tesla are at fault if the car crashes. You, for not maintaining full control of the car. And Tesla, for developing the AutoPilot system which directly caused the crash. In this context, Tesla is the "proximate cause" of the crash, so their liability generally would supersede your own. Moreover, in a lawsuit, you and Tesla would be jointly liable for damages, but any sane lawyer would settle with you for a small amount and go after Tesla for the remainder.


You’re the lawyer. Is there case law to support this? I ask not to mean “citation required” but because I would be very interested to read it.

I don’t know if and how much Tesla can disclaim responsibility for the crash based on warnings written into the UI and user manual and TOS.

For example, I’m fairly certain that autopilot systems on boats do not absolve the skipper of liability if a boat crashes, and suing a small-craft autopilot system manufacturer for maintaining (or not maintaining) a set course into a collision would be laughable.

I would also say, if we can’t ship self-driving capabilities without some way of managing and constraining the inherent liability, we will miss out on saving tens of thousands of lives just because we cannot save all the lives.


Yes, just break open a 1L torts treatise and there are dozens of cases.

The difference between autopilot on a boat and Tesla's Autopilot is a matter of agency/autonomy. Boat and plane autopilots mostly just go in a straight line--they make no decisions.

Tesla's autopilot functions autonomously from the driver--and that, legally, makes all the difference, because it can make different decisions from what the driver would do. The driver still bears some responsibility for their failure to oversee and correct TA, but TA bears direct responsibility.

The analysis would be different if TA was just a glorified cruise control or lane-keeping system.


Autopilot systems on (small) boats neither have, nor advertise, nor imply that they have any form of collision detection, let alone avoidance.

Tesla on the other hand describes its system as being safer than human drivers at avoiding accidents.


Why does that matter? The next person who comes by with autopilot on who doesn't know that might also die.


Sounds like his smartphone is about as liable as Tesla here. If he hadn't been distracted, this wouldn't have happened. He knew his Autopilot doesn't handle this section of the road.


I'm confused here.

Has Tesla publicly stated which sections of the road autopilot doesn't work on ?


The docs say to keep your hands on the wheel and stay alert at all times.

Never mind the Tesla CEO taking his hands off the wheel and not watching the road on national television.


I'm fully aware of their recommendation.

But if Tesla is aware that certain parts of the road are more likely to cause incidents and they haven't communicated that, well, that sounds to me like criminal negligence.


So like when a manufacturer puts a label on a set of knives saying “warning, bottom edge is sharp” something like that? Except popping up on the screen as you drive every five seconds so you can read about the hazards you are seeing outside the windows?

Nanny car, anyone?

I think drivers are already well aware that roads contain an ever changing variety of risks, and they can see the signs of these risks with their eyes.

The driver in this case was acutely aware that the beta software could not yet properly handle that segment of road. So much so that he even visited the Tesla service center specifically to let them know about the issue.


You know there's a difference between a kitchen knife and a 1 tonne metal structure travelling at 100km/h on a freeway.

People are dying from Tesla failing to let their users know when and where autopilot should be used.


Oh okay then let's tell people every five seconds what to do or not do with each knob and button in every car. Or why not every two seconds? Maybe one?

Forget Tesla, we should have these warnings in all cars!

"You are entering a curve. It may be unsafe to let go of the steering wheel here especially if you don't have autopilot."

"You have encountered another patch of snow. The wheels might slip on this spot. Autopilot will handle it just fine but in case it doesn't, stay attentive and keep your hands on the wheel."

"You are starting to go downhill. Warning: brakes may be required."

"The road is curving to the left 7 degrees. At this time and date the sun's rays will be entering your windshield at an angle that may obscure your view of distant objects to your right."

"Last night's rain left some fallen wet leaves strewn on the opposite side of the roadway. Watch for oncoming cars who have lost control due to conditions."

"Your blood sugar is getting low. Please get something to eat. And you haven't called your mother in a week. Make sure you call her soon to prevent dangerous distraction from incoming calls."

These warnings are very important, people are dying in droves, there are at least 2 reported incidents, from all car companies not telling everyone about what they can see for themselves.

I mean come on, it's a 1 tonne metal structure hurtling down the freeway at 100km/h I'm telling you. People are dying and it's all down to the car companies not providing these messages, which we totally have time to read while driving, every one second.

/s


Surely there's some sort of "autopilot does not work on all roads" disclaimer, right? That's why you're supposed to keep your hands on the wheel and pay attention.


Just like there is a warning on cigarettes, yet we have stopped allowing smoking in enclosed public places. Kind of like driving on a freeway...


Yep, there is.


It’s clear from what Elon said about high definition maps that Tesla’s stance is that roadways are dynamic environments that can change at any time.


> Has Tesla publicly stated which sections of the road autopilot doesn't work on ?

Yes: none of them. You are expected to be able to take control at all times.


Your argument sounds the same as: people who smoke know that cigarettes cause cancer, in fact it says that on the pack, still they decide to smoke. So, if they get cancer and die it's their fault not the cigarette manufacturer's.


Cigarette manufacturers got sued into oblivion because they suppressed science and insisted their product was beneficial. Now that they acknowledge the dangers, nobody’s suing.


Of course people are still suing. And in many countries governments are continuing to sue as well.

Just Google it. Plenty of examples.


How many of them are not based on previous lies and fraudulent claims by tobacco companies?


People are suing based on injury and illness that occurred during the period this knowledge was being suppressed, etc. Not because they took up smoking last year and are sick.


Knowingly manufacturing a product that has such a high risk of causing cancer as cigarettes do is certainly unethical in my opinion. But today, when that risk is known to consumers (as opposed to in the past, when it was not), it is ultimately the responsibility of anyone who decides to start smoking if doing so ends up giving them cancer.

I say this as someone who is a low volume consumer of another tobacco product (Swedish snus) that is also bad for my health and which also has a risk of giving me a form of cancer (oral cancer). I am betting on the fact that I keep my use of the product on the low end to save me. But I accept the fact that I am to blame if this ends up giving me cancer.


Autopilot arguably prevents more accidents and deaths than it causes, just to keep this in perspective. So the tobacco analogies can only go so far here.


Keep this in perspective: based on the number of avoidable deaths in Tesla cars driving on Autopilot in otherwise favorable conditions, it's arguable that Autopilot causes more accidents and deaths than it avoids, especially given that Autopilot is used in the lowest-risk driving conditions.


Yeah good to keep an open mind on both possibilities, but it’s overwhelmingly likely that autopilot will only get much better over time.


Those deaths were not caused by autopilot though.


Which is right.


Do you disagree with that?


Yes. What is your point?


Tesla should get taken to the cleaners for this, I hate that these cars w/"autopilot" are operated on the same streets I use.

They should focus on making quality, well-performing electric cars. Stop using us all as beta testers for an autonomous future most of us never asked for.


Living around Silicon Valley with all the Tesla drivers here, you learn real fast that they make for some of the worst drivers out on the road. I regularly see them running red lights and just generally not paying attention. They lean too hard on the Autopilot and dick around on their smartphones, which is probably exactly what this guy was doing. It's bad for Tesla drivers and it's bad for drivers around them. Hope Tesla loses this lawsuit just because of that.


Is it that Tesla drivers are actually worse, or that Teslas stick out to you more than other mundane cars and are pretty common around Silicon Valley?


Good point, it could be that they stick out more to me. They are super common in SV. I've personally had Teslas run red lights in front of me on two different occasions within the last couple of months, and those particularly stand out in my mind since they put my life in danger mid-intersection. I ride a motorcycle and suspect the Tesla Autopilot is particularly bad at spotting motorcyclists.


Generally, the more someone pays for their car, the more entitled they are and the more laws they break.


I think self driving cars should have both high dynamic range cameras and LIDAR and maybe time of flight cameras. Input from a LIDAR system would be much more likely to detect that barrier, and computer vision via a camera much more likely to be fooled. I think an investigation into why the computer vision system failed to detect a barrier under clear daylight conditions will show the negligence on the part of Tesla. Lane lines are frequently not well marked, and sunlight glare is a difficult problem for cameras. However, you have to be able to detect a concrete barrier, in the worst of conditions. Does Tesla have in place some kind of determination of its lane detection accuracy and then alert the driver that it is turning off auto-pilot when accuracy is low?


Yes it shuts off with a beep if it can’t handle current conditions. In case you think this kind of abrupt shutoff sounds dangerous, keep in mind this is in the current generation of the system which relies on a human driver being attentive and ready to take over at all times.


> In case you think this kind of abrupt shutoff sounds dangerous, keep in mind this is in the current generation of the system which relies on a human driver being attentive and ready to take over at all times.

What is a driver to do when the system randomly decides to brake? [1]

Phantom braking has been an issue for awhile, and has yet to be acknowledged by Tesla.

[1] https://www.reddit.com/r/teslamotors/comments/b5yx1o/welp_th...


They do acknowledge it. You asked what to do. Apply gentle force to the accelerator pedal. And that video is from a super old software version.


> they do acknowledge it.

Where has Tesla acknowledged phantom braking as a persistent issue? It has been around since 2016.

> that video is from a super old software version.

Looks like the video was taken a month ago. The driver had the version of software that Tesla gave him. Drivers can't choose which software version they get, so if it was old, that's on Tesla, not the driver.


You can choose not to update.


The above example is about a case where a driver did not receive an update. He can't choose to download it.


Guidelines definitely say keep your hands on the wheel and be ready to take over.

plus it beeps at you to take over.


I think it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information. i.e. it's not acceptable for automation to fail danger, it must fail safe, including even if all it can do is give up (disconnects, warning tone, hands control over to the human driver).

I think it's reasonable, in the narrow case where primary control is asserted and competency is claimed to be as good or better than a human, to hold automation accountable the same as a human. And in this case, if this driver acted the way autopilot did based on the same available information, we would say the driver committed suicide or was somehow incompetent.

I see this as possibly a case of automation committing involuntary manslaughter (unintentional homicide from criminally negligent or reckless conduct).


> ... it's not acceptable for automation to fail danger, it must fail safe...

The scary thing is, by definition of what is being automated there can be no fail-safe. The only "safe" is to stop the car. If the car detected it was going to crash, then it wouldn't have had the failure in the first place. By the time the failure might be detected, it is already too late for "safe".


It's worse than that. Sudden braking is also dangerous.


> or was somehow incompetent

If they survived, they would have lost their license and been charged with some form of reckless driving.

The underlying issue is that manufacturers are beta testing on the road and almost literally playing with people's lives in the process.

Legislation, although it would hurt innovation, is appropriate in this case to ensure that the race to the finish doesn't leave a blood-stained legacy.


Tesla is the only one claiming to have a working solution and is selling it to the general public.

Everyone else is far more cautious.


You can't expect the human driver to be ready to take over from autopilot in any meaningful way. If the driver is alert enough to the conditions and physically ready at the appropriate controls to do that, then they are already driving. Still, despite it being based not on science but on legal reasons, companies pretend that humans can "take over" when alerted. Maybe a lawsuit will change that idiocy.


A few people dying of their own irresponsibility does not warrant ruining it for the rest of us.


There are other people on the road, it's not just the drivers that are impacted by this.

Calling it "Autopilot" is what's irresponsible.


Pilots still need to take over sometimes despite the autopilot. The name is fine.


A lot of people said the exact same thing about lawn darts and a whole host of products and activities, but sadly, society doesn't go that way.


I'm all for the technology and accept that it will be fallible; I'm just not for putting the blame for its errors onto the human non-driver.


This sounds good, but is impractical.

First, a human who is paying attention makes better decisions than a human who is surprised by being given control during a crisis. Therefore, when automation sees a crisis, the relevant standard is whether it would do worse than a surprised human, not how it compares to an alert driver.

Second, the analysis that concludes that we're now in a specific situation where a human would do better is often beyond the powers of automation. For example it can recognize that we are now in a class of cases where, on average, automation does better. But whether it will in any particular case may be beyond its reasoning power.


This is called the Halting Problem. We don't waste time trying to make machines that do that because it's theoretically undecidable.

Furthermore, you're then throwing another neural network (which has to be trained, with no proof that the dataset is the full dataset required to fully enumerate the problem space) at the job of detecting problem areas in the data going into the driving NN for edge cases. Repeat ad absurdum, or until most of your self-driving car's energy is consumed trying to solve the silicon equivalent of Plato's Cave.

Or... you just teach people to drive the blessed car, which is the absolute best a theoretical NN implementation would converge to anyway.

The only benefit of a theoretically perfect NN that simulates the average driver is that the car doesn't run on a processing system that is adversely affected by the recreational and voluntary (on the part of the processing hardware) application of ethanol to the processing matrix.

Well... Thermally speaking, it could end up causing serious problems if someone were dumb enough to build that capability into the car for some daft reason, but we're talking existential failure of the computing hardware rather than the gradual degradation of functionality you'd see with a brain. Still, I think everyone gets the point, and I can stop torturing this poor analogy.


What you say has just enough technical detail to seem entirely plausible to someone who doesn't understand technology.

It is also entirely wrong.

The problem that I am describing is not the Halting Problem. A theoretically perfect NN is also able to do a lot better than any person could: it can have binocular vision in multiple directions, mix that with radar awareness of the local environment, and use electronic communication with other cars to be aware of obstacles that are out of sight. This technology can drive more quickly and safely than humans ever could.

This sort of thing is admittedly some time off. But it is doable, and it is almost certain to happen within our lifetimes.


>It's also entirely wrong.

It's not. You're asking for a neural network capable of making generic inferences about the output of another neural network, in real time.

That's the Halting Problem: will the data I'm feeding in cause the program I'm running to encounter an edge case (return/not return)?

You can't generalize a neural network beyond its training set. Cheekiness of the previous post aside, I stand by my statement.

Comments that hail the almighty neural network as being anything more than an interesting exercise in feature extraction/input output mapping/information synthesis woefully underestimate the fundamental limitations of the technology.

With current technology, short of everyone participating in your network of street cars (hint: many won't) and being a good agent (they won't be), you'll be at the mercy of the same forces that make "driving" such an interesting task today, just in different forms and with more processors involved, and when the NN rewrites need beefier hardware, everybody's vehicle gets recalled.

Throwing in second- and third-order social effects (i.e. implementation of multi-spectrum panoptic surveillance networks for exploitation, the possibility of remote exploitation of the driving software, decay of the actual skills required to drive safely, and the sudden stranglehold position the self-driving vehicle manufacturer gets when their customer base grows large enough) leads me to the conclusion that there are a lot more problems to be solved before self-driving anything becomes a no-brainer turnkey solution.

Consider that now, with thermal cameras ubiquitous, there is debate over whether your thermal signature is "public information" that can be collected and analyzed by law enforcement sans warrant. Next we'll get LIDAR on cars, plus software made to tap into the LIDAR feed that, with a bit of setup, could read vibrations off glass. Does everything you say in your home become public information too, just because the cars we drive have become mobile sensor labs?

That sound cool? Not to me. People need to think beyond first-order outcomes, as programmers, system developers, and users alike.


A neural network capable of making generic inferences about the output of another neural network, in real time, just needs to be able to run a simulation of that other network. While we do not currently have the resources to do it with a human brain, it is in principle quite doable.

The critical difference between this and the Halting Problem is that solving the Halting Problem by running a simulation would require deciding in finite time what another system does over infinite time. This only requires simulating in finite time what the other system also does in finite time. That requires a better system, but not an impossibly better system.
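
To make the distinction concrete, here's a toy sketch (my own illustration in Python, with made-up names, not anyone's actual autopilot code): a property of a computation that is guaranteed to finish within a step budget can be checked simply by running it, which is exactly the escape hatch the Halting Problem denies you only in the unbounded case.

    # Toy illustration (made-up example, not real autopilot code): checking the
    # output of a computation that must finish within a step budget is decidable --
    # you just run it. The Halting Problem only bites when the budget is unbounded.
    def bounded_check(program, inputs, step_budget, predicate):
        """Run `program` (a generator) for at most `step_budget` steps;
        if it finishes in time, test its return value with `predicate`."""
        gen = program(inputs)
        for _ in range(step_budget):
            try:
                next(gen)
            except StopIteration as done:
                return predicate(done.value)   # finished in time: inspect the result
        return False                           # budget exhausted: fail safe

    # A trivial stand-in "planner" that returns a clamped steering angle.
    def toy_planner(obstacle_offset_m):
        yield "sensing"
        yield "planning"
        return max(-0.5, min(0.5, -obstacle_offset_m * 0.1))

    print(bounded_check(toy_planner, 2.0, 10, lambda angle: abs(angle) <= 0.5))  # True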

Moving on, you are over-estimating the requirements of the system that I describe. Today, with humans, a system like Waze can provide warnings about the road ahead, which is a useful assist to human drivers. This is with only a very small fraction of drivers using the program, and even fewer actively registering hazards like "object on road". And yet, it is useful.

Automated driving systems can participate in a similar system, only they will be more likely to provide information, and their information can be more detailed, such as which lane the foreign object is in. It doesn't take a lot of data about what is out of view ahead to improve driving by a lot. But humans are built to pay attention to only one thing at those speeds; automated systems can integrate information from multiple sources. Which means that they can be better.
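
As a rough sketch of the kind of shared hazard data I mean (field names and thresholds are made up for illustration; this is not Waze's or any carmaker's actual format):

    from dataclasses import dataclass
    import math, time

    @dataclass
    class HazardReport:
        lat: float
        lon: float
        lane: int            # 1 = leftmost lane
        kind: str            # e.g. "debris", "stopped_vehicle", "lane_closure"
        reported_at: float   # unix timestamp

    def relevant_hazards(reports, my_lat, my_lon, radius_m=2000, max_age_s=600):
        """Keep only fresh reports within a given radius of the vehicle."""
        def dist_m(a_lat, a_lon, b_lat, b_lon):
            # crude equirectangular approximation; fine over a few kilometres
            dx = math.radians(b_lon - a_lon) * math.cos(math.radians(a_lat)) * 6371000
            dy = math.radians(b_lat - a_lat) * 6371000
            return math.hypot(dx, dy)
        now = time.time()
        return [r for r in reports
                if now - r.reported_at < max_age_s
                and dist_m(r.lat, r.lon, my_lat, my_lon) < radius_m]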

Yes, what they are doing is simply feature extraction/input output mapping/information synthesis - but in principle nothing more is needed to drive better than is possible for humans.

At the moment, humans are better. But it is not impossible that computers can become better. It is, in fact, inevitable.


Well, this brings to light the concept from the movie I, Robot. Is it a crime when an AI kills someone, unintentionally or not, or is it an industrial accident? If it's supposed to replace a human task due to a "level of intelligence", is it still taxed as equipment or as an employee? Is the company (due to equipment failure) or the AI (choosing to take an action on its own) at fault?

To be fair, these questions need to be hard lined pretty soon.


> these questions need to be hard lined pretty soon

I’d argue that they’re a long way from being an issue.

Firstly, current AI are far less ‘intelligent’ than almost any animal. Even a mosquito has more brain power, and we don’t think twice about killing mosquitoes.

Secondly, even if an AI had similar intelligence to a human, there is no reason to believe it would be a moral creature, capable of making moral judgments, and being judged as such. Our morality evolved over thousands, probably millions, of years (or if you prefer, it was granted by some divine power). Either way, intelligence and morality aren’t synonymous.


Don't know why this is downvoted - it's a good statement of the essential issue.


> it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information

I don't think anyone disagrees on that. The question is: is the software worse than a human driver? Do we have enough data for a statistically significant judgement on that? Is it even autonomous enough to say anything either way? Like, if the driver is required to pay attention anyway, can the software be blamed for anything in the first place? Those are the questions; I don't think there is much point in saying "software must be good!"


>> it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information

> I don't think anyone disagrees on that.

I disagree on that. If there's an autonomous vehicle that is better than a human in most situations, and worse in a few situations, such that the overall accident/death rate is lower, and there is no reasonable way to identify the rare dangerous situations in time to disable the autopilot, I would want to drive that car and would advise others to do so.

In fact, if there were an autonomous vehicle that was almost exactly as safe as a human but slightly more dangerous (say, a 10% higher death/accident rate), I would frequently use it because the large benefits outweigh the minor statistical costs. (Indeed, I use a car at all because of its benefits over walking, busing, or staying at home, despite the higher rate of death.) If other people understood the risks, I would suggest that they do likewise.
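
For a sense of scale, a back-of-envelope calculation (my own rough figures, roughly the US ballpark, not a claim about any particular vehicle):

    # Assumed figures: ~1.2 traffic deaths per 100 million vehicle miles,
    # ~13,000 miles driven per year by a typical driver.
    baseline_fatality_risk_per_mile = 1.2e-8
    annual_miles = 13_000
    extra_annual_risk = 0.10 * baseline_fatality_risk_per_mile * annual_miles
    print(f"{extra_annual_risk:.1e}")   # ~1.6e-05, i.e. roughly 1 in 64,000 per year

Small in absolute terms, which is the trade-off I'm describing.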


We don’t expect all kinds of drivers and vehicles to fit the same safety bell curve. You’ve made an assertion, but what legal framework are you using to treat this particular human-machine interaction differently without introducing a whole new class of liability for humans and traditional manufacturers?


>but what legal framework are you using to treat this particular human-machine interaction differently without introducing a whole new class of liability for humans and traditional manufacturers?

Humans are not robots?


So many people here have hard-ons for Tesla hate.

The guy fucked up, bad. Tesla is not at fault here.

>"According to the family, Mr. Huang was well aware that Autopilot was not perfect and, specifically, he told them it was not reliable in that exact location, yet he nonetheless engaged Autopilot at that location. The crash happened on a clear day with several hundred feet of visibility ahead, which means that the only way for this accident to have occurred is if Mr. Huang was not paying attention to the road, despite the car providing multiple warnings to do so."



