Tesla that crashed in Autopilot mode sped up before hitting truck: police (theguardian.com)
110 points by pmoriarty on May 25, 2018 | 182 comments



Tesla "Autopilot" cannot see stationary objects. It has a radar, but it is 1-D and low resolution. It uses this to localize other moving cars. In theory, it could use this to see a highway divider or firetruck it's about to hit, but since it's 1-D that would also mean that it would have to slam on the brakes when it sees an overpass coming up, because it can't tell the difference between an overpass and a highway divider or a firetruck. It can assume that overpasses aren't driving at 60mph, so it will see other cars. The Tesla algorithm is "look for lane markings and try to stay between them, and don't hit any moving vehicles. If there is any stationary object, including another a vehicle in your lane, the "Autopilot" will plow right into it. If they're not explaining this to customers, it's negligent.


> If they're not explaining this to customers, it's negligent.

Offering a footgun to consumers which puts other people in danger is a questionable choice.


How is this a footgun significantly more than the plethora of other vehicles with the same functionality?


The problem is calling it "Autopilot" in the first place compared to other brands. If you call it "Autopilot", people expect it to be an autopilot.


This is kind of a beating-a-dead-horse discussion, because, as others have pointed out, "autopilot will fly you into a mountain" (complete with warnings).


But does the average driver know that? It is their perception that's important here. The Utah driver expected the car would not fly her into the mountain, so to speak. Naming, marketing context of self driving alongside this feature, and driver instruction are all likely part of the puzzle.

If you think the name has nothing to do with it, would you also say that if it had the name "Deluxe Cruise Control" people would treat it the same? I'm not so sure about that. Engineers maybe.


I agree that the name issue is a red herring, but not for this reason. Autopilots are operated by people who are trained in their use, and, for the most part [1], understand their limitations. 'Understanding their limitations' is exactly the issue here.

[1] Following a number of WTF-type accidents, there is some concern that airplane automation has become too complex for pilots to reason about when it partially fails, but if that is actually the case, it raises the bar for all partial automation, including for cars.


Pilots are professionals who are trained and paid to use autopilot.


It's a dangerously misleading name in aeroplanes too.


You are confusing autopilot with unmanned vehicle.


That's what autopilot means to me; the two seem synonymous, and their marketing plays on that confusion too. They could just call it "assisted driving" to be more honest.


Autopilot has a technical meaning which this is totally compatible with; however, it has a common meaning that diverges.

The common meaning has been derived from the technical one and is affected by some common misunderstandings, such as:

"autopilot flies planes all over the place and hardly ever runs into anything" becomes "autopilot is very safe", rather than "autopilot doesn't run into anything because in the sky there is not very much to run into."

I guess Tesla should have named it something else, but this kind of mistake is quite easy to make and I would say is the norm if anything.


I've heard about a significant number of plane crashes with autopilot involved; however, I'm 42 years old. Maybe the meaning of the word has changed with the new generation. How old are you?


There is nothing in the post you are replying to that suggests this is any more of a footgun than genuinely similar cases. Any potentially lethal and undisclosed or insufficiently emphasized corner case is a footgun, and not responding to stationary objects is not even a corner case for collision avoidance: it's front-and-center.


What do you mean by "1-D"? If it doesn't measure height, then it would still be 2-D.

EDIT: it's a 1-D array of distances, ie, an array of polar coordinates on the 2-D plane. So, it's 2-D data.


Definitely, these things are subject to differences of opinion in terminology. What I mean is that Tesla's radar produces something like one to three 1-D arrays as output. Those arrays are probably distance, intensity, and doppler. A normal camera produces three 2-D arrays of R, G, and B. Something like a Kinect produces arrays of R, G, B, and depth/distance. When one of the parameters is distance, sometimes people talk about a sensor having an extra 0.5D. A Kinect might be 2.5D. In this world, the Tesla radar is 1.5D. Fundamentally though, it can't discriminate by height, and so it can't solve the overpass/overhead sign problem. In a very real sense, that is why these accidents have happened.
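To make the shape difference concrete, here's a rough sketch (Python/NumPy, with made-up array sizes rather than Tesla's actual data format) of what those outputs might look like side by side:

  import numpy as np

  # Hypothetical radar frame: one value per azimuth bin, no height axis.
  # Sizes are illustrative only, not Tesla's actual specs.
  n_azimuth_bins = 64
  radar_range     = np.zeros(n_azimuth_bins)  # distance per bin (1-D)
  radar_intensity = np.zeros(n_azimuth_bins)  # return strength per bin (1-D)
  radar_doppler   = np.zeros(n_azimuth_bins)  # relative speed per bin (1-D)

  # Hypothetical camera frame: a full 2-D grid per colour channel.
  height, width = 720, 1280
  camera_rgb = np.zeros((height, width, 3))

  # The radar arrays only index "direction along the road", so an overpass
  # and a stopped truck at the same range and azimuth look identical.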


I got it later - I'm not used to the terminology, thanks for the explanation :)


So it's 2-D if it returns a 1-D array of distances?

If it returns a 2-D array then it's 'seeing in 3-D'.

So I guess that works out.


Ok, now I understand the confusion. The array is 1-D, but it is a representation of polar coordinates on the 2-D plane. So, the data is 2-D.


But the Autopilot has cameras, and therefore it has 3+1D data (2D frames, 1D inferred depth from stereo or training, 1D time from frame diffs). Even with a 30fps frame rate at 60mph, two frames from a video feed would see the truck move closer by a meter. The network can be trained to distinguish a truck in the lane from an overpass, so those two frames should be sufficient to signal danger. Instead, the car accelerated over 70 meters before the human, again using just her eyes, applied the brakes.

This is an edge case, and clearly something that requires more training to avoid. The challenge in edge-case learning is to encounter them often enough, or perhaps create them artificially, and then to ensure the learning can be transferred and generalized to all similar situations.
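For what it's worth, the back-of-the-envelope arithmetic behind the "closer by a meter" claim above (assuming a stationary truck and the illustrative numbers below):

  speed_mph = 60
  speed_mps = speed_mph * 1609.344 / 3600   # ~26.8 m/s
  fps = 30
  closing_per_frame = speed_mps / fps       # ~0.89 m of closing per frame
  print(round(closing_per_frame, 2))        # 0.89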


This is not even the first time they hit a giant red stationary fire truck.


Seems like it happens whenever a car in front moves out of the way. It goes full stupid-cruise-control mode and just accelerates to the stored speed. Of course there's a lot of noise in terms of things in the stationary reference frame, but still, big red firetrucks don't exactly look like lane markings.

At the very least, as a driver, I'm slightly cautious when I see a car move out of the lane in front. I feel like the solution lies in: (a) separating the signal from the noise in the set of things moving with the ground, and (b) getting the car to 'understand' why other cars change lanes. Sometimes it's so they can avoid obstacles in the road. It almost feels like cars need to have a theory of mind of other cars. I certainly drive with an intuition of driver intentions stuck in my head.


The thing is, Tesla will never be an autonomous car or a reliable semi-autonomous car with its current hardware. Elon needs to admit it (he insists this isn't the case) and he needs to reliably convince his customers of it. Tesla needs to see a couple of cars ahead in order to drive safely (these are not edge cases; this happens in real life a lot), but it can't see that far ahead without LiDAR, and Elon insists they don't need LiDAR.

The problem is you can't have a pre-existing bias when working with this kind of technology, you need to be able to adapt to whatever changes that need to be made.


Well it's not completely clear to me that cameras can't look a few cars ahead. If we can, cameras can too. Maybe they're not smart enough yet, but I don't see the fundamental limitation. Apart from that, I agree that cheap + effective LIDAR would help significantly to get autonomous vehicles on the road now.


> If we can, cameras can too.

A camera is not a replacement for how our eyes work: how our brain distinguishes between objects, understands from prior experience, and can make independent decisions on the fly. Neither is LiDAR, but with LiDAR and good machine learning (i.e., Waymo), you can drastically improve the reliability of your autonomous system.

Elon is insisting on a no-LiDAR system while at the same time advertising Tesla Autopilot as an autonomous system, and he has said multiple times that all Teslas on the road are already equipped with the hardware needed for fully autonomous driving. This is dangerously false.

There is not a single serious autonomous car out there (at least that we know of) that uses a non-LiDAR system.

The Reality Distortion Field is strong with Elon. Unlike Steve Jobs and Apple products, this one will kill people.


So I do a lot of work with both vision and lidar data, and am pretty intimately familiar with the public state of the art in terms of perception systems that rely on each or both, if not planning.

I don't think you need lidar.

I'm not saying that vision-only won't take longer, because it will, but I believe it'll probably happen unless the cost of lidar falls a great deal.

Have a look at, for example, "Learning to See in the Dark" (the CVPR 2018 one) or DensePose (the FB research one) to get a sense of how quickly difficult problems are being solved.


The cost of LiDAR has been falling for a while now: https://arstechnica.com/cars/2018/01/driving-around-without-...

Waymo's LiDAR is custom made and costs a lot less than what is available on the market. https://arstechnica.com/cars/2017/01/googles-waymo-invests-i...

I don't think it's about cost. Elon has a personal bias, or maybe he didn't account for the fact that in the not-so-distant future LiDAR costs will go down drastically.


  But the Autopilot has cameras, and therefore it has 3+1D
  data (2D frames, 1D inferred depth from stereo or
  training, 1D time from frame diffs).
When making safety-critical real-time systems, we prize simplicity as simple code has fewer bugs and can be audited more thoroughly.

I mean _really_ simple - to deploy an airbag, early systems were as simple as checking the accelerometer works at startup, then firing the airbag if the acceleration passes a threshold consistent with a crash.
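In spirit, something like this (a caricature in Python, not any real vendor's firmware; the threshold and interfaces are made up):

  CRASH_DECEL_THRESHOLD_G = 20.0  # hypothetical trigger level

  def airbag_loop(accelerometer, fire_airbag):
      # Refuse to arm if the sensor self-check fails at startup.
      if not accelerometer.self_test():
          raise RuntimeError("sensor fault: not arming")
      # After that it's just threshold-and-fire.
      while True:
          if abs(accelerometer.read_g()) > CRASH_DECEL_THRESHOLD_G:
              fire_airbag()
              return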

If your safety system relies on multiple cameras, stereo vision, camera motion tracking, and a neural network, it ain't simple.

That's why IMHO every self-driving car needs LIDAR - you need to be able to detect the firetruck or highway divider in front of you without relying on a million lines of code.


This is not an edge case. My understanding is that the Tesla Autopilot uses the camera solely for lane marker detection. The radar is used solely for moving vehicle detection. There is no sensor dedicated to detecting a stationary object in your lane. That's up to the driver, and that's why Tesla emphasizes the number of seconds the driver had their hands off the wheel in each of these crashes.


Precisely this scenario you are describing is in the owner's manual that comes with the car.


Nobody actually reads the manual though.


I read manuals, cover to cover. Admittedly, I skip legalese disclaimers because they are intellectual noise. However, many manuals contain golden nuggets of information revealing that most people misuse a certain product or don't use it the best possible way.

A well written manual can be a teaching lesson about the relevant stuff given by domain experts for regular users. Who'd say No to that?


Hacker News commenters seem to be a poor representation of the average population.


Heaps of people read the manual.


I feel like this is the kind of thing you should be made aware of (for real, not just in impenetrable terms of service) before you even buy the car.


I feel like certain people will constantly complain about "not being made aware" of things no matter how clearly you state them. It is obvious in hindsight that many owners do not read the manual, don't read the on-screen warnings and skip the first-time intro. But they still complain that "they weren't made aware!" Ultimately, if Musk himself took a trip to each individual owner and screamed the terms of service in their face at the top of his lungs, we would still be hearing people saying they weren't made aware.

Legally they were made aware; if you want to change the law, do so. But please stop letting users selectively play morons when it turns out they didn't RTFM. That is on them and should be on them.


The primary sensors are the multiple cameras, so this criticism is not relevant. The radar is primarily for low-visibility situations (according to marketing). Of course the use of cameras as the primary sensor is itself controversial, but in theory they do disambiguate the situations you describe regarding stationary objects, limited resolution, and degrees of vision.

On balance: https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-s...


Also, there was a car between the truck and the Tesla for some time. Since they are not specific about the timing, the car in front might have only just swerved around the truck, which leaves little time for sensors mounted below the lights to even register that there is something else. Does Tesla also have sensors at the top to see two-plus cars ahead?


My gripe with this argument is that at least some humans can react fast enough to start braking under these circumstances. I don't mean all drivers, but as an example: I raced go-karts (just as an amateur, but I took some lessons and also some racing classes in cars with 300 hp) and sometimes my reflexes while racing were surprising, even to myself, narrowly avoiding collisions at 100+ km/h on tight circuits. With that said, I'd have thought that sensors and computers could be much faster than even my trained reflexes.


In a continuous system like racing, there are two more things to consider: prediction and relative motion. Avoiding something you can predict, or avoiding something which is moving much slower than 100+ km/h relative to you, isn't too difficult (I do it all the time on motorbikes and scooters). But this perception can fool you into thinking you have more control than you do - should something really unexpected occur, your reaction time will surprise you by being slower.


And what about the ultrasonic sensors?


source?



Tesla has no autopilot or self driving capabilities. It is just something called Adaptive Cruise Control and Lane Assist in other vehicles with some minor additions. Tesla’s marketing gives the impression that it is much more, but it isn’t.


Precisely. Although it makes reasonably good cars, Tesla still relies heavily on hype and astroturfing (mostly on sites where enthusiasts/potential buyers congregate) [0] to get the word out and generate gargantuan amounts of intrigue.

Choosing this 'hype-heavy' model of "advertising" can sometimes thrust customers into relying on those hyped-up features way more, thus leading to accidents like the one the article talks about.

I'm saying this with the firm understanding that the majority of the responsibility for safety lies in the hands of the driver in command, but letting Tesla's hype-driven marketing strategy off the hook would be unfair to the customers.

[0] https://www.talkwalker.com/blog/tesla-marketing-strategy-soc...


After talking to quite a few drivers and driving thousands of miles on Autopilot myself, I don't think people are confused by the name. It becomes pretty clear very quickly what the car can or can't do. I think people are poor at estimating risk, and we would have these accidents whatever you call these semi-autonomous features. The interesting thing for me is that despite the shortcomings of the current Autopilot, I wouldn't go back to a car without it. It lowers the strain of driving sufficiently to really be worth it. And I am a driver with a hand on the wheel and eyes on the traffic, all the time.


Then tell me: why do we hear news about Tesla cars all the time and none about VW, BMW, etc., which have had adaptive cruise control for quite some time?


Because those companies aren't putting out videos like this one: https://www.tesla.com/videos/autopilot-self-driving-hardware...


Tesla's brand power has substantially raised the public interest factor in any story involving it, including accidents. There are no single-edged swords in PR, it seems.


Better clickbait is my speculation. I haven't seen any comparative statistics on adaptive cruise control accidents. It would be interesting to see. But then the features are quite different, so they're hard to compare.


Why do we see headlines every time an iPhone spontaneously combusts, but nothing when there are (isolated) incidents with Samsung, Huawei, LG, etc.? Because one brand generates clicks and the others don't.


Samsung phone combustion gets coverage. You may recall a while back where they had to recall a phone due to them bursting into flames, and some airlines specifically banned the combusting model.


I had that in mind, which is why I specified isolated incidents.


Tesla isn’t Apple. Tesla is more like Essential.


Not in terms of name recognition and public interest, which is what I was talking about


Probably because their emergency braking actually works. Source: my VW saved me already. It was able to spot a stopped vehicle in my lane.


Because one death is a tragedy, but a thousand is a statistic. There are literally tens of thousands of deaths on the road and millions of accidents per year, but unless a Tesla or Waymo is involved, no one hears about them.


Betting on human beings remaining focused for a long time while being absolutely passive is bound to create issues though; we suck at that. It doesn't help that it's even more difficult to react when the autopilot messes up, because your actual reaction time is not measured from the moment you see the object but from the moment you realize that the autopilot itself has not seen the object.


Help me understand. How does it lower the strain of driving if you have to remain 100% as attentive anyway? Or are you actually less than 100% attentive?


The same way traditional cruise control reduces the strain of driving by allowing you to not constantly exert force on the accelerator pedal using your leg/foot, Autopilot reduces the strain of driving by allowing you to not constantly exert force on the steering wheel using your arms/hands.


>allowing you to not constantly exert force on the steering wheel using your arms/hands.

This is not the benefit of autopilot, because this task requires virtually no effort to begin with and you still need your hands on the wheel anyway. The real strain in driving is the mental strain, and that is why people turn it on.

The reality is that Tesla is using humans to iron out the bugs in their fully autonomous vehicles, and legal clauses to get them off the hook for the fact that they are killing people.

I have a car with all of those new safety features like warnings about cars to your side as you change lanes, warnings about cars driving past as you reverse out of a car park and so on. The reality is that you stop performing the roles that the car starts performing. You stop looking as much when you reverse out of a car park because you trust the beeping noise will occur if a car is coming along, and so on.

It is just human nature to start relying on something that works every time you use it. I don't really feel sorry for anyone driving a tesla though. They are the ones choosing to be Tesla's bug testers.


>I don't really feel sorry for anyone driving a tesla though. They are the ones choosing to be Tesla's bug testers.

They're only doing that if it's an informed choice. I think Tesla's marketing is to blame, and think it's not an informed choice, so I do feel bad for the people getting in accidents (and the few dying) because of it.


A lot of users seem to know what they are getting into though.

https://youtu.be/QM5z0dW60KE?t=279


> The real strain in driving is the mental strain, and that is why people turn it on.

Exactly. If people were actually complying with autopilot’s TOS it wouldn’t actually be more relaxing. But then it wouldn’t be a compelling feature. You’d just be volunteering to help with fleet learning, which is a noble goal, but probably not something people would do en masse or pay thousands more for.

Tesla is implying it’s a lower mental load to get more people to use it.. while having TOS that holds you responsible if you pay less attention.


If your car is outfitted with all of those same features, why is Tesla different? This exact news and discussion has popped up enough times among "ordinary folk" that nobody is really fooled (and I don't agree that was Tesla's motive) about the capabilities. If they are, I'm chalking it up to negligence. Tesla drivers are given ample warning from Tesla themselves, their cars, the news, and our social circles.


It is just a difference of magnitude. If my car is wrong, I will get into a minor accident. If the Tesla is wrong it will drive you into a rail at 100km/h.

It seems impossible to go from manual driving to fully autonomous without killing a lot of humans. The companies making these technologies know it. Everybody who buys one of these cars should be aware of it. I would never buy one. I don't even want the end product. I want control over my life. In a manual car, there is a bit of control because you can drive safely and so on. You are still susceptible to other drivers though. In an autonomous car, it can kill me on an empty road.


It might be helpful to identify a third kind of car, which is a manual car with automated collision avoidance. Oftentimes such features are lumped in with autonomous driving, which leads to claims about autonomous driving safety. But they can be broken out and are where the bulk of the safety benefits reside.


If the rate of accidents per mile is lower, then a lot of lives will be saved. It's the same as with trains and planes: a lot of people can be killed in a single accident involving an autopilot, but trains and planes are much safer than cars on _average_.
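Back-of-the-envelope version of that argument (the human rate and mileage are rough US figures; the assisted rate is purely hypothetical):

  miles_per_year = 3.2e12           # rough US vehicle-miles per year
  human_rate     = 1.2 / 1e8        # ~1.2 deaths per 100M miles
  assisted_rate  = 0.9 / 1e8        # hypothetical lower rate
  lives_saved = (human_rate - assisted_rate) * miles_per_year
  print(int(lives_saved))           # ~9600 per year

Even a modest per-mile improvement adds up to thousands of lives, which is the whole point of the average.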


"If your car is outfitted with all of those same features, why is Tesla different?"

Maybe a) Tesla's tech isn't as good and b) Tesla's customers are more likely to be tech nerds / Tesla fans and so more likely to overestimate the capabilities of the car.


I find the benefit of cruise control is that in normal situations in country driving (over 90% of my driving) I can now completely ignore the speedometer, and consequently the entire dashboard, and don’t need to worry about adjusting my foot on the accelerator pedal to match the speed limit. Cruise control thus allowed me to focus more fully on the road, and increased my perceived safety.


The only thing I've ever found Cruise Control useful for is avoiding a traffic ticket. If I see a police officer patrolling it's a tool I can use to go at slightly under the posted speed and only worry about traffic around me.

When using the cruise control I find I actually have to pay /more/ attention to the vehicle actively, as I need to be aware of when to cut off an external (otherwise) uncontrolled system. If I'm controlling acceleration it's handled by more unconscious portions of my mind.

I'm also a much more alert driver and far more aware of my present state of tiredness when I have to maintain interaction with the vehicle directly, including minute adjustments in speed with the accelerator.


There is a huge difference between Adaptive Cruise Control and regular Cruise Control. I agree completely with you on Cruise Control; however, with adaptive, the stress level is so low there is no comparison. I use it pretty much every single day on my commute.


Indeed, you can concentrate on the overall traffic whilst being aware of exactly what the car is doing through the feedback in the wheel. I think of it as overseeing the work.

Also, there are long stretches of driving with no or few cars around, where the car doesn't do much interaction with other traffic, and stop-and-go queues where the speed is very slow. In both those situations not much happens where you would need to intervene. It is less effort to drive.


How is “concentrating on the overall traffic” substantially different than concentrating during regular driving, unless it is less focused or attentive somehow? Aren’t you taking your attention away to a degree from the nitty gritty of tracking the obstacles in front of you and staying in lane?

Citing the “long stretches with no cars” case really worries me that you are in fact paying less attention. It is not only traffic that you need to worry about hitting.


In my experience the answer to the second question is: no. Part of the brain, which would otherwise be occupied with the details of physically driving can now supervise instead. You should try it.


But you don’t “constantly” exert force on the steering wheel. You only exert force when turning. Otherwise you are just resting your hands on it, same as autopilot. It’s not like the accelerator pedal at all.

Again the only significant reduction would seem to be in the mental load of paying attention.


> But you don’t “constantly” exert force on the steering wheel. You only exert force when turning. Otherwise you are just resting your hands on it

Even if you're driving on a perfectly straight road, you're not just resting your hands on the steering wheel, you're constantly adjusting it to keep the car travelling straight. Most roads aren't even that straight to begin with, so even more constant adjustments are required.


Ok, so micro-adjustments are automated. But those don’t require much force and it’s certainly not constant. The only constant load is the attention required. I.e. paying attention to straying from the lane.

Thus if you’re feeling a big relief, it’s because you’re paying less attention to the road. No?


Roads naturally go through turns and bends. You need to manually turn the wheel through these bends. You also have lane switching to avoid a slower driver, which tesla does automatically.


I don't have a Tesla, but I have a car with ACC.

When I activate ACC I do not have to pay 100% attention to the distance between my car and the one in front of me and the speed. The car does all the work for me. I only have to keep steering and make sure that if the car in front of me stops in an emergency I start braking myself, because in certain cases it is too late for ACC to notice it.

In traffic jams (< 30 km/h) ACC takes away 90% of the work.

The difference is that my car got marketed with ACC as a driver assistance feature and not as a "self driving autopilot."


I am not talking about ACC. I’m talking about autosteering. But it’s interesting you concede the primary benefit is paying less attention. If you did that in a Tesla you would be violating the TOS and be responsible for resulting accidents. Ergo, what is the TOS-compatible benefit?


I would rather say, it allows me to pay more attention to other things, like the traffic around me, or doing lane changes etc.

But that TOS thing is kind of weird. What is "not paying attention"? Looking at a sign while the car in front of you slams on the brakes? Technically you are not paying attention to the car at that moment and are breaking the TOS.


So if I understand correctly, you’re saying it allows you to pay less attention to the car in front of you and more to the traffic around you.

Yeah, you’re absolutely not supposed to do that (in a Tesla at least). You’re supposed to pay 100% as much attention to stuff you might hit as without autopilot. If you hit the car in front of you because autopilot motivated you to look more at other cars around you, Tesla will say you misused the feature.

Ergo, what is the benefit? I’m only hearing benefits that result from violating the TOS.


I'm driving a 2018 VW Tiguan Allspace with ACC, Lane Assist, Side Assist and everything. If that's what Autopilot is, I'm scared. In part I ordered those features because of assessments like yours that this is pretty much Autopilot. It isn't.

My car can in theory follow the lane, but only in pretty much ideal conditions, and even then it sometimes loses the lane and goes straight ahead, even in turns, possibly plowing into oncoming traffic or off the road. If it follows the lane, it does so very loosely, sometimes actually driving on or a little over the lane marking in a turn, screaming at me to take control. This is the best-case scenario.

I can't imagine Autopilot being this bad, looking at YouTube.

Don't get me wrong, I love my car and those systems for what they are, but they couldn't be called Autopilot in anyone's wildest dreams.


The systems in VWs are like that on purpose. Their premium brands do behave better, yet they are still not on the level of Tesla's system (I refuse to call it autopilot). Still, no one knows if what Tesla does is indeed that much better, or if it just has all the settings set to eleven.

Digression: I've seen a direct comparison of all the popular semi-autonomous systems somewhere, but can't find it now. So, take it with a pinch of salt, but from what I recall the only worthy rival to Tesla's system is GM's Super Cruise, and BMW has a somewhat competent lane assist system.


I think they intentionally make lane assist kind of sloppy to keep people from using it as self-driving. I'm sure they could tighten it up and allow for more variance in what it thinks are lane markers if they wanted to.


I am pretty sure VW is advertising them as driver assists and not as an autopilot.


Yes, VW does so, but multiple people in this thread say that Tesla's Autopilot is ACC + Lane Assist. That's what I'm challenging.


It's neither a driver assist nor an autonomous car; it's somewhere in between. Which makes it even worse, because the driver is not aware of the limitations until it's too late. Ramming into a stationary red firetruck TWICE is not an edge case.


People are buying, using, and in some cases dying because of the impression they gained from the clearly pitched marketing.


Let's change the marketing, then people won't die from car accidents anymore.


Well they won't die from Tesla car accidents anymore. That's something, right?


If Tesla calls it Autopilot mode, you can hardly blame everyone else doing so.


Just because Tesla calls it Autopilot doesn't absolve drivers of responsibility in my mind. Especially since Tesla explicitly says drivers should still be attentive. That said, I agree it's poor marketing.


Autopilot is not a word made up by marketing though. It’s already defined: https://en.oxforddictionaries.com/definition/automatic_pilot

“A device for keeping an aircraft on a set course without the intervention of the pilot.”

If you enable an autopilot, intervention by the pilot is not necessary by definition


> If you enable an autopilot, intervention by the pilot is not necessary by definition

Only if you're happy for the aircraft to continue travelling in the same set course.

I guess by the same virtue, driver intervention is also not required if you're willing to let the vehicle travel in the same direction and disregard the firetruck ahead...

Tesla's Autopilot is perfectly capable of maintaining direction/speed of travel, and it even follows the curves of the road most of the time which is presumably more than an aircraft autopilot system can do. It is the driver who is not willing to disregard the obstacles ahead, the same way presumably aircraft pilots are not willing to disregard obstacles in their course and will take over control of the aircraft if there is one.

Nothing wrong with our tendency to avoid obstacles in order to not die, of course, but it's not the autopilot's job since the definition of autopilot is to maintain a set course, as you quoted. You may be confusing it with full self-driving, which doesn't exist yet.


Aircraft autopilot will happily fly you into a mountain or other aircraft.


While true, there are a lot of redundant auxiliary warning systems in a plane that work independently of the autopilot.

So if the autopilot tried to fly you into a mountain, the human pilots would have to willingly ignore all the warnings thrown at them (it happened recently, and was classified as human error).

What is terrifying with Tesla's Autopilot is that there are seemingly not enough auxiliary warning systems. With only two of them (hands on wheel or stop, static object detection) you could have avoided almost all Tesla crashes.

They put beta vehicles in the hands of customers; that's a choice, and now they have to live with it.

Edit: first sentence was misleading


I'm glad someone mentioned this. It's frustrating that the general public doesn't understand what the word 'autopilot' means... Otto the automatic pilot from the movie Airplane! isn't what an automatic pilot is.

However, since it is commonly misrepresented, they should change the name of Tesla's system.


General public has little idea of what a plane autopilot can or cannot do. Somewhat ironically, the Tesla Autopilot is quite close to a real plane autopilot in that it’s a rather dumb system that cannot adjust to unexpected events. But Joe R. Driver reads Tesla marketing material and doesn’t realize that, dictionary definition notwithstanding.


“By definition“ you are not driving a car.


It seems like deceptive marketing to me. If I sold shoes with proprietary NailStopper soles, with a disclaimer the soles don't actually stop nails and it's up to the user to make sure they don't step on any, would that be cool?


I'm not saying the marketing is cool. Although your example is fundamentally different since in the "Autopilot" case, you're not just dealing with your own safety, but the safety of others. (Presumably you're not putting others in danger by wearing NailStopper shoes and continuing to step on nails.)


Fair enough. I guess this is a case of something lost in translation. To me, bad marketing is when the marketing doesn't help sell the product. Deceptive marketing misleads the consumer as to what they're being enticed to purchase.


So, now that we have pinpointed who is responsible, can we discuss which designs lead to many people making deadly errors and which designs lead to few people making deadly errors? And how much responsibility does the designer bear when choosing between the former and the latter?


Tesla's marketing has directly contributed to a number of deaths now.


It's also directly prevented a number of deaths. The same is true of the consumption of water. However, I can count the number of deaths caused by Tesla on one hand, but I cannot count the number of deaths caused by the consumption of water on one hand [1][2].

[1]: https://en.wikipedia.org/wiki/List_of_autonomous_car_fatalit...

[2]: https://www.scientificamerican.com/article/strange-but-true-...

If you really care about road safety? Encourage the use of Uber among the intoxicated. It saves more than a handful of lives in this world in which more than a hundred thousand people die each day.


Note I'm specifically talking about their dishonest marketing, not their technology.

Their technology, when used correctly, is fine, but it's not even best in class, never mind true autopilot, and calling it Autopilot has evidently inspired false confidence in its capabilities.

The marketing message is dangerous to people who can buy a Tesla but are not technically astute enough to understand the limits of the product, and it's not their fault - Tesla allows them to think that their technology is better and safer than others on the market.


Noted, I see the distinction you're pointing out. The marketing is different from the technology itself.


People frequently argue that "X hasn't caused very many deaths compared to Y" and thus the dangers of X are insignificant, while ignoring that people are reacting in a frightened way to X because it might scale such that it soon becomes significant.


What I was hoping to show was that there are things where the property of causing deaths doesn't mean that the thing itself was a bad idea. Water is an example of that. Going without water is fatal. Getting water can be fatal. It would be very easy to write thousands of news articles about water killing people. But it's still good that people can drink water. The same is true of these driver-assistance technologies. Without them, people are already crashing and dying all the time. They do this at a greater rate than the rate at which people crash and die in vehicles that have these driver-assistance technologies. So on the whole, it's actually better to have people in cars with these driver-assisting technologies, not worse.


RedBull got in some trouble for claiming that 'RedBull gives you wings'. Tesla should be in for more trouble than RedBull, considering a number of people have died as a result of their false claims. https://www.telegraph.co.uk/news/worldnews/northamerica/usa/...


I am now starting to fear being in front of a Tesla on the freeway, or on the streets.

Why? Because the idiotic driver might not even be paying attention to the road, and is zoning out and looking at the clouds, or his phone, instead of focusing on the car in front of him or the surrounding traffic.

These Tesla crashes are not only a danger to the Tesla occupants; they are even more dangerous to the victims the Tesla hits, who can suffer whiplash or worse.


The trouble is, I see a lot of drivers that aren't paying attention to the road, who are dazing off, who are looking at their phones/clouds whatever. The majority of them aren't driving Teslas. At least with a Tesla I know there's a backup system (the 'autopilot').

I'm not saying the Autopilot is 100% faultless/reliable. I am sure however that humans have a much higher chance of handling a situation badly than the autopilot, simply because they're human and humans do stupid shit.


Let's take a scenario. This seems to happen all the time on the freeway.

You're cruising down the freeway at 75 MPH. All of a sudden, traffic comes to a screeching halt.

There are 3 cars in this scenario. The first car is the one that slams on his brakes, and comes to a stop.

The second car cannot stop in time; he swerves out of the way and safely gets into the adjacent lane. Close call, but he drives away safely.

The third car is the Tesla, with the driver on Autopilot and browsing Facebook on his iPhone. So, he is inattentive.

The whole sequence plays out in less than 5 seconds.

In this scenario, the third car, the Tesla, will not recognize the situation that the second car swerved out of the way. So the Tesla will collide into the first car at high speeds. The velocity of the impact kills all occupants inside the first car, and the third car, the Tesla. So, if both cars are full of occupants, then maybe up to 10 people are now dead.

The problem is that the Tesla does not have situational awareness. It cannot read the current road condition. It cannot see through the tinted windows of the 2nd car, to see that the first car has slammed on his brakes.

So, the accident severity with the Tesla is much greater than in a scenario where a human was driving the third car.

In this scenario, I think the human might be able to see the situation and attempt to avoid it, or at least mitigate its severity. Like, he would slam on his brakes, or steer out of the way, or something. He might still collide with the first car, but probably at a much lower velocity, such that all occupants of both cars can still survive.

It just seems that the severity of accidents with current robotic cars is greater than when a human is the driver.


A quick google search puts the average number of deaths due to car crashes at 3,287 per day [1]. Another one gives me a list of four deaths attributed to autonomous vehicles [2]. Not four per day. Just four.

Ever.

Memory leads me to recall statistical reports indicating that vehicles with assistive features like Autopilot have a better-than-traditional safety record [3].

I especially disagree with your thought experiment, because I've watched YouTube videos of situations in which Tesla's autopilot avoided accidents both through stopping and through swerving out of the way [4].

I don't agree with your claim that robotic cars tend to have more severe accidents than traditional cars.

[1]: https://www.google.com/search?q=average+number+of+deaths+via...

[2]: https://en.wikipedia.org/wiki/List_of_autonomous_car_fatalit...

[3]: http://www.iihs.org/iihs/news/desktopnews/stay-within-the-li...

[4]: https://youtu.be/FrJ2uPRRtz0?t=46

The short-seller thesis for Tesla is that since the company is valued based on its story, not its underlying fundamentals, it follows that its price will be volatile in response to negative press (accidents, missed projections, etc.). There is a clear path from this observation to making money via negative press.

If you're interested in hearing more about how the world is a lot better than you thought it was, but the truth of our situation is distorted by competing forces, here are some cool TED talks [5][6].

[5]: https://www.youtube.com/watch?v=yCm9Ng0bbEQ

[6]: https://www.ted.com/talks/hans_rosling_shows_the_best_stats_...


Now let's add a fourth car, a Toyota without TACC, whose driver is also browsing Facebook on his phone. This driver also does not recognize the situation and will collide into the first car at a high speed. Is this worse than the third car? No, probably not. In short, the assumption that a human driver without Autopilot is paying attention and will swerve out of the way is not a valid one. People drive distracted in traditional cars all the time.

BTW, from what I have heard, Teslas might actually be better at seeing through other cars than human drivers are, since radar can bounce in ways visible light can't.


A Toyota driver probably wouldn't put that much faith in a mere Toyota. Tesla drivers who are high on Elon's farts put that much faith in their cars.

"BTW, from what I have heard, Teslas might actually be better at seeing through other cars than human drivers are, since radar can bounce in ways visible light can't."

Or maybe not, since a Tesla couldn't even see a stopped fire truck.


People who drive normal cars are on their phones while driving all the time. There's no "probably" about it — I know lots of people who have had their cars hit by inattentive drivers. I myself got rear-ended by a guy who was honest enough to straight-up admit that he didn't see me stop because he was messaging on his phone.

Also, I'm not sure what connection you see between recognizing a stationary fire truck and seeing two cars ahead with radar. They seem like pretty different tasks.


I'm sorry, but I think you're wrong. You make it sound as if all Teslas are suddenly as dumb as their phone-wielding drivers.

The Tesla recognizes a stopped car on the freeway much, much earlier than a human driver. The forward collision warning system is always attentive, never blinks. It won't panic slam the brakes, but will brake with exactly the required force.

You really are overestimating the ability of human drivers here. If human drivers are so excellent in reading road conditions and anticipating to traffic, why do we have so many accidents?

The severity of accidents with robotic cars is an order of magnitude lower, simply because they always pay attention. It's just that fatalities with autonomous vehicles are newsworthy, whereas fatalities with human drivers aren't. It's a classic example of the Baader-Meinhof phenomenon. Just because you read about it more doesn't mean it actually happens more.

I'm not saying that Teslas are 100% safe or that Tesla shouldn't investigate why these accidents happened. I'm also not saying that there aren't edge cases where a human driver could have prevented an accident where the Tesla would fail. But statistically speaking, autonomous vehicles with all their sensors are a lot safer than human drivers.


Look, don't get me wrong. I am all for autonomous cars.

In fact, I dislike some human drivers so much that I hope everyone is eventually forced to drive an autonomous car. You know, the idiot drivers that drive erratically, are distracted, swerve through traffic, tailgate, speed well in excess of average traffic speeds, etc. These people should get their licenses revoked.

Driving is a burden for these people, and they are better off taking the subway. Or an Uber. But they still need to get to their destination for work, for play, or for their errands.

What I am stating is that the current generation of robotic cars is inherently unsafe.

They need more situational awareness, such as the ability to see all traffic on the road. Including the car in front of the car in front of it. This can possibly be handled by the Vehicle-to-vehicle communication standard that the DOT is proposing.

The other scenario might be to make it illegal for all non-robotic cars to drive on the freeways and highways. So, on all roads that allow you to travel at high velocity, you must be in a robotic car.

But this presents another problem. If you take away people's ability to drive, then you also take away their ability to get experience driving at high speeds. So, in the future, you end up raising a whole new generation of people that don't know how to drive properly at high speeds.

Basically, until we get to a level of technology where these cars are fully safe, the edge cases where accidents and collisions happen may end up being more critical with a robotic car than with a human driver.

The danger lies in the fact that these robotic cars will lull the human driver into a sense of complacency, where they think that 99.90% of scenarios will keep them safe. And maybe they will. But what about the 0.10% scenario where they won't? And not only is it dangerous for them, it is also even more dangerous for someone else.

And when the robotic car gets into a situation where it needs human assistance, it is an extreme edge case where no human can possibly react safely in time. The human driver would never have gotten themselves into that situation in the first place.

But even with all that, I am still rooting for robotic cars to succeed, and to safely drive us all one day. And to safely handle that 0.10% scenario.


Totally agree with the mandatory robotic cars and that we should make them safer, thanks for the clarification!


I don’t agree. The machines and humans make different kinds of stupid mistakes.


In the Uber self-driving accident in Arizona, the Uber supposedly sped up as well, or was in a speed-up mode, after coming over a bridge with a slower speed limit, going from 25 to 40 mph on Mill Ave. I wonder if there is an issue with some of the LIDAR software that coincides with speeding up, or a state issue in that condition, where it is not able to identify objects correctly.

The Uber classified the pedestrian pushing the bike across the street as an 'unknown object'; the driver was supposed to take over but did not, and the Uber kept accelerating. [1][2]

[1] https://www.theverge.com/2018/3/19/17140936/uber-self-drivin...

[2] https://www.youtube.com/watch?v=iNDtfHSD_M4


You're making it sound as if Tesla invented the phenomenon of distracted drivers causing traffic accidents.


No, but they invented the concept of people thinking they are in a self-driving car when they are not, and running straight into static objects.


I don't get it.

"Autopilot" aka adaptive cruise control is going to continue failing since drivers think it's more than that.

More people are going to die in spectacular, headlining fashion.

Imagine if that was a schoolbus instead of a firetruck.

Even Tesla's "non-autopilot" crashes are front page news because it's such a story.

Every front page story whittles away a little at the magical Tesla brand.

A brand, which in the midst of production issues, capitalization issues, and leadership issues is their major asset that could carry them to the other side.

Turn the thing off if hands aren't on the wheel, end of story.


> Turn the thing off if hands aren't on the wheel, end of story.

I assume you mean that Tesla should do this automatically.


> "Autopilot" aka adaptive cruise control is going to continue failing since drivers think it's more than that.

> Turn the thing off if hands aren't on the wheel, end of story.

Spot on. IMO calling this "Autopilot" was a big marketing misstep-- it should've been called "Advanced cruise control" or something along those lines.

"Autopilot" yells "this car can drive itself", which is great marketing, but if your consumers have that idea no matter how much you warn them they'll assume it's probably not that dangerous to take their hands off the wheel and leave the car unchecked for a minute.


> Turn the thing off if hands aren't on the wheel, end of story.

To make things even worse, there have been several instances (mentioned on Reddit and other forums) where the Tesla doesn't register you having your hands on the wheel. You actually have to nudge the wheel a bit. This is not universal and can differ from car to car. So just putting your hands on the wheel might not be good enough.


> Turn the thing off if hands aren't on the wheel, end of story.

Search "Tesla Orange Trick"


It irks me that not only is this feature called Autopilot, but it doesn't even come with a "beta" label. Gmail was in beta for like 5 years.

You go to Tesla's website and it greets you with (as of the time this comment was written):

"Full Self-Driving Hardware on All Cars"

"All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

Which gives the impression this car can drive better than one can, and gives little indication of the fact that the _software_ is in active development, even if the hardware for "full self-driving" is already there.


As an Autopilot regular, I hate reading stories such as this. Just last night I had a 150-mile drive in non-perfect conditions (light rain and dusk) and had Autopilot (v1 with a single camera) engaged for the vast majority of it. It is probably my favourite feature of my Tesla. It is significantly better than the Audi lane assist features I have used and genuinely reduces strain on long journeys. I now undertake journeys without hesitation (sometimes looking forward to them, coupled with a supercharger stop to recharge myself) when before I'd expect to arrive at my destination stressed and exhausted.

However, I will admit that to use Autopilot safely and effectively you MUST understand the system's inputs, what its behaviour is based on those inputs (and driver-controlled variables), and what its limitations are. I am a software developer by trade, and whilst I couldn't hope to replicate the Autopilot software stack, I probably could write a detailed pseudocode representation of the system's behaviour (without factoring in sensor failure etc).


How much do they detail those inputs, variables, and limitations vs you making assumptions about them?


Wasn't v1 made by Mobileye, and doesn't Tesla's in-house v2 have a much worse rep?


It seems fairly obvious to me that an imperfect autopilot is worse than no autopilot at all, since it trains drivers to "relax" and let the computer drive when some non-trivial percentage of the time the autopilot will get things wrong.

It’s unfortunate that there’s no real way for there to be a perfect computer driver, and all the real world training data these things are collecting is needed to ever get close.

But if these systems keep making mistakes like this that seem “stupid” to humans, I suspect these things will be banned before long.


I agree. Moreover, it's obvious to me that an imperfect human pilot is worse than no human pilot at all.


> The driver of the vehicle, Heather Lommatzsch, 29, told police she thought the vehicle’s automatic emergency braking system would detect traffic and stop before the car hit another vehicle.

> Police say car data show Lommatzsch did not touch the steering wheel for 80 seconds before the crash. She told police she was looking at her phone at the time and comparing different routes to her destination.

This is not news. This is someone not following the explicit instructions of the manufacturer to maintain control and awareness while driving. They chose to ignore those instructions and it resulted in a crash.

Replace "Telsa" with "Audi/VW/Toyota/Holden/Ford/GM using adaptive cruise control", and this doesn't make front page news.

aresant's comment below is spot on. "Turn the thing off if hands aren't on the wheel, end of story."


> did not touch the steering wheel for 80 seconds before the crash

From wikipedia [0]:

> Above 45 mph free hands are allowed for three minutes if following another vehicle or one minute without following a car

Wow, I can't believe this is permissible. Ms Lommatzsch was actually within the rules of what Tesla allows as far as touching the wheel.

> This is someone not following the explicit instructions of the manufacturer

Drivers are stupid, and Tesla knows this from the videos on the internet. Tesla should be providing failsafes, especially with such new technology.

[0]: https://en.wikipedia.org/wiki/Tesla_Autopilot#Alerts
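Spelling out the quoted rule as logic (my paraphrase of the Wikipedia description, not Tesla's actual code):

  def hands_off_alert(speed_mph, following_a_car, seconds_hands_off):
      if speed_mph > 45:
          limit = 180 if following_a_car else 60  # 3 min vs 1 min
          return seconds_hands_off > limit
      return False  # the quoted rule doesn't cover lower speeds

  # ~80 s hands-off would not trigger an alert if she was following another car.
  print(hands_off_alert(65, True, 80))  # False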


Audi/VW/Toyota/Holden/Ford/GM didn't name their feature "autopilot" though.


And don't advertise on tesla.com/autopilot that the feature is capable of driving itself.


In bold letters on the same page:

> Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction. It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval.

Are you suggesting that it's not people's fault they can't get to the bottom of the marketing page before going out, buying a Tesla, ignoring obvious UI warnings, and driving it without knowing that it currently doesn't fully drive itself?


Yes, it's not the people's fault, when there's a large video right there showing the car driving itself. The text makes it seem like it's just a "red tape" thing where autopilot is there like in the video but it just needs approval in your jurisdiction. If this marketing came from any other car company, you probably wouldn't be defending it.


Dissect this sentence and tell me how I can deduce that this feature does not exist yet - that the car is unable to do that. That's not dependent on "regulatory approval" or "software validation" - it's dependent on the fact that the software doesn't exist yet.

Also, for argument's sake, let's assume someone reads that sentence and thinks "oh, maybe this isn't available yet" - you get to the next sentence, and guess what:

> Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.

Does this sound like the feature is nonexistent? To me, it sounds like it exists, and in some locations some pesky regulators just don't like it.


That would only suggest to me, not knowing any better, that the car can in fact fully drive itself but might not be allowed to yet in my jurisdiction, so only a legal barrier not a tech one.


Tesla repeatedly claimed that the car drives itself in marketing.

https://www.dailydot.com/debug/tesla-self-driving-car-video/


The manufacturer should not be giving such unrealistic instructions in the first place. Everyone damn well knows that people will treat this like full autopilot. And everyone damn well knows that if you give people the chance to be a bit lazy and stop paying attention to the road, they will! Tesla knows what human behaviour is like.

We've been saying this from the beginning. This half-baked "autopilot" will only end in blood.


"Replace "Telsa" with "Audi/VW/Toyota/Holden/Ford/GM using adaptive cruise control", and this doesn't make front page news."

Or maybe it doesn't happen because people have suitably realistic expectations of Audi/VW/Toyota/Holden/Ford/GM products.


Why doesn't Tesla have an eye tracking camera that ensures the driver is watching the road? It would cost hundreds of dollars at most. Basic eye tracking is a solved problem and would ensure drivers are actually paying attention.


It's definitely not a solved problem in the real world. Eye tracking in the lab works great but in the real world there are lots of challenges. Low light levels at night, drivers wearing sunglasses, drivers squinting etc.


That's solvable: if it can't see your eyes, then you can't use Autopilot. That should still let the majority of people use Autopilot the majority of the time.


I want to wear sunglasses.


Further than that, if you're driving west at sundown it's much safer to be wearing sunglasses than squinting.


Then you can't use Autopilot. Where's the confusion.


Making sure you're at least facing in the correct direction (chin up, facing forward) would help even if sunglasses prevented locating the actual gaze target.

If you still want to use your phone, it'll be easier for the cops to catch you doing it.


Your desire to wear sunglasses is a lower priority than the health of the people you share the road with.


So being blinded with autopilot on is safer than being able to see, but allowing the possibility of inattentiveness? That's ridiculous.


No. Your assumption that I'm saying that is, frankly, ridiculous. If you're being blinded, put the sunglasses on and turn Autopilot off. Of course, I doubt that eye-tracking works particularly well if you're squinting and averting your eyes anyway: the act of being attentive precludes being blinded in itself.


Why would the attentiveness monitor be tied to Autopilot? Some modern cars without autopilot already have attentiveness monitors. It's ridiculous to require people not to wear sunglasses just for the sake of attentiveness monitors. Anybody driving east to west in the afternoon or west to east in the morning wouldn't be able to use it at all.


GM's Super Cruise has eye tracking [0]

[0]: http://www.cadillac.com/world-of-cadillac/innovation/super-c...


GM's Super Cruise uses an infrared camera so it can effectively see through most sunglasses. Some glasses do block infrared light too so the Super Cruise system doesn't work in all cases [1].

[1] https://eu.detroitnews.com/story/opinion/columnists/henry-pa...


I believe Mercedes' lane assist does this via the steering wheel. It tests whether you are actively driving. As soon as you take your hands off or don't move enough, the system beeps after a few seconds to tell you to put your hands back.


Tesla does that as well, but it's a poor proxy for determining if the user is paying attention. It would be much better to check if the driver is actually watching the road. The user that crashed into this firetruck was not watching the road, which the Tesla could have easily recognized through eye tracking.


Tesla does it after three minutes. I can't think of any good reason an attentive driver would need their hands free for that long. It only takes a person thinking they can get away with checking their phone, or eating food, or looking around the footwells to result in deaths and, crucially, that person doesn't have to be the fatality.


It is more complex than that. It analyses your corrections and compares them to its own data about the road. https://www.youtube.com/watch?v=A66zgJ4Oj8o


The Model 3 does have a driver-facing camera, but would require a software update to enable eye tracking, assuming the resolution and angle are good enough.


Someone asked this and Musk replied that it wasn’t effective. https://twitter.com/elonmusk/status/996102919811350528?s=21


I call bullshit, eye tracking works fine. If more information is required to determine if a driver is paying attention there are many other factors that can be added. For example Mercedes-Benz Attention Assist works fine and doesn't use eye tracking.


> "Eyetracking rejected for being ineffective, not for cost."

Simply saying that it is ineffective is not an answer at all. Anyone with access to YouTube can see eye trackers working remarkably well.

Maybe he simply wasn't willing to disable Autopilot for users that could not be eye tracked accurately (glasses, etc), but that seems like a very bad reason. They should start with the ideal cases and add support for wearing glasses or low light over time.


It's not as complicated a problem. Turning off the feature when hands aren't at the wheel, or not calling it "Autopilot" so people don't assume it's smarter than it is, IMO, should be enough.


Should we be allowed to call autopilot autopilot?


Hm. Anyone here willing to talk about the fact that the driver didn't have her hands on the wheel for 80 seconds before the crash and got away with "just" a broken foot?


"got away with "just" a broken foot?"

She's fortunate she didn't plow into the back of a church van full of kids instead of a fire truck. The kids would probably be dead thanks to the high speed impact of a 4,900 pound car.


I've kept my hands off the wheel for much longer when driving cross-country. I don't see how they're related, though.


I think a lot of the problem could be ironed out by a solid mathematical grasp of it, instead of letting customers become guinea pigs or running simulations and then deducing a blanket solution that looks okay in the average cases. A room with a bunch of mathematicians will surely state the problem, its limits, and its solution in an infinitely :) more solid way. I have seen this happen with software countless times: spend the money on a math PhD and save endless testing time with half-guessed solutions where limit cases are ignored... etc.


To be fair, the fact that the car increased speed just before the crash is entirely irrelevant here. It is not as if the car was some conscious agent being willfully reckless.


I'm an early adopter but I have no idea why anyone would early adopt self-driving or autopilot features, much less pay for it.


Another one of those dreaded "autopilot" malfunction reports. When are people going to realize that an 'autopilot' system that is required to be supervised at all times is not something any non-superhuman driver is able to override in time?


When Tesla stops advertising it as such? Maybe also when the system aggressively disables itself when not under human supervision (hands on wheel)


The Darwin Awards are going to become a big Tesla advertisement.


The use and marketing of the word "Autopilot" will go down in history as one of Tesla's biggest mistakes.


Huh, I'm a little concerned with how much information they were able to collect and thereby possibly get her to admit she was looking at her phone instead of keeping her eyes on the road and hands on the wheel.

I hope she's got great insurance because that guy who got whiplash is probably going to get a bit of money.



