• At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.
• At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.
• At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.
• At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.
This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes the rear ends of cars using a black-and-white camera and Mobileye vision software (at least in early models). It also recognizes lane lines and tries to center between them. It has a low-resolution radar system that ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away to the side, which are not relevant here.
The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.[1] If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because it doesn't detect stationary obstacles. Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.
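To make that failure mode concrete, here is a deliberately simplified toy sketch of a controller that centers between whatever pair of lines it is currently tracking and only brakes for radar returns it classifies as moving vehicles. This is not Tesla's actual code (which is not public); every name, unit conversion, and threshold is invented purely for illustration.

```python
# Toy illustration of the failure mode described above: a lane-centering
# controller that only brakes for *moving* radar targets. Not real Autopilot
# code; every name and threshold here is invented for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RadarTrack:
    range_m: float      # distance to the object ahead
    speed_mps: float    # the object's own speed over the ground


def steering(left_line_offset_m: float, right_line_offset_m: float) -> float:
    """Steer toward the midpoint of whatever pair of lines is currently tracked.
    If that pair is the two diverging edges of a gore area, the midpoint points
    straight at the crash attenuator."""
    return 0.5 * (left_line_offset_m + right_line_offset_m)


def longitudinal(lead: Optional[RadarTrack], set_speed_mps: float,
                 speed_mps: float) -> str:
    # Stationary radar returns are discarded as clutter (signs, bridges,
    # barriers), so a stopped obstacle never becomes a braking target.
    if lead is None or lead.speed_mps < 0.5:
        return "accelerate to set speed" if speed_mps < set_speed_mps else "hold"
    return "follow lead car"


# The crash scenario: the lead car is lost a few seconds out, the attenuator is
# a stationary return, and the car resumes set speed while "centering".
print(longitudinal(None, set_speed_mps=75 * 0.447, speed_mps=62 * 0.447))
print(longitudinal(RadarTrack(range_m=30.0, speed_mps=0.0),
                   set_speed_mps=75 * 0.447, speed_mps=70 * 0.447))
```

Both calls come back "accelerate to set speed": losing the lead car and approaching a stopped barrier look identical to this kind of logic.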
That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.
This behavior is alien to human expectations. Humans intuitively expect an anti-collision system to avoid collisions with obstacles. This system does not do that. It only avoids rear-end collisions with other cars. The normal vehicle behavior of slowing down when it approaches the rear of another car trains users to expect that it will do that consistently. But it doesn't really work that way. Cars are special to the vision system.
How did the vehicle get into the gore area? We can only speculate at this point. The paint on the right edge of the gore marking, as seen in Google Maps, is worn near the point of the gore. That may have led the vehicle to track on the left edge of the gore marking, instead of the right. Then it would start centering normally on the wide gore area as if a lane. I expect that the NTSB will have more to say about that later. They may re-drive that area in another similarly equipped Tesla, or run tests on a track.
One more thing to note: anecdotal evidence indicates that Tesla cars did not attempt to center within a lane prior to an OTA update, after which multiple cars were caught on video exhibiting this "centering" action into gore areas (and thus requiring manual input to avoid an incident).
To me, that this behavior was added via an update makes it even harder to predict - your car can pass a particular section of road without incident one thousand times, but an OTA update makes that one thousand and first time deadly.
Humans are generally quite poor at responding to unexpected behavior changes such as this.
And this is exactly why all of these recent articles about how "great" it is that Tesla sends out frequent OTA updates are ridiculous. Frequent, unpredictable updates with changelogs that just read "Improvements and bug fixes" are fine when we're talking about a social media app, but are entirely unacceptable when we're talking about the software that controls a 2-ton hunk of metal flying at 70 mph with humans inside of it.
The saying has been beat to death, but it bears repeating: Tesla is a prime case where the SV mindset of "move fast and break things" has resulted in "move fast and kill people". There's a reason that other vehicle manufacturers don't send out vehicle software updates willy-nilly, and it's not because they're technologically inferior.
This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software. So what is the right way to handle these updates? You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer. Toyota had a similar problem a few years ago and did a voluntary recall. That means many of those cars with buggy brake systems were on the road for years after a potential fix was available and were driven for billions of potentially unsafe miles.
>This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software.
Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is rigorously tested for safety issues (because most car manufacturers understand a fault with such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.
>So what is the right way to handle these updates?
The way that other vehicle manufacturers (car, airplane, etc) have been doing it for decades is a pretty good way.
>You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer.
There is no evidence that said OTA update made Tesla cars any safer. There is evidence that similar OTA updates have made Tesla cars more unsafe.
The brake OTA that you mentioned may well have done more harm than good. Tesla owners have been reporting that the same update made unexpected changes to the way their cars handle and accelerate, in addition to the change in braking distance. These were forced, unpredictable changes that were introduced without warning. When you're driving a 2-ton vehicle at 70 mph, being able to know exactly how your car will react in all situations, including how fast it accelerates, how well it handles, how fast it brakes, and how the autopilot will act, is crucial to maintaining safety. Tesla messing with those parameters without warning is a detriment to safety, not an advantage.
>Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is rigorously tested for safety issues (because most car manufacturers understand a fault with such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.
The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles. I wouldn't be surprised if their Lane Keeping Assist (LKA) systems have similar problems.
>WARNING
When Pilot Assist follows another vehicle at speeds over approx. 30 km/h (20 mph) and changes target vehicle – from a moving vehicle to a stationary one – Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed.
>The driver must then intervene and apply the brakes.
This comparison just sold me on how morally wrong Tesla's behavior is: intentionally misleading customers by marketing a feature called Autopilot that is only a marginal improvement on what other cars already offer. What if Volvo started calling their (clearly not independent) feature Autopilot and saying it was the future of hands-free driving? Seems inexcusable.
>Super Cruise is not a crash avoidance system and will not steer or brake to avoid a crash. Super Cruise does not steer to prevent a crash with stopped or slow-moving vehicles. You must supervise the driving task and may need to steer and brake to prevent a crash, especially in stop-and-go traffic or when a vehicle suddenly enters your lane. Always pay attention when using Super Cruise. Failure to do so could result in a crash involving serious injury or death.
Riffing off the parallel thread about Google AI and how "corporations are controlled by humans" and can have moral values: no, corporations are controlled primarily by market forces. When Tesla started branding lane assist as Autopilot, it put market pressure on others to follow suit. Hence, I'm absolutely not surprised by this ad and the associated warning in the manual.
Ideally, yeah, every manufacturer would have to take all the puffery out of their marketing, or better yet, talk about all the negatives of their product/service first, but I doubt I'll ever see that.
This article portrayed Super Cruise as something qualitatively different, based on the maps of existing roadways. I'm not sure if they've also considered integrating the multiple systems involved in driver assistance. I'm curious if Tesla has either for that matter.
I’m opposed to over-regulation of any sort, however it seems obvious that vehicle manufacturers need to do a better job informing consumers of the driver assistance capabilities of modern vehicles. Something similar to the health warnings on cigarette packs.
> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.
Software should not be driving a car into any of them. I think that LIDAR would see the obstacle, but as I understand it, the crashed Tesla didn't have LIDAR.
LIDAR probably would have seen the obstacle and avoided it, but so would a human driver who was operating the vehicle responsibly and correctly. It sucks that people treat Level 2 systems as if they were Level 3 or 4, but the same thing applies to many convenience features in a car (cruise control, power brakes, etc.). There's always going to be some bozo doing what they shouldn't be doing with something.
I'd love to see LIDAR on consumer vehicles, but AFAIK it's prohibitively expensive. And to be fair, even Level 4 autonomous vehicles still crash into things and kill people.
Last but not least, every semi-autonomous system all the way back to Chrysler's "AUTO-PILOT" has had similar criticisms. People in the past even said similar things about high speed highways compared to other roads WRT attention.
> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.
Literally every car I have driven equipped with Cruise Control and Collision Avoidance (TACC) hits the brakes and slows down to 20-ish km/h if it senses ANYTHING moving slower (including stationary objects) in front of the car in a possible collision path.
This really affects the nature of the situation. 20 years ago, cars contained microcontrollers with a tiny bit of code which was thoroughly reviewed and tested by skilled professionals. Today, all cars run so much code, even outside of the entertainment system, that the review and testing just can't be the same. (And there's way more programmers, so the range of skill and care is also much wider.)
When the Toyota electronic throttle "unintended acceleration" accidents were in the news, the software was described as a "big bowl of spaghetti," but NHTSA (with NASA's help) ultimately determined that it was not the cause of the problems. It was drivers using the wrong pedal.
I've long been curious about the "big bowl of spaghetti" comment (and all the other criticisms made by the experts who inspected Toyota's code). There were some extremely serious accusations which don't seem consistent with the fact that the vast majority of Toyotas on the road aren't showing problems caused by their MCU's spaghetti code.
AI takes it to a whole new level. Neural networks are black boxes; they can't be reviewed. You feed in your training data, you test it against your test data, and just have faith that it will respond appropriately to a pattern that's slightly different from anything it's seen before. Sometimes the results are surprising.
That's my biggest problem with AI and neural networks. You can't really measure progress here. If you wanted the same safety standards as for all other automotive software, you'd have to test drive for hundreds of thousands of kilometres after every change of parameters, because there's no way to know what has changed about the AI's behavior except by testing it thoroughly.
Compare this to classic engineering where you know the changes you've made, so you can rerun your unit tests, rerun your integration tests, check your change in the vehicle and be reasonably sure that what you changed is actually what you wanted.
The other approach to autonomous driving is to slowly and progressively engineer more and more autonomous systems where you can be reasonably sure to not have regressions. Or at least to contain your neural networks to very very specific tasks (object recognition, which they're good at), where you can always add more to your test data to be reasonably sure you don't have a regression.
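As a sketch of what "always adding more to your test data" could look like as a release gate, here is a minimal Python example that refuses to ship a retrained object-recognition model if it regresses on a frozen, labelled test set relative to the current model. The data format, metric, and threshold are my assumptions for illustration, not anyone's actual release process.

```python
# Minimal sketch of a regression gate for a retrained perception model.
# The dataset format, metric, and threshold are illustrative assumptions.
from typing import Callable, List, Tuple

Image = bytes                      # placeholder for real image data
Detection = Tuple[str, float]      # (class label, confidence)
Detector = Callable[[Image], List[Detection]]
RegressionSet = List[Tuple[Image, str]]   # (image, expected label) pairs


def recall(detect: Detector, frozen: RegressionSet, min_conf: float = 0.5) -> float:
    """Fraction of labelled objects the model finds in the frozen set."""
    hits = sum(
        any(label == truth and conf >= min_conf for label, conf in detect(img))
        for img, truth in frozen
    )
    return hits / len(frozen)


def release_gate(old: Detector, new: Detector, frozen: RegressionSet,
                 tolerance: float = 0.0) -> bool:
    """Ship the new model only if it does at least as well as the old one."""
    return recall(new, frozen) + tolerance >= recall(old, frozen)
```

The point is not the specific metric; it's that a contained, single-purpose network can be held to a fixed bar, whereas an end-to-end driving policy cannot.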
I don't think we'll see too many cars being controlled by neural networks entirely, unless there's some huge advancement here. Most of the reason we see more neural networks now is that our computing power has reached the ability to train sufficiently complex NNs for useful tasks. Not because the math behind it advanced that much since the 60s.
> There is no evidence that said OTA update made Tesla cars any safer.
That particular OTA update significantly shortened braking distances. [The update] cut the vehicle's 60 mph stopping distance a full 19 feet, to 133 feet, about average for a luxury compact sedan. That's a safer condition, IMO, and I'm not sure how you could argue that it doesn't make the car safer.
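As a rough sanity check on what those numbers imply, assuming a simple constant-deceleration model and the figures quoted above (152 ft before the update, 133 ft after, from 60 mph):

```python
# Back-of-the-envelope check of the quoted braking figures, assuming constant
# deceleration from 60 mph: a = v^2 / (2 * d).
V_FPS = 60 * 5280 / 3600   # 60 mph in feet per second (88 ft/s)
G = 32.174                 # ft/s^2

for label, distance_ft in [("before update", 152.0), ("after update", 133.0)]:
    decel = V_FPS ** 2 / (2 * distance_ft)
    print(f"{label}: {distance_ft:.0f} ft -> {decel:.1f} ft/s^2 ({decel / G:.2f} g)")

# before update: 152 ft -> 25.5 ft/s^2 (0.79 g)
# after update:  133 ft -> 29.1 ft/s^2 (0.90 g)
```

In other words, the update moved the car from roughly 0.79 g of average deceleration to roughly 0.90 g, which is where comparable sedans already sit.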
> being able to know exactly how your car will react in all situations
If one depends on intimate knowledge of his own car for safety, then he's likely already driving outside the safety envelope of the code, which was written to provide enough safety margin for people driving bad cars from 40 years ago.
I didn't say no car has ever relied on software. I said cars are becoming more reliant on software. I don't think that is a controversial statement. I also don't think it is controversial to say that other automakers also occasionally ship buggy code. The Toyota brake issue I mentioned in the previous post is one example.
Additionally, the argument that we should continue to handle updates this way simply because we have done it this way for decades is the laziest possible reasoning. It is frankly surprising to see that argument on HN of all places.
As for the evidence that OTA updates can make things safer, this is from Consumer Reports:
>Consumer Reports now recommends the Tesla Model 3, after our testers found that a recent over-the-air (OTA) update improved the car’s braking distance by almost 20 feet. [1]
That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.
> That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.
There is again no evidence to support this fact. There is evidence that Tesla's OTA software updates have introduced safety issues with Tesla cars. That's a fact.
Better braking distance is of course a good thing but if anything, the fact that Teslas were on the road for so long with a sub-par braking distance is more evidence of a problem with Tesla than it is evidence of a benefit of OTA updates.
The other factor in that brake story is that it took mere days for Tesla to release an update to "fix" the brakes. This isn't a good thing. The fact that it was accomplished so quickly means that the OTA update was likely not tested very well. It also means that the issue was easy to fix, which calls into question why it wasn't fixed before. It also highlights the fact that Tesla, for some reason, failed to do the most basic testing on their own cars for braking distance. Comparing the braking distance of their cars should have been one of the very first things they did before even selling the cars, but apparently it took a third party to do that before Tesla was even aware of the issue. This doesn't inspire confidence in Tesla cars at all.
I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
EDIT: The comment I was replying to was heavily edited after I responded. It originally said something along the lines of improving braking distance is good but there is no evidence that it would improve safety.
> if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
Nobody is arguing that. We're arguing that there is no evidence the Tesla OTA update made the cars safer on net.
You're trying to set up some sort of "OTA updates are dangerous in general, but this one is clearly good, how do we balance it" conversation, but the problem is, this OTA update is not clearly good. OTA updates are dangerous in general, and also in this case in specific. You need to find a better example where there's actual difficult tradeoffs being made, and not just a manufacturer mishandling things.
> I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
If the car can’t see the obstacle, the braking distance simply does not matter.
And yet again, the same OTA update changed other parameters about the way the car drives that do make it less safe. I don't know why you're trying to ignore that fact. If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no
As for your edit, you clearly misread the original comment, which is why I edited it for you. I said that there was no evidence that the OTA made the car safer. Please try to read with better comprehension instead of trying to misrepresent my comments.
If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no
You don't have enough information to come to that conclusion.
It's quite common to have to brake hard to avoid a collision. It's pretty uncommon to see the specific scenario triggering this crash behavior.
I never denied that. Your comment pointed out a problem with OTA updates and I agreed, calling it "a clear flaw". I pointed out a benefit of OTA updates, then asked an open-ended question about how they should be handled. You responded by attacking the example I provided. I was looking to debate this serious issue, not to get into a pissing match about it.
I never said you denied it, I said you ignored it. If you wanted to debate this serious issue, then maybe you shouldn't keep ignoring one of the crucial cornerstones of the discussion. If you're unwilling to discuss points that challenge your own opinion, then it's clear that you're just trying to push an agenda rather than have an actual discussion.
> So what is the right way to handle these updates?
Avoid doing them in the first place? It's not like bit rot is - or should be - a problem for cars. It's a problem specific to the Internet-connected software ecosystem, which a car shouldn't be a part of.
So basically: develop software, test the shit out of it, then release. If you happen to find some critical problem later on that is fixable with software, by all means fix it, again test the shit out of it, and only then update.
If OTA updates on cars are frequent, it means someone preferred to get to market quickly instead of building the product right. Which, again, is fine for bullshit social apps, but not fine for life-critical systems.
Tesla does test the shit out of it before they release a patch. The problem is that users' expectations of the system's performance suddenly get out of sync with what the car is going to do.
Part of me wonders if there should be a very quick, unskippable, animated, easy-to-understand explanation of the patch notes before you can drive when they make material changes to core driving functionality.
While using Autopilot (Big A), there should be a loud klaxon every 30 seconds followed by a notification "CHECK ROAD CONDITIONS" and "REMAIN ENGAGED WITH DRIVING" in the same urgent tone of an aircraft autopilot (small a) warning system.
Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot literally just holds a heading, altitude, and speed, and will not make any corrections on its own. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
I don't know why Tesla defenders keep repeating this FUD:
> Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot literally just holds a heading, altitude, and speed, and will not make any corrections on its own. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
This is beyond broken; it's a fundamental misunderstanding of how physical products are supposed to work. Software people have gotten used to dismissing the principle of least astonishment because they know better (and no user got killed because of a Gmail redesign), but this is a car: it's hardware with its user on board, a lot of kinetic energy, and all of it relies on muscle memory.
I'd vote in favor of such an explanation, though this alone may not be enough to cancel out possibly thousands of hours of experience with the previous system behavior.
The first thing about doing it right is to make sure it has been developed in an appropriate manner for safety-critical systems, which includes, but is by no means limited to, adequate testing.
The second thing is to require the owners to take some action as part of the installation procedure, so that it is hard for them to overlook the fact that it has happened.
The third thing is that changes with safety implications should not be bundled with 'convenience/usability' upgrades (including those that are more of a convenience for the manufacturer than for the user.) To be fair, I am not aware of Tesla doing that, but it is a common enough practice in the software business to justify being mentioned.
And it has to be done securely. Again, I am not aware of Tesla getting this wrong.
Great that they fixed the brakes OTA. But how exactly did the inferior braking algorithm get on the Model 3 in the first place? And what are the chances of a regression?
While I like Tesla, I find the praise for Tesla's fast OTA update for its braking problem to be freaking terrifying.
A problem with variable stopping distances is the sort of thing that should be blindingly obvious in the telemetry data from your testing procedures. Brake systems, and ABS controls in particular, are normally rigorously tested over the course of 12-18 months in different environments and conditions.[0] That Tesla completely missed something like that suggests either that their testing procedures are drastically flawed (missing something that CR was able to easily and quickly verify in different cars), that their software development process isn't meshed with their hardware testing and validation, or a combination of the two. None of those options is good.
The fact that Tesla was able to shave 19 feet off their braking distances is horrifying. After months of testing different variations and changes to refine your braking systems, shaving off an extra 19 feet should be impossible. There shouldn't be any room to gain extra inches without making tradeoffs in performance in other conditions that you've already ruled out making. If there's an extra 19 feet to be found for free after a few days of dev time, you did something drastically wrong. And that's completely ignoring physical testing before pushing your new update. Code tests aren't sufficient; you're changing physical real-world behavior, and there's always a tradeoff when you're dealing with braking and traction.
Tesla is being praised by consumers and the media because, hey, who doesn't like the idea that problems can be fixed a couple days after being identified? That's great. In this case, Tesla literally made people's cars better than they were just a few days before. But it trivializes a problem with very real consequences, and I hope that trivialization doesn't extend to Tesla's engineers. Instead of talking about a brake problem, people are talking about how great the fast OTA update for the problem is. Consumers find that comforting, as OTA updates can make what's otherwise a pain in the ass (recalls and dealer visits for software updates) effortless.
Hell, I'm a believer in release early, release often for software. Users benefit, as do developers. At the same time, the knowledge that you can quickly fix a bug and push out an update can be a bit insidious. It's a bit of a double-edged sword in that it gives you a sense of comfort that can bite you in the ass as it trivializes the consequences of a bug. And when bug reports for your product can literally come in the form of coroner's reports, that comfort isn't a good thing for developers.
At least you can rely on 99% of humans to try to act according to self-preservation instinct MOST of the time.
Nope. I see tremendous numbers of distracted drivers who don't even realize there's a threat. I also see many utterly incompetent drivers who will not take any evasive action, including braking, because they simply don't understand basic vehicle dynamics or that one needs to react to unexpected circumstances.
Updates should fix problems, not create new ones. The tried-and-true method for Silicon Valley bug fixing is to ship it to the users and let them report any issues. This is wholly insufficient for car software. Car software should seldom have bugs in the first place, and OTA updates should never, ever introduce new bugs to replace the old.
> So what is the right way to handle these updates?
Require updates to be sent to a government entity, which will test the code for X miles of real traffic, and then releases the updates to the cars. Of course, costs of this are to be paid by the company.
Current development of cars is done with safety as a paramount concern. There is no need to filter everything through a government entity. However the automobile companies are responsible for their design decisions. This should absolutely apply to software updates. That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash.
So, no filter, but government penalties and legal remedies should be available.
"Current development of cars is done with safety as a paramount concern."
That's exactly the impression that I don't get from Tesla. Instead I see the following:
Get that thing to market as quickly as possible. If the software for safety-critical systems is sub-par, well, it can be fixed with OTA updates. That's fine for your dry cleaning app. For safety-critical software, that's borderline criminal.
Hype features far beyond their ability (autopilot). Combine this with OTAs, which potentially change the handling of something that is not at all an autopilot, but actually some glorified adaptive cruise control. For good measure: throw your customers under the bus when inevitable and potentially deadly problems do pop up.
Treat safety issues merely as a PR problem and act accordingly. Get all huffy and insulted and accuse the press of fake news when such shit is pointed out.
I could go on. But such behavior to me is not a company signaling that safety is of paramount concern.
"That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash."
Let's just say that Tesla's very selective handling and publication of crash data does not signal any inclination for transparency.
I agree. I think companies should be losing serious money and individual should be losing jobs over crashes like these, much like in the aircraft sector.
Testing is absolutely necessary. We're talking about millions of cars here, which are potentially millions of deadly weapons. You don't want companies pushing quick fixes, which turn out to contain fatal bugs.
That sounds like a great way to stall all further progress, which has a horrific human cost of its own.
Government has a valid role to play, though, by requiring full disclosure of the contents of updates and "improvements," by setting and enforcing minimum requirements for various levels of vehicle autonomy, and by mandating and enforcing uniform highway marking standards. Local DOTs are a big part of the problem.
Yeah, because we know governments are really good at giving certifications and doing tests that mean something. Let's put every design decision in the hands of governments then! Or better, nationalize car companies! Problem solved?
Flying in an airplane is safe because of direct intervention by the government.
Cars have been made safe for us also by direct intervention by the government. From important things like mandating seat belts and crash safety to smaller things like forcing the recall of tens of millions of faulty air bag inflators.
These are just a few of the many things Uncle Sam has done to make things safer for us.
Isn’t flying mostly safe because of post hoc safety analysis followed by operating requirements? I don’t think the FAA tests every change made to aircraft before they can fly?
First, any change in design (or in configuration, in the case of repairs) is backed by PEs or A&P mechanics who sign off on the changes. Their career rides on the validity of their analysis so that's a better guarantee than some commit message by a systems programmer.
Second, the FAA basically says "show us what you are changing" after which they will absolutely require physical tests (static or dynamic tests, test flights, etc., as appropriate to the scope of change).
And I'd say flying is so safe mainly because of the blameless post-mortem policy that the American industry instituted decades ago and which is constantly reinforced by the pros at the NTSB. It's a wonderful model for improvement.
I think that the FAA's role is theoretically as you express, but in practice, there is significantly less oversight (especially direct oversight) than implied.
As an example, the crash of N121JM on a rejected takeoff was due (only in part) to a defective throttle quadrant/gust lock design that went undetected during design and certification, in part because it was argued to be a continuation of a conformant and previously certificated design. (Which is relevant to the current discussion in that if you decide to make certification costly and time-consuming, there will be business and engineering pressure to continue using previously certificated parts, with only "insignificant changes".)
If I, as an engineer, sign off on swapping the screws on the flaps for cheaper ones and the plane crashes because the flaps come loose when the screws can't handle the stress, my career can be assumed to be over unless I have a good explanation.
If an engineer signs off a change they sign that they have validated all the constraints and that for all they know the machine will work within the specs with no faults.
If a software engineer commits code we may run some tests over it, look a bit over it. That's fine. But if the software ends up killing anyone, the software engineer is not responsible.
And yes, to my knowledge, every change to an aircraft is tested before flight, or at least validated by an engineer who understands what was just changed.
In any case, let a third party control the actual updating, so that we know when and how often cars are updated. Require at least X months of time between code submission and deployment to cars. We don't want a culture of "quick fixes".
This is a popular idea: Just put someone in charge! It ignores the incentives for those gatekeepers, who are now part of the system. In practice I don't think you're going to get better updates, you're going to get "almost no updates".
It took years for the FDA to investigate Theranos, in case you are not aware. And they only did so when the press started digging. Poor, poor track record.
There's a lot of daylight between letting pharma companies run rampant and having the FDA. One could imagine private non-profit testing and qualification standards organizations along the lines of Underwriters Laboratories.
It is not completely out of this world to imagine multiple private entities involved in pharma dossier reviews instead of having the FDA. The FDA employs tons of private consultants anyway so they bring virtually no value.
Certainly communication of any changes to all drivers inexperienced with the latest version; ideally, user interaction required for the update to be applied; and potentially even the ability to reverse the changes if the driver is unhappy with them.
At the very _least_, when you introduce a change in behavior, have it be enabled by the user through the dashboard. This creates at least one touch point for user education.
This seems testable. IANAAE (I Am Not An Automotive Engineer), but why can't you run both the new and old code side by side and if the actions they take are materially different investigate further? Like, if in one case the new code wants to move left and the old code goes straight, one of those behaviors is probably wrong. If the driver corrects, then the new code is probably right, but if the driver does not, then the new code is probably doing something incorrect.
At the very least, you should be able to get some sort of magnitude/fuzzy understanding of how frequently the new code is disagreeing, and you can figure out where and go check out those conditions.
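A minimal sketch of that side-by-side idea, assuming both the old and new planners expose the same interface and can be replayed over logged (or live) sensor frames. Only the old planner would actually drive the car; the new one runs in the shadows. All names and tolerances here are hypothetical.

```python
# Sketch of running an old and a new planner on the same inputs and flagging
# material disagreements for review. Interfaces and thresholds are hypothetical.
from typing import Callable, Iterable, List, NamedTuple


class Frame(NamedTuple):
    timestamp: float
    sensor_blob: bytes           # placeholder for camera/radar data


class Command(NamedTuple):
    steer_deg: float
    accel_mps2: float


Planner = Callable[[Frame], Command]


def divergences(frames: Iterable[Frame], old_plan: Planner, new_plan: Planner,
                steer_tol_deg: float = 2.0,
                accel_tol_mps2: float = 0.5) -> List[Frame]:
    """Return the frames where the two planners disagree beyond the tolerances.
    Only the old planner's output would actually drive the car ("shadow mode")."""
    flagged = []
    for frame in frames:
        a = old_plan(frame)
        b = new_plan(frame)
        if (abs(a.steer_deg - b.steer_deg) > steer_tol_deg or
                abs(a.accel_mps2 - b.accel_mps2) > accel_tol_mps2):
            flagged.append(frame)
    return flagged
```

The flagged frames then become exactly the "where and go check out those conditions" list described above.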
It has already been touched on by another commenter, but testability of Machine Learned systems outside of a training dataset is pretty much a crapshoot.
An ML solution stops "learning" after training and only reacts.
To illustrate the difference, have you driven on roads under construction lately? As humans, when you've driven the same road hundreds of times, you start to do the same thing as a machine learned implementation. You drive by rote.
When you get to that construction zone though, or the lines get messed up, your brain will generally realize something has changed, and you'll end up doing something "unpredictable", i.e. learning a new behavior. The machine learning algorithm's output (a neural net) can't do that. It can generify a pattern to a point, but its behavior in truly novel circumstances cannot be assured.
Besides which, the problem still stands that the system is coded to ignore straight-ahead stationary objects. Terrible implementation. It should look for an overly fast and uniform increase in angular field coverage, combined with near-zero relative motion, as a trigger to brake. I.e., if a recognized shape gets bigger at the same rate on all "sides" while its weighted center stays at the same coordinate. It's one of the visual tricks pilots are taught to avoid mid-air collisions (a rough version is sketched below).
Admittedly though, the human brain will likely remain WAY better at those types of tricks than a computer will be for a good long time.
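For what it's worth, here's a hedged sketch of that "looming" cue: flag an object whose bounding box grows roughly uniformly while its centre stays put. The thresholds are illustrative guesses, not values from any production system.

```python
# Sketch of the looming heuristic described above: a tracked object whose
# bounding box expands roughly uniformly while its centre stays nearly fixed
# is on (or near) a collision course. All thresholds are illustrative.
from typing import NamedTuple


class Box(NamedTuple):
    cx: float   # centre x, normalised image coordinates
    cy: float   # centre y
    w: float    # width
    h: float    # height


def looming(prev: Box, curr: Box, dt: float,
            growth_threshold: float = 0.15,   # fractional growth per second
            centre_threshold: float = 0.02,   # allowed centre drift per second
            uniformity: float = 0.3) -> bool:
    grow_w = (curr.w - prev.w) / (prev.w * dt)
    grow_h = (curr.h - prev.h) / (prev.h * dt)
    centre_drift = ((curr.cx - prev.cx) ** 2 + (curr.cy - prev.cy) ** 2) ** 0.5 / dt
    uniform = abs(grow_w - grow_h) < uniformity * max(abs(grow_w), abs(grow_h), 1e-6)
    return (min(grow_w, grow_h) > growth_threshold
            and centre_drift < centre_threshold
            and uniform)


# Example: a barrier dead ahead grows ~30%/s in both dimensions with no drift.
print(looming(Box(0.5, 0.5, 0.10, 0.10), Box(0.5, 0.5, 0.13, 0.13), dt=1.0))  # True
```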
I think the point is to run the two models on the same data, either simultaneously in everyone's cars, or using recorded sensor data from the past (maybe in the car while parked after a drive for privacy reasons). Initially, only the old version gets to take any action in the real world. Any difference in lane choice would then have to be justified before making the new version "live".
You can do this sort of stuff when replacing a web service too, by the way. For example running two versions of Django and checking if the new version produces any difference for a week before making it the version the client actually sees.
The problem, however, is your test can't be assumed to generify the way an explicitly coded web service would.
You can look at your code and say "For all invalid XML, this, for all input spaces, that." You can formally prove your code in other words.
You CANNOT do that with Neural Nets. Any formal proof would simply prove that your neural network simulation is still running okay. Not that it is generating the correct results.
You can supervise the learning process, and you can practically guarantee all the cases within your training data set, and everyone in the research space is comfy enough to say "yeah, for the most part this will probably generify" but the spectre of overfitting never goes away.
With machine learning, I developed a rule of thumb for applicability: "Can a human being who devotes their life to the task learn to do it perfectly?"
If the answer is yes, it MAY be possible to create an expert system capable of performing the task reliably.
So lets apply the rule of thumb:
"Can a human being, devoting their life to the task of driving in arbitrary environmental conditions, perfectly safely drive? Can he safely coexist with other non-dedicated motorists?"
The answer to the first I think we could MAYBE pull off by constraining the scope of arbitrary conditions (I.e. specifically build dedicated self-driving only infrastructure).
The second is a big fat NOPE. In fact, studies have found that too many perfectly obedient drivers typically WORSEN traffic in terms of the probability of creating traffic jams. Start thinking about how people drive outside the United States and the first world in general, and the task becomes exponentially more difficult.
The only things smarter than the engineers trying to get your car to drive itself are all the idiots who will invent hazard conditions that your car isn't trained to handle. Your brain is your number one safety device. Technology won't change that. You cannot, and should not outsource your own safety.
Edge cases in this scare the hell out of me. I'm envisioning watching CCTV of every Tesla that follows a specific route on a specific day merrily driving off the same cliff until it's noticed.
I mean what would have happened here if another Tesla or two were directly behind Huang, following his car's lead?!
Possibly nothing, I'd assume the stopping distance would be observed and the following cars would be able to stop/avoid, but I wouldn't like to bet either way. Perhaps, in some conditions, the sudden impact on the lead car would cause the second car to lose track of the rear end of the first? Would it then accelerate into it?
The report indicates the system ignores stationary objects. I would not be surprised if a suddenly decelerated car in front of the system effectively vanished from the car's situational awareness. Your scenario does not seem that far-fetched.
An attentive human would realize something horrible had happened and perhaps react accordingly. A disengaged or otherwise distracted one may not have the reaction time necessary to stop the system from plowing right into the situation and making it worse.
> but why can't you run both the new and old code side by side and if the actions they take are materially different investigate further?
Because there are no closed facilities that you can use to actually perform any meaningful test. You could test "in-situ", but you would need an absolutely _huge_ testing area in order to accurately test and check all the different roadway configurations the vehicle is likely to encounter. You'll probably want more than one pass, some with pedestrians, some without, some in high light and some in low, etc..
It's worth noting that Americans drive more than 260 billion miles each _month_. It's just an enormous problem.
This particular case might have been testable "in-situ":
[ABC News reporter] Dan Noyes also spoke and texted with Walter Huang's brother, Will, today. He confirmed Walter was on the way to work at Apple when he died. He also makes a startling claim: that before the crash, Walter complained "seven-to-10 times the car would swivel toward that same exact barrier during auto-pilot. Walter took it into dealership addressing the issue, but they couldn't duplicate it there."
It is very believable that the car would swivel toward the same exact barrier on auto-pilot.
BTW - I'm running a nonprofit/public dataset project aimed at increasing safety of autonomous vehicles. If anyone here wants to contribute (with suggestions / pull requests / following it on twitter / etc) - you'd be most welcome. Its: https://www.safe-av.org/
This is where simulators play an important role. Many AD (automated driving) solution suppliers are investing in simulators to create different scenarios and test the performance of their sensors and software. Otherwise, as you said, it's impossible to drive billions of miles to cover all use cases.
A/B testing self-driving car software to find bugs? Would probably work great, but that is also terrifying!
But you're right, if you're really going to do a full roll out, may as well test it on a subsegment first - I'd hate for it to be used as a debugging tool though.
> why can't you run both the new and old code side by side and if the actions they take are materially different investigate further?
As I understand it, this is essentially what they were doing with the autopilot 'shadow mode' stuff. Running the system in the background and comparing its outputs with the human driver's responses, and (presumably) logging an incident when the two diverged by any significant margin?
> And this is exactly why all of these recent articles about how "great" it is that Tesla sends out frequent OTA updates are ridiculous. Frequent, unpredictable updates with changelogs that just read "Improvements and bug fixes" are fine when we're talking about a social media app, but are entirely unacceptable when we're talking about the software that controls a 2-ton hunk of metal
I recently got downvoted for that exact line of reasoning.
Looks like some people don't like to hear that :-)
I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult arbitrage between:
- known preventable deaths from, say, not staying in the lane aggressively enough;
- possible surprises and subsequent deaths.
There is an ethical conundrum (quite different from the trolley problem) between a known fix and an unknown possibility. If both are clearly established and you dismiss the former in favor of a simplistic take, yes, you will be down-voted, because you are making the debate less informed.
Without falling into solutionism, in this case the remaining issue seems rather isolated to a handful of areas that look like lanes and could either be painted properly, or be areas Tesla cars could be trained to avoid. The latter fix would have to be sent out rapidly and could have surprising consequences, although that seems decreasingly likely.
That learning pattern (resolving unintended surprises as they happen decreasingly often) is common in software. This explains why this community prefers (to an extent) Tesla to other manufacturers. Others have preferred the surprise-free and PR-friendly option of not saving the tens of thousands of lives being lost on the road at the moment. There are ethical backgrounds to non-interventionism.
As the victim of car violence, I happen to think that their position is unethical. I'm actually at what is considered an extreme position: being in favour of Tesla (and Waymo) taking more risk than necessary and temporarily increasing the number of accidents on the road, because they have a far better track record of learning from those accidents (and the subsequent press coverage), and that would lower the overall number of deaths faster.
As it happens, they don't need to: even with less than half the accident rate of their counterparts, they still get a spectacular learning rate.
Thanks for your comment. I think you may be right.
> I think you got down-voted because you misrepresented what Tesla is actually doing, which is a difficult arbitrage between:
>
> - known preventable deaths from, say, not staying in the lane aggressively enough;
>
> - possible surprises and subsequent deaths.
I don't think I was misrepresenting anything (at least, I was trying not to). I just pointed out that behaviour-changing updates that may be harmless in, say, smartphone apps, are much more problematic in environments such as driving-assisted cars.
I think this is objectively true.
And I think we need to come up with mechanisms to solve these problems.
> That learning pattern (resolving unintended surprises as they happen decreasingly often) is common in software.
My argument is that changes in behaviour are (almost automatically) surprising, and thus inherently dangerous. Unless my car is truly autonomous and doesn't need my intervention, it must be predictable. Updates run the risk of breaking that predictability.
> Others have preferred the surprise-free and PR-friendly option of not saving the tens of thousands of lives being lost on the road at the moment.
My worry is that (potentially) people will still die, just different ones.
> being in favour of Tesla (and Waymo) taking more risk than necessary
If I'm taking you literally, that's an obviously unwise position to take ("more than necessary"). But I think I know what you meant to say: err on the side of faster learning, accepting the potential consequences. Perhaps like NASA in the 1960s.
But my argument was simply that there is a problem with frequent, gradual updates. Not that we shouldn't update (even though that's actually one option).
We ought to search for solutions to this problem. I can think of several that aren't "don't update".
But claiming that the problem doesn't exist, or that those that worry about it are unreasonable, is unhelpful.
Two years ago there was an Autopark accident that made the news. Tesla blamed the driver -- very effectively.[1] But if you look closely, it likely was due to an unexpected OTA change in behavior combined with poor UI design.
In the accident, the driver double-tapped the Park button which activated the Autopark feature, exited the car and walked away. The car proceeded to move forward into an object. The driver claimed he merely put it in park and never activated the auto-park feature. Tesla responded with logs proving he double-tapped and there were audible and visual warnings.
Well, I looked more closely. Turns out Tesla pushed an OTA update that added this "double-tap Park to Autopark" shortcut. And it's one bad design decision after another.
First, let's note the most obvious design flaw here: The difference between instructing your car to immobilize itself (park) and instructing it to move forward even if the driver was absent (Autopark) was the difference between a single and double tap on the same button. A button that you might be in the habit of tapping a couple times to make sure you've parked. So it's terrible accident-prone design from the start.
Second issue is user awareness. Normally Tesla makes you confirm terms before activating new behavior, and technically they did here, but they did it in a horribly confusing way. They buried the mere mention of the Autopark shortcut under the dialog for "Require Continuous Press".[2] So if you went to turn that off -- that's the setting for requiring you to hold down a FOB button during Summon -- and you didn't read that dialog closely, you would not know that you'd also get this handy-dandy double-tap Autopark shortcut.
Third is those warnings. Useless. They did not require any kind of confirmation. So if you "hit park" and then quickly exited your vehicle, you might never hear the warning or see it on the screen that, by the way, that shortcut that you didn't know existed just got triggered and is about to make the car start moving forward while you walk away.
So I think it's quite plausible that the driver was not at fault here -- or at least that it was a reasonable mistake facilitated by poor design. It's unfortunate that Tesla was able to convince so many with a "data dump" that the driver was clearly at fault.
I still recall that poor NYT journalist that Tesla "caught driving in circles"[3] -- while looking for a hard-to-find charger at night. Now I hope we are developing a healthier skepticism to this (now standard) response from Tesla and look more deeply at potential root causes.
Even if there are patch notes and a clickthrough waiver, there is no possible way that they could express in words what they've changed when pushing an update to a neural net based system. Saying "you accepted the update" is ridiculous when it's not even possible to obtain informed consent.
You know, I must say that as a European driving in America I've done this on occasion. Combined with drivers not letting you in, it's tempting to follow the gore section into the (barely visible) barrier. I mean, I slow down when that happens, but otherwise I seem to react pretty much the same way as this Tesla. Why can't the "gore" sections be marked like this?
> Humans are generally quite poor at responding to unexpected behavior changes such as this.
I know what you mean, once you truly start to trust someone else to do a job you simply stop giving it any attention. It's being handled. So you maybe hover a bit at first, keep an eye, generally interfere. Once that stage is over, you just do other things safe in the knowledge that someone else is competently handling the task. Until it blows up in your face.
This level of autopilot legitimately terrifies me. Not because it's bad, but because of the way it will make the humans who are supposed to still be responsible stop paying attention
Meanwhile, Tesla is busy putting out press releases saying "We believe the driver was inattentive, and the car was warning him to put his hands on the wheel."
I am utterly, completely lacking in surprise that they didn't provide the relevant context, "... fifteen minutes prior."
This just looks... really bad for Tesla. It's more important to them to protect their poor software than drivers.
It is a car company. Their products directly kill over a million people every year. They injure 10x as many. Their pollution causes a similar number of deaths. They are responsible for so many horrible side effects, including making cities and suburbs dangerous for people and taking up more and more space.
Yes, cars kill, injure, and create disease, but they also connect people across the world, to relatives, to work, to buy and deliver goods and services.
Without cars, our level of productivity would be a fraction of what it is today as employment is confined to a tiny geography. Many more people would die from fires, disease, and crime as emergency services arrive on horse drawn carriage. Most people would never venture out of their hometowns.
Car companies, for all their faults, for all their fraud and corruption, create products that immeasurably benefit us every day. Before we call them evil, we must look at the impact they have on each of us. That impact is decidedly positive, as evidenced by the widespread ownership of cars.
No, it is not the crash that makes them evil. It is how they handle it. And how they put blatantly misleading marketing out there, along with poorly researched features that may or may not have led to overconfidence from drivers, along with half-baked tech, that resulted in crashes such as these.
There haven't even been five deaths yet in the entire history of Tesla Autopilot accidents, but there are more than three thousand deaths a day due to car accidents. Tesla's safety record actually is such that if scaled up, there would be a massive drop in the number of deaths per day. Their marketing reflects this. So does their communication on the topic. I know it can sound offensive when they say that a driver died without his hands on the wheel in the six seconds leading up to the crash, but their communication is reflective of a desire to preserve life. So I wouldn't call it evil, let alone blatantly evil.
Stepping back further, away from this accident, Tesla is also a leading player in moving away from destroying our planet. By that I mean they are pushing renewable energy. Again, not something I would call evil, let alone blatantly evil.
* every update to the software makes the record irrelevant, because one is no longer driving the same car which set the previous record. Lane centering for instance was introduced in an OTA update and it likely contributed to this accident.
* most of the safe miles driven by Teslas are not with autopilot on. The NTSB explicitly said they did not test autopilot.
* finally, some HN users did the math and it turns out that humans have overall a better safety record than autopilot Teslas.
To me it looks like Tesla's communication is only reflective of covering their asses. From blaming that journalist to the latest accident.
I agree that there are aspects of the Tesla safety record which have caveats, but even when I take those caveats into account, I still come out thinking of the Tesla as being relatively safe. The same feature which adds danger at every update is one that also has the potential to add safety at every update. Eventually for example, that same update feature is expected to bring the car to the point of being superhuman in its ability to drive safely. The inability to rely on the autopilot means that I'm still responsible for my own safety, so the autopilot safety isn't as important as the car's safety in general.
I disagree that their communication is only reflective of them covering their ass. I feel there is the expectation that self-driving is going to prevent more deaths than it causes. Especially as the technology improves, but even to an extent now, if only by virtue of it not being a system people should be using without being ready to intervene.
Don't get me wrong here, you raise good points. I just don't think it's a case of blatant evil.
Also, if you have the link to the math, I'd love to read it.
An ad hominem attack does not trump evidence. My claim is true. The above article talks about an example of an update which improved the safety of the vehicle.
Dude, that is not an attack. Only if you have written software will you be aware of how seemingly innocent changes can break it in unexpected ways... And your "proof" article does not change anything...
You question my credentials, while quoting one of my premises. This is, implicitly, an ad hominem argument against the premise you quoted.
In addition to that, you're arguing against a strawman. I never disagreed that there was potential for the safety to be negatively impacted with every update. In fact, I explicitly agreed with this claim.
>I never disagreed that there was potential for the safety to be negatively impacted with every update...
The point you are missing is that while an update might slightly enhance safety, the negative, unpredictable impact might be catastrophic, because one is planned and the other is not.
No, the point you're pretending I'm missing is that there is this potential. You quoted text in which I explicitly acknowledge the danger of updates. Read it. Also, read further down where I explicitly ask that I not be taken the wrong way, because good points were made.
My acknowledgement only makes sense if I agree that there is some level of danger in each update. That is why you're addressing a straw man.
Or maybe you're being uncharitable with me, because as you put it in our other thread you find the things I've said "stupid". So you are just guessing that I hold the stupidest possible belief you can ascribe me, even when I tell you otherwise.
>No, the point you're pretending I'm missing is that there is this potential
No. You are not missing the potential. But you do not seem to get the difference in magnitude. One is an incremental, reviewed safety enhancement; the other is unpredictably catastrophic.
You only seem to grasp very superficial aspects of my comments, which is why I requested that you give them some thought before responding. So I think there is some kind of barrier. But it is not one of language; for lack of a better word, I think it is a lack of enough shared sensibilities.
You're projecting that the projected future upside isn't superhuman. My comment projects that it is. We disagree on projected benefits.
Right now human driving is one of the leading causes of death. I believe that technology can eventually eliminate this as a leading cause of death. So I project a much greater potential upside. I also figure this is a matter of time and effort applied to the problem. Or in other words, there is a finite amount of time before an update brings the car to this point. This puts a ceiling on my mental tabulation of the amount of risk endured prior to achieving an extremely good end. So despite the severe risks, the limited nature of that risk allows me to rule in favor of taking the risk despite its presence.
You're assuming that I haven't pictured a sweeping update which adds the car murdering anyone who was unaware. I have! Your assumption is incorrect.
And if I was being superficial, I would have answered that yes, I'm a software developer. But it's a fallacious appeal to authority.
Your first point is absolutely misleading.
You can apply the same logic to the other cars saying that they are all different because of the different tyres, the different conditions of the tyres, of the brakes, the different degree of care and so on.
You can’t just say that they are not the same cars because of the different variables.
>Tesla's safety record actually is such that if scaled up, there would be a massive drop in the number of deaths per day...
There is not enough data to do this "scaling up", so doing so would be incredibly misleading (though that doesn't stop Tesla's PR from doing it anyway).
>Tesla is also a leading player in moving away from destroying our planet.
The actions of this company and the persons behind it somehow do not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something that is meant to differentiate from the competition and to attract investors. Do you think Elon Musk could have created a company that builds ICE cars and emerged as a major player? It is "save the planet" for Tesla and "save humanity by going to Mars" for SpaceX.
> There is not enough data to do this "scaling up", so doing so would be incredibly misleading (though that doesn't stop Tesla's PR from doing it anyway).
There are tens of thousands of Tesla vehicles on the road, many of which have been driven for years. However, the case for Tesla vehicles' safety doesn't rest on Tesla vehicles alone. Tesla vehicles are a class of vehicle which implements driver assistance technologies. There are many other cars that do this. Independent analyses of these cars in aggregate have shown them to reduce car accident frequency and severity.
> The actions of this company and the persons behind it somehow do not feel compatible with such a goal. I am sorry. I am just not buying it. It is more probable that this "saving the planet" narrative is something that is meant to differentiate from the competition and to attract investors.
Tesla is a leader in the renewable energy sector. There is a need for renewable energy as a consequence of climate change. Being a leading player in renewable energy means being a leading player in combating climate change. So Tesla is a leader in combating climate change. Combating climate change is an effort to save the planet. So Tesla is a leading player in the effort toward saving the planet.
At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk. If someone were to kill another person, their motivation for doing the deed would not change whether or not they did in fact kill someone. In the same manner, the fact that Tesla is helping to solve the problem of climate change is a fact regardless of the motivation of its founder.
To be clear, by misleading marketing, I meant things like the "Autopilot" feature, and claiming that they have "full self driving hardware". I am not sure how the safety of vehicles with assistive tech is relevant. I am not at all disagreeing on that aspect. You were saying that the fact that vehicles with assistive tech are safer is reflected in Tesla's marketing and PR. I am still not sure how that could be the case. How does it justify calling half-baked self-driving tech "Autopilot" and selling it to unsuspecting people?
>At no point in the chain of logic is it necessary to call upon the motivations of Elon Musk.
We are interested in their motivation because we are thinking long term. When you are in need of a million bucks, and a person shows up with a million bucks that they are willing to give you, without asking for payback, will you accept it right away? Or will you try to infer the true motivation behind the act, which may turn out to be sinister? This is irrespective of the fact that the other person is giving you real money that can help you right now. Would you think, "we don't need to worry about their motivations as long as we are getting real money"? Would you?
> I am not sure how the safety of vehicles with assistive tech is relevant. I am not at all disagreeing on that aspect. You were saying that the fact that vehicles with assistive tech are safer is reflected in Tesla's marketing and PR. I am still not sure how that could be the case. How does it justify calling half-baked self-driving tech "Autopilot" and selling it to unsuspecting people?
I brought up driver assistance technology as a way to continue discussing safety statistics. If you recall, I claimed Autopilot was safer and you ruled this out on the basis of not enough information. Now you are saying that you don't feel the broader class is relevant to the discussion. So we return to the point where there is not enough information to make a statistical claim about safety. As a consequence of returning to this point, your own claim about the system being half baked is without merit. It's a claim about the performance of the system which you have claimed we cannot characterize with the currently available statistics.
> We are interested in their motivation because we are thinking long term.
The thing I'm ultimately arguing against is the idea that Tesla is, as you put it, blatantly evil. Blatant means to be open and unashamed, completely lacking in subtlety and very obvious. The things Tesla is doing with regard to the environment are blatantly good. They say they are doing it because they care about the environment, and their actions reflect that. If we think long-term, their actions are part of what allows the long term to exist in the first place. They are not just unashamed of that; they are proud of it. They brag about it. They exult in it. It is blatant that they care about the environment.
In your post you're saying that you speculate that their motivations might not be what they have claimed. This contradicts the idea of blatant evil. Blatant evil is obvious, lacking in shame, lacking in subtlety. The hiding of something is the definition of subtlety. The need to hide is reflective of shame.
> It's a claim about the performance of the system which you have claimed we cannot characterize with the currently available statistics.
I claimed the feature they call "Autopilot" is unsafe because it has only limited capability (as per Tesla's documentation). But the naming of the feature and its marketing inspires false confidence in the drivers, leading to accidents. This is a very simple fact, and it should have been apparent to people at Tesla, and the fact that they went ahead and did this kind of marketing makes them "blatantly evil" in my books. Because, as you said, it is open and they are unashamed about it. Other safety features that are widely available in similar cars from other companies are irrelevant here. I am not even sure why you dragged them into this.
>If we think long-term, their actions are part of what allows the long term to exist in the first place.
What kind of circular logic is that? If they are not really interested (their real motivation) in the "long term", then their actions cease to be part of "what allows long term to exist".
> I claimed the feature they call "Autopilot" is unsafe because it has only limited capability (as per Tesla's documentation).
In citing their documentation, you acknowledge that their communication is enough to deduce the limits of their technology. In claiming that there is not enough data to make declarations about safety, you disavow the validity of your own proclamation of (a lack of) safety. In doing so, you've refuted many of the premises of your own argument.
> But the naming of the feature and its marketing inspires false confidence in the drivers, leading to accidents.
How is this different from any other name? Every word-concept pair starts out with the word and the concept not yet linked together. For example, the name given to our species is 'Homo sapiens', which means roughly 'wise human being'. But humans aren't always wise. So why isn't the person who coined the term 'Homo sapiens' blatantly evil for coining the term?
> If they are not really interested in the "long term", then their actions cease to be part of "what allows [the] long term to exist".
Maybe we're talking past each other but this is... an absurd idea. And wrong. So very wrong.
If someone wakes up in the morning and they say they got up because they wanted to see the face of their loved one, but really they got up because they wanted to pee, they still got up out of bed. The existence of imperfectly stated motivations doesn't cause a cessation of causal history.
> their communication is enough to deduce the limits of their technology..
Not deducing. I am going by what they explicitly state in the manual about the need to keep hands on the wheel at all times. So again, I am not "deducing" it.
>So why isn't the person who coined the term 'Homo sapiens' blatantly evil for coining the term?
I don't know. Was the person who coined the term trying to sell human beings as being wise? Are people suffering because of this word? What is your goddamn point?
Tesla is evil because they use lies to SELL, use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.
>The existence of imperfectly stated motivations doesn't cause a cessation of causal history.
Ha. Now you are talking about "history" that does not exist yet. Are you really this misguided or just faking it?
You clearly don't know what deduce means. You also clearly haven't understood anything I've said during this entire conversation. Or even much of what you've said, since you don't seem to realize you've refuted your own points.
> Tesla is evil because they use lies to SELL, use lies and project a false image to get INVESTMENT. Please keep this in mind when coming up with further examples.
You've utterly failed to establish that they are lying.
> Are you really this misguided or just faking it?
Tesla already has an established history. Therefore, it is not necessary to speculate about future history.
>You also clearly haven't understood anything I've said during this entire conversation.
Oh I understood you just fine. I just find it stupid.
>You've utterly failed to establish that they are lying.
That is because you are overly generous with assumptions to justify their claims, which is typical of people who are apologists for fraudulent entities such as Musk.
>Tesla already has an established history...
But they haven't saved the planet yet. Please give some thought to what you are writing before responding.
> That is because you are overly generous with assumptions to justify their claims, which is typical of people who are apologists for fraudulent entities such as Musk.
No, I actually conceded. I gave up the generous assumptions on safety, backed by data, because you claimed we couldn't generalize from that data, and I agreed that doing such a generalization would be in some ways misleading.
This is what I mean by a lack of understanding on your part. Even in the post where you are telling me that you understood me just fine, but find my ideas to be stupid, you don't actually address what I'm saying.
As a consequence, I'm not going to continue this conversation. Have a nice day.
> The paint on the right edge of the gore marking, as seen in Google Maps, is worn
Indeed; here's a close-up of the critical area. Note that the darker stripe at the bottom of the photo is NOT the crucial one; the one the car was supposed to follow is the much more faded one above it, which you can barely see:
(Note that I'm not blaming the faded paint; it's a totally normal situation on freeways that it's entirely the job of the self-driving car to handle correctly. But I think it was what triggered this fatal flaw.)
Wow. The start of that concrete lane-divider looks incredibly dangerous for a highway. It pretty much just "starts", with only some sort of object that they call a "crash attenuator" in front of it. I would imagine its purpose is to dampen/dissipate kinetic energy and maybe deflect oncoming objects from hitting the concrete slab directly.
I don't even see any sort of road-bumps to warn drivers of this dangerous obstacle approaching.
This is pretty apples-to-oranges. You see things like this in a lot of less dense U.S. areas as well. But in the crash example, it's right in the middle of an interchange, with the lane needing to form and rise quickly. Additionally, the gently rising metal barriers aren't safe in many situations because of their ability to launch cars. That's why they've been replaced by crash attenuators and sand/water barrels. Unfortunately the crash attenuator wasn't replaced after a recent (<1 week ago) accident, ensuring the next accident was fatal.
First, your counter-example is, ironically, pretty apples-to-oranges, as it is literally in the middle of nowhere. Meanwhile, the municipality where my interchange is located has a population density 1.5x that of Mountain View.
About the A20: it was built around 1970, inspired by American designs. Something like this would probably not be built today. Meanwhile, the specific ramp where the accident occurred was constructed around 2006.
I do agree that safety measures should be adjusted according to their location; there is indeed no one-size-fits-all solution here.
You're right that my first example is in the middle of nowhere, but there's a good reason for that. American and Dutch city designs differ so much as to make them incomparable. Even NYC has 0.63 vehicles per household (http://www.governing.com/gov-data/car-ownership-numbers-of-v...), and San Jose (the closest to Mountain View I could find) has 2.12. There's a lot more traffic to deal with, and a lot more sprawl, meaning less space for interchanges and long ramps.
At highway speeds, if that "fence" is strong enough to prevent vehicles from falling from the raised left lane into the lower right lanes, then it would seem likely to cut a car in two. I think I'd prefer the crash attenuator.
The intersection in the Tesla crash is pretty subpar by US standards. Most highway off-ramps are as you described. In space-restricted areas we tend to use tons of reflectors, then water/sand barrels or metal crumple barriers, instead of gently rising barriers and a grass infield.
In basically all cases where there's a lot of pavement that isn't a lane there's diagonal lines of some sort that make it very clear that there isn't a lane there. A good chunk of the time there's a rumble strip of some sort.
In rural areas there's less signage, reflectors and barriers but the infield is usually grass, dirt or swampy depending on local climate.
I wouldn't say that's wrong, but it seems to imply "AVs need great roads or bad roads; on mediocre roads they might kill you for no obvious reason, gee, too bad."
Such "pretends to work but actually doesn't" behavior, IMNSHO, would be far worse than "doesn't work there at all".
Granted my experience is my own and shaped by the areas I've lived in, but I'd say the crash barrier is pretty standard by US standards.
A few reflectors, a crumple barrier or some barrels and you've got a highway divider start! Certainly not as lengthy or as well marked as the Dutch example. This one I used to drive by in KC almost daily looks similar to the Tesla accident one (granted, this example does have some friendly arrows in the gore): https://www.google.com/maps/@39.0387093,-94.6774548,3a,75y,1...
I'm in the EU. This road does look a little bit hazardous to me; like it was designed 60 years ago and never updated.
- The area one is not supposed to drive in doesn't appear to be marked. Where I live, it would be painted with yellow diagonal stripes.
- On a high-speed road, there would be grooves on the road to generate noise if you are driving too close to the edge of the lane.
- Paint on the road is rarely the only signal to the driver (because of snow or other conditions that may obscure road markings). There would be ample overhead signs.
- Unusual obstacles would always be clearly visible: painted with reflective paint or using actual warning lights.
- We rarely use concrete lane dividers here. Usually these areas consist of open space and a shallow ditch, so you don't necessarily crash hard if you end up driving in there. There's usually grass, bushes, etc. There are occasional lane dividers, of course, when there's no space to put in an open area. However, those dividers are made of metal; they are not hard obstacles, and they fold or turn your vehicle away if you hit them (and people rarely do, because of the above).
I'm sure there are some dangerous roads here, too, but a fatal concrete obstacle like this, with highway speeds, with almost no warning signs whatsoever, is almost unheard of.
This is a problem with old highways that were upgraded to substandard interstates. That isn't the case here; there was plenty of room to build a safer design.
I had never heard of crash attenuators before, but they seem really impressive! This one was the "SCI smart cushion" model. According to the manual[1], it can reduce the acceleration from a collision at 100 km/h (62 mph) to less than 10 G, so even a direct frontal collision at highway speed would be survivable. That's pretty amazing.
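To put those numbers in perspective, here's a quick back-of-the-envelope check (assuming a constant 10 G deceleration, which a real crash pulse isn't, so treat it as a rough plausibility check only):

    G = 9.81          # m/s^2
    v0 = 100 / 3.6    # 100 km/h in m/s (~27.8 m/s)
    a = 10 * G        # 10 G of deceleration

    stop_distance = v0 ** 2 / (2 * a)   # ~3.9 m
    stop_time = v0 / a                  # ~0.28 s

    print(f"stopping distance ~{stop_distance:.1f} m, stopping time ~{stop_time:.2f} s")

Roughly four meters of crush distance over about a quarter of a second, which is why these cushions need to be several meters long.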
The crash attenuators are usually good enough to save people's lives. There are small grooves in the sides on some of the freeways in CA but they aren’t everywhere.
The problem with the attenuators is that they don't get replaced fast enough, which makes these accidents a lot deadlier.
Are these kinds of lane-dividers not painted with heavy yellow diagonal stripes in the US? Is the divider equipped with reflective material and/or lights to make sure drivers notice it? Is the divider actually necessary, or could it be replaced with gravel so people don't need to hit a concrete wall when they make a mistake?
I don't think a road like this would be possible in most of the EU. Autopilot needs to be fixed, but this road is also super dangerous and probably would not be allowed in the EU.
How often do non-autopilot cars fatally crash here? This does look like a bit of a death trap to me!
> I don't think a road like this would be possible in most of the EU.
I've driven in almost all countries in the union and I'm sure that and worse is readily available in multiple EU countries. While it's true that the EU subsidizes a lot of road construction, local conditions (materials quality, theft, sloppy contractors) have a huge impact on road and markings quality.
Hmm, "possible" was probably too strong of a word. Of course it's possible.
However, I've driven a significant number of hours in Spain, Switzerland, Germany, the UK, Ireland, Denmark, Norway, Sweden, Finland, and Estonia, and I wouldn't say that this kind of concrete divider is "readily available" as a normal part of a high-speed road, lacking the high-visibility markings I outlined in the post above, year after year as a normal fixture. In fact, I don't remember ever once seeing a concrete divider like this in the EU, even temporarily, but please prove me wrong (and maybe we can tell them to fix it!).
At highway speeds, lane dividers are only used when there is a lane traveling in the opposing direction right next to you. There is no point in concrete dividers if all the traffic is traveling in the same direction. At highway speeds, opposing lanes should be divided by a lot of open space and metal fences that don't kill you when you hit them.
Try: Poland, Romania, Hungary, Bulgaria, Greece, Slovenia, Slovakia, the Czech Republic. Those are also part of the EU, and road quality there varies wildly compared to the countries you use as your examples.
> How often do non-autopilot cars fatally crash here? This does look like a bit of a death trap to me!
A non-autopilot car crashed at this exact spot a week earlier. This crumpled a barrier that is intended to cushion cars going off the road here, and contributed to the death of the Tesla driver.
The fact that a human crashed at this exact spot confirms that it really is an unsafe death trap.
I have noticed that, on two-lane non-divided roads, the dividing line (separating you from oncoming traffic) is yellow, while the right-side solid line is white. White means it's safe to cross the line, but yellow means it's dangerous to cross it.
You can see that yellow line at the accident site.
Now, why didn't they start a new yellow line where the lane split? That would give drivers (and software) an important cue: if you are driving down a "lane" with a yellow line on the right, something is seriously wrong!
In the United States, "A yellow line (solid or dashed) indicates that crossing the line will place a driver in a lane where opposing traffic is coming at the driver."
>> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure.
A little off topic, but I'm curious: I usually use "by design" to mean "an intentional result." How do other people use the term? In this case, the behavior is a result of the design (as opposed to the implementation), but is surely not intentional; I would call it a design flaw.
> but is surely not intentional; I would call it a design flaw.
In fact, it is intentional! Meaning that the system has a performance specification that permits failure-to-recognize-lanes-correctly in some cases. This element of the design relies on the human operator to resolve. Once the human recognizes the problem, either they disengage the autopilot or engage the brakes/overpower the steering.
Now, you could argue that the design should be improved and I would agree. But we should perhaps step back and consider some meta-problems here. As others have stated, the functionality cannot deviate significantly from previous expectations without, at a bare minimum, an operator alert, training pamphlet, or disclaimer form. Tesla's design verification likely should be augmented to more comprehensively test real-world scenarios like this.
But the real core issue is that the design approaches this uncanny valley of performing so terribly close to parity with human drivers that human drivers let their guard down. IMO it's the same problem as the fatality in Phoenix w/an Uber safety driver (human driver). When GOOG's self driving program first monitored their safety drivers they found that they didn't pay attention, or slept in the car. IIRC they added a second safety driver to try to mitigate that problem.
I tend to distinguish "Working as intended" and "Working as implemented."
Working as intended: the system works in a basic average-observer human sense
Working as implemented: there were no errors in implementation, and the system is performing within the tolerances of that implementation (but the implementation itself may be flawed, or may be to a design that violates average human expectations).
I think this is proper usage of "by design". The design is that it uses a camera and low-resolution radar and can see lanes and other cars, but not obstacles. So this is an unintended edge case, but the result is per the design.
I find it hard to think that a group of engineers didn't consider stationary objects when _designing_ an autopilot system. And if they did consider it, I would no longer call it a flaw.
Here is a more technical explanation of the limit.
You send a radar signal out, then it bounces off of stuff and comes back at a frequency that depends on your relative motion to the thing it is bouncing back from. Given all of the stationary stuff around, there is a tremendous amount of signal coming back from "stationary stuff all around us", so the very first processing step is to apply a filter that causes any signal corresponding to "stationary" to cancel itself out.
This lets you focus on things that are moving relative to the world around them, but it makes seeing things that are standing still very hard.
Many animal brains play a similar trick. With the result that a dog can see you wave your hand a half-mile off. But if the treat is lying on the ground 5 feet away, it might as well be invisible.
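For a rough sense of the numbers involved (assuming a 77 GHz automotive radar, which is typical; the exact hardware isn't specified here):

    C = 3.0e8         # speed of light, m/s
    F_CARRIER = 77e9  # typical automotive radar carrier, Hz

    def doppler_shift(closing_speed_mps):
        """Frequency shift of the return for a given closing speed."""
        return 2 * closing_speed_mps * F_CARRIER / C

    ego = 30.0  # m/s, roughly 67 mph
    print(doppler_shift(ego) / 1e3)         # ~15.4 kHz: a stationary barrier dead ahead
    print(doppler_shift(ego - 25.0) / 1e3)  # ~2.6 kHz: a lead car doing 25 m/s

A filter that nulls out returns at the shift corresponding to your own ground speed removes the barrier (along with guard rails, signs, and pavement) but keeps the slower-moving lead car.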
That is not accurate. In fact one of the key features of radar processing is to find objects in the clutter, usually by filtering on Doppler shifts.
As far back as the 1970s helicopter-mounted radars like Saiga were able to detect power lines and vertical masts. That one could do so when moving up to 270 knots and weighed 53kg.
Actually it needed to detect a crash barrier against a background of lots of other stuff that was also stationary.
The more easily you can distinguish the signal you want from the signal you don't, the easier it is to make decisions. For radar, that is far, far easier with moving objects than with stationary ones.
Right. Besides, pulse doppler can detect stuff that is stationary relative to the radar anyway.
The real issue is false positives from things like soda cans on the road, signs next to the road, etc. Can't have the car randomly braking all the time for such false positives. As a result, they just filter out stationary (relative to the ground) objects, and focus on moving objects (which are assumed to be vehicles) together with lane markings. This is why that one Tesla ran right into a parked fire truck.
Interestingly, I've discovered one useful trick with my purely camera-based car (a Subaru equipped with EyeSight): if there is a stationary or almost stationary vehicle up ahead that it wasn't previously following, it won't detect it and will treat it as a false positive (as it should, so it doesn't brake for things like adjacent parked cars). But if I tap the brake to disengage adaptive cruise control and then turn the adaptive cruise control back on, it will lock on to the stopped car up ahead.
The problem is not whether the object is moving relative to the radar. It is whether the object is moving relative to all of the stationary objects behind it that might confuse the radar.
It is basically a ROC curve / precision-recall issue. Radar has terrible lateral resolution and even worse, nearly nonexistent, elevation measurement. Potholes, manholes, Coke cans, and ground clutter all look alike, and all of them can in fact be detected as "having a closing velocity equal to my car's own velocity". You want to stop for only very few of all those stationary objects, otherwise you won't drive at all. The problem is you can't classify the relevant ones with radar. Which is why the camera helps, but obviously (for false-positive suppression and a high availability of the autopilot) only if it positively classifies a car's rear.
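As a toy illustration of that trade-off (made-up names and thresholds, not Tesla's actual pipeline): stationary radar returns are only acted on if the camera positively classifies the object as a vehicle rear.

    def keep_target(closing_speed, ego_speed, vision_says_car_rear, tol=2.0):
        """Crude radar/vision gate: drop stationary clutter unless vision confirms a car."""
        ground_speed = ego_speed - closing_speed
        if abs(ground_speed) > tol:       # moving relative to the ground
            return True
        return vision_says_car_rear       # stationary: only keep if vision confirms it

    # A stopped fire truck the camera doesn't recognize gets filtered out...
    print(keep_target(30.0, 30.0, vision_says_car_rear=False))  # False
    # ...while a slower-moving lead car is kept.
    print(keep_target(5.0, 30.0, vision_says_car_rear=True))    # True

The precision-recall tension is all in that last branch: loosen it and you brake for soda cans, tighten it and you sail into a stopped truck the vision system failed to classify.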
No, since you need movement to create the along-track aperture, you would have already moved through the aperture, running over what you are trying to detect.
To follow on, there have already been reports of Autopilot drifting and mismanaging that spot on the freeway (although without this catastrophic result). That fact adds to this description/explanation.
Aside: the follow-another-car heuristic is dumb. It's ultimately offloading the AI/decision-making work onto another agent, an unknown agent. You could probably have a train of Teslas following each other behind a dummy car that crashes into a wall, and they'd all do it. A car 'leading' in front that drifts into a metal pole will become a stationary object and so undetectable.
Does it actually use the position of the car in front to determine where to steer? My Honda will sense cars in front of me but will only use that info to adjust speed when using cruise control.
It will if it can't get a lock on the lane markers. For example, mine does this when navigating through large unmarked intersections; my home town has many where the far-side lanes don't align with the lanes before entering the intersection (due to added left and right turn lanes). My Tesla will follow a car in front as it slightly adjusts position to the offset lanes on the far side (which is very cool), but if there is no car to follow I just take over, knowing it'll not handle it well.
Although I don't work in self-driving cars, I do know a fair amount of ML and AI, and I have to be honest: if my bosses asked me to build this system, I would have immediately pointed out several problems and said that this is not a shippable product.
I expect any system that lets me drive with my hands off the wheel for periods of time to deal with stationary obstacles.
What is being described here, if it's correct, is a literal "WTF" compared to how Autopilot was pitched.
I wouldn't be surprised if the US ultimately files charges against Tesla for wrongful death by faulty design and advertising.
Tesla's Autopilot team is onto (if I recall) its 5th director since the unveiling of HW2 in late 2016. The first director of Autopilot, Sterling Anderson, was dumbstruck when Musk went public with delusional claims about what Autopilot would be capable of, and quit. A slew of top engineers left with him, or not long afterwards. Tesla, showing their true colours, followed up with a meritless lawsuit accusing Sterling of stealing trade secrets.
Makes me wonder about the future of self-driving cars. I would think that the first thing that should be programmed in is to not run into objects that are not moving.
IMO, maybe the roads need to be certified for self-driving. Dangerous areas would be properly painted to prevent recognition errors. Every self-driving system would need to query some database to see if a road is certified. If not, the self-driving system safely disengages.
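A hypothetical sketch of what that check could look like (the database and segment names are invented for illustration):

    # Invented example of a "certified roads" lookup; a real system would
    # query a geospatial database keyed on the car's current position.
    CERTIFIED_SEGMENTS = {"US-101:mi-412", "I-280:mi-21"}

    def may_engage_autopilot(current_segment):
        """Only allow the self-driving system on segments certified for it."""
        return current_segment in CERTIFIED_SEGMENTS

    print(may_engage_autopilot("US-101:mi-412"))  # True: certified, OK to engage
    print(may_engage_autopilot("CA-85:mi-10"))    # False: hand control back to the driver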
Tesla claims their cars have all the hardware required for full self-driving, but their cars do not have LIDAR. Every serious self-driving outfit understands LIDAR to be necessary. Tesla's marketing department would have people believe otherwise of course.
Cruise, who uses the 32-laser Velodynes, avoids highway/freeway speeds because their lidar doesn't have the range to reliably detect obstacles at that distance.
SICK LMS 2xx. The weakest link was the Java microcontrollers we had to use because we were majority-funded by Sun. My favorite part was the old elevator relays we found in a junkyard and used for a bunch of the control system. You could hear the car 'thinking' and know what it was about to do.
I'm involved in a project using the LMS5xx... but on a PLC, so I envy your Java microcontroller! Have you seen SICK's new MRS 6000? Range of 200 m, and 15 degrees vertical! Pricing is similar to the LMS5xx.
>Makes me wonder about the future of self-driving cars. I would think that the first thing that should be programmed in is to not run into objects that are not moving.
My understanding is that Tesla's Autopilot is pretty different from other, more mature self-driving car projects. I wouldn't read too much into it.
I wonder if we'll see it speed-limited. For commutes, if my time is freed up to read/work, then I don't need to be going 70 mph. If I'm travelling across a continent overnight, the gain I'd get from 10+ autonomous hours overnight would beat having to rush when driving personally.
And I agree with certified roads. But even then, if a truck drops cargo in the middle of the lane, you'd want to be in a car that can detect something sitting still in front of you.
Kudos to the NTSB. Events like this and companies like Tesla are exactly why we need regulation and government oversight. Tesla's statement that the driver was given "several warnings" is just a flat out lie.
> This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera
I'm sure Tesla's engineers are qualified and it is certainly easy to second-guess them, but it is beyond me why they would even consider a BW camera in a context where warning colors (red signs, red cones, black and yellow stripes, etc.) are an essential element of the domain.
The cameras in Tesla's non-Mobileye cars are grey-grey-red. Supposedly this gives you higher detail resolution while still being able to differentiate important sign colors.
You can have different cameras for sign recognition if you can't deal without color. But BW cameras have huge benefits, from sensitivity in the optics all the way to processing power in the backend, without much downside.
Oh, you can deal without colors, but you're intentionally depriving yourself of data, as you now have Safety Gray, Danger Gray, Warning Gray and Traffic Gray. Not to worry, those colors probably didn't mean anything important anyway.
Before it even entered the gore area, it likely centered between the actual lane of the highway and the exit lane. Bear in mind, when a lane forks, there is a stretch where the lane is wider than average. And with the gore area lines worn, the car may have missed the gore entirely. Once it was centered on the gore area, presumably it didn't consider the lines under or directly in front of the car to be lane lines.
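In other words, if the lane tracker latches onto the wrong pair of lines, simple centering happily steers for the midpoint of whatever it sees. A toy sketch of the idea (not Tesla's actual controller):

    def steering_target(left_line_x, right_line_x):
        """Lateral target (m from the car's centerline) for naive lane centering."""
        return (left_line_x + right_line_x) / 2.0

    # Normal ~3.6 m lane: aim for the middle.
    print(steering_target(-1.8, 1.8))   # 0.0
    # Gore area: the "lane" keeps widening, but nothing about the geometry
    # flags an error, so the car keeps centering, straight toward the barrier.
    print(steering_target(-2.6, 2.6))   # 0.0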
More relevant in this situation: Mobileye's stated reason for cutting those ties.
>On Wednesday, Mobileye revealed that it ended its relationship with Tesla because "it was pushing the envelope in terms of safety." Mobileye's CTO and co-founder Amnon Shashua told Reuters that the electric vehicle maker was using his company's machine vision sensor system in applications for which it had not been designed.
"No matter how you spin it, (Autopilot) is not designed for that. It is a driver assistance system and not a driverless system," Shashua said.
> Mobileye's CTO and co-founder Amnon Shashua told Reuters that the electric vehicle maker was using his company's machine vision sensor system in applications for which it had not been designed.
Props to Shashua for deciding to put safety above profit. We need people in leadership like this.
> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design
Really? Really?? I mean, if I was designing a self-driving system, pretty much the first capability I would build is detection of stuff in the way of the vehicle. How are you to avoid things like fallen trees, or stopped vehicles, or a person, or a closed gate, or any number of other possible obstacles? And how on earth would a deficiency like that get past the regulators?
Shortly after the accident, a few other Tesla drivers reported similar behavior approaching that exit.
But on the other hand, gores like that can also trick human drivers. Especially if tired, with poor visibility. In heavy rain or whiteout, I typically end up following taillights. But slowly.
Except that software doesn't get tired, and conditions were good. So even if that could trick human drivers, those conditions weren't present and do not apply to the case at hand.
What's your source for the claim that Tesla's system "doesn't detect stationary objects"? From the reference frame of a moving Tesla, both globally stationary and globally moving objects will appear to be in motion.
Teslas on autopilot have collided with many stationary objects that were partially blocking a lane. Known incidents include a street sweeper at the left edge of a highway (China, fatal, video available), a construction barrier in the US (video), a fire truck in the SF bay area (press coverage), a stalled car in Germany (video), a crossing semitrailer (NTSB investigation), and last month, a fire company truck in Utah.
Yes, but the radar has poor angular resolution (though a good idea of relative velocity), so it cannot tell the difference between a stationary object at the side of the road and one in the middle of it. It must therefore ignore all stationary objects (by ignoring all objects with an apparent velocity approximately equal to the speed of the vehicle) in order to avoid constant false positives.
It's good at determining whether something is moving towards it or away from it, but bad at determining where that object is; whether it's directly in front or slightly to the left or far to the left. Its "vision" in that sense is blurry.
Poor angular resolution means you can't tell if an object is at 12 o'clock vs 1 o'clock. It means you can tell there are things, but you don't accurately know what angle they're at.
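To put a number on that blurriness (the 5-degree resolution figure below is an assumption for illustration, not a spec for Tesla's radar):

    import math

    range_m = 100.0        # distance to the stationary object
    azimuth_res_deg = 5.0  # assumed angular resolution

    lateral_uncertainty = range_m * math.tan(math.radians(azimuth_res_deg))
    print(f"~{lateral_uncertainty:.1f} m of lateral uncertainty at {range_m:.0f} m")  # ~8.7 m

A US freeway lane is about 3.7 m wide, so at that range the return could be in your lane, the next lane over, or on the shoulder; the radar alone can't tell.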
The NTSB report said that the system didn't apply automatic emergency braking. That either means it didn't detect the stationary object or chose not to brake for it.
You can argue something between those two options, but ultimately it is just a semantic argument (e.g. "it just chose to ignore it" which is effectively the same as a non-detection, since the response is identical).
> The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.
True, but not one into which the car should have merged. Although crossing a solid white line isn't illegal in California, a solid line does mean that crossing it is discouraged for one reason or another.
I love seeing the advances in tech, but it’s disheartening to see issues that could have been avoided by an intro driver’s ed course.
While the anti-Tesla news is bad, Tesla needs to be more clear that this really is a failure of Autopilot and their model - they can't expect a human who has gotten used to something working perfectly to catch the moment it fails. Clearly his hands were close to the wheel in the very recent past (Tesla is famous for not reliably detecting hands on the wheel).
I'm hoping google can deliver something a bit safer.
I don't get it. The system relies on two kinds of sensors - radar for rear-ends and optical cameras for broader decision making - so that location confused the cameras and that was it? The car has no ability to understand its surroundings beyond crude parsing of the visual field (trained ML, I suppose)...
Mobileye (the EyeQ3, which is in AP1) does detection on-chip, including things like road sign recognition; the "software" part is nearly nonexistent for them. It's more of a configuration than some sort of software ANN model like what Tesla is using with AP2 and the NVIDIA Drive PX.
Wow, Tesla doesn't have auto emergency braking? Even my Kia has that - adaptive cruise for following cars, auto emergency braking for stationary objects (and when the car in front of you suddenly slows down). Yeah, I'll keep my Kia...
My 2015 S70D (AP1 presumably) does start complaining if it thinks I'm about to collide with a stationary vehicle. Whether it would actually stop the car is not something I have had the nerve to test. Perhaps I should find a deserted car park and put out some cardboard boxes and see what happens. The problem is that cardboard doesn't reflect radar well.
It does, but it cannot detect stopped objects, which apparently is a standard limitation. Besides, AEB is made to reduce crash speed; the car will still crash.
As a side note, I find the lane markings in the US confusing. The "gore sections" in Europe are filled with stripes, so there is really no way to mistake them for a lane.
As a driver on both continents, I would respectfully correct you to "are supposed to be filled with stripes". Alas, even as seen here, markings deteriorate - on both sides of the pond.
"Unless the road marking is 105% perfect, it's never the fault of the autopilot, but look, autonomous driving!" is just pure marketing, without any substance to back it.
Wonder if the barriers can be modified to look like another car to these systems, but still remain highly visible and unambiguous to human drivers?
Part of the overall problem is that these roads were not designed for autonomous driving. This is much like how the old paver roads were fine for horses, but really bumpy and hard on car wheels and suspensions.
Over time, we adapted roads to the new tech. That needs to start happening now too.
Machine vision, as well as LIDAR, is perfectly capable of detecting these barriers.
The fault here isn't with road design. The fault here is with Tesla shipping Autopilot without any support for stationary objects, AND their delusional (and what should be considered criminally reckless) decision not to use LIDAR.
A car without ABS or power braking is not legal to be sold today. We need to apply those standards here: anything more than cruise control (where the driver understands they need to pay attention) needs to have certain safety requirements.
I think they rely (more) on radar now since the "I can't see a white truck" incident. The camera is useless if the sun is shining into it; the radar will still work but is useless for stationary objects. I have no idea how Tesla intends to solve this "small" issue.
Agreed it's gross negligence resulting from pure arrogance. Makes the cult of Musk so much less savory and one has to wonder how much crap there is beneath the shiny exterior in all of his projects.
I wonder how soon radar reflecting paints will start being used on roadways and barriers, etc. That seems like it would be a general benefit to all brands of cars with auto drive.
At the end of the day we want to know if the autonomous systems are safe. Policy decisions will end up depending on that determination. That requires clear definitions for what constitutes failures and accurate gathering of data.
These cars aren't crashing into stationary objects that don't have a radar return. The software in the cars is deliberately filtering out the radar returns of stationary objects. It wouldn't matter how many radar reflectors you slapped on the barrier, the Tesla would ignore the radar return of the stationary barrier.
>That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design
I wouldn't say it's by design or expected behaviour, because if a Tesla approached a stopped vehicle, the expectation would be that the car would stop.
The cruise control was set to 75. 8 seconds before the crash, it was following a vehicle at 65 mph. It steered left, then was not following the vehicle anymore, and accelerated to try and get back to the set speed.
In CA traffic (and many other states) driving at the speed limit is likely more dangerous than driving with the flow of traffic (roughly 10 MPH over the limit - sometimes more than that).
One of the options for Autopilot is that you can tell it "Never go more than X MPH over the speed limit" with a common setting being a few miles an hour over.
This statement is not quite correct. The setting indicates the speed, as an absolute offset from the currently detected speed limit, at which the car will emit audible or visual warnings that the driver has set the cruise-control speed too fast. That setting is also the speed for a specific gesture that sets the cruise control to the maximum speed for the circumstances. E.g., if the posted speed limit is 45 and the setting is 5, then the gesture sets cruise control to 50 mph.
The car never adjusts the cruise-control set speed by itself, with one exception: if the current road has no center divider, then it clamps the current speed to no more than the posted speed limit + 5 mph. The term "clamps" is in the programming sense: if cruise-control is already set below that speed, nothing changes.
The car never increases the cruise-control set speed. Only the driver can do that.
In other words, the driver had already set the cruise control to 75 mph and likely had the setting at speed limit + 10, which is aggressive. The reason the car accelerated is because it determined there were no cars ahead of it traveling at a speed lower than the set speed. Unfortunately, that conclusion was absolutely correct.
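Based on that description (assumed names; a sketch of the behavior as described in this thread, not Tesla's code), the set-speed logic amounts to something like:

    def effective_set_speed(driver_set_mph, posted_limit_mph, has_center_divider):
        """Clamp, never raise: the car only ever lowers the effective set speed."""
        if has_center_divider:
            return driver_set_mph
        return min(driver_set_mph, posted_limit_mph + 5)

    print(effective_set_speed(75, 65, has_center_divider=True))   # 75: divided highway, no clamp
    print(effective_set_speed(75, 45, has_center_divider=False))  # 50: clamped to 45 + 5

With a 75 mph setting on a divided highway and no slower lead car in view, accelerating back toward 75 is exactly what such a controller would be expected to do.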
"This is the Tesla self-crashing car"
That's a harsh and inaccurate statement. While I agree that Tesla didn't handle the PR well, we should acknowledge that every system is prone to fail at some point. And the fact that we do not hear about non-autopilot car crashes due to system failure as much as we hear about Tesla's is not an indicator that it doesn't happen.
“Car firmware shouldn’t be open source, that’s dangerous.”
There’s no way in hell Tesla would have gotten away with selling this for so long if their users were allowed to read the unobfuscated source.
The fact that people are given no control over these things that can kill them while the manufacturers can just mess around with it without any real oversight is absolutely insane. I really don’t think the average IRC lurker who could figure out how to compile the firmware could be any more dangerous than the “engineers” who wrote it in the first place.
I think we need some more details in order to conclusively blame Autopilot for this death. I would like to see the full report, and I really do hope that Tesla can see their way to coming back into the fold and cooperating with the NTSB on a deeply technical, fact-based analysis of what happened here. For one thing, we do need to know more about the broken crash attenuator and the impact it had on the severity of the incident.
All of that being said, I still think Tesla has mostly the right approach to their Autopilot system. There is an unacceptably high number of crashes caused by human error, and getting to autonomous driving as fast as possible will save lives. It is virtually impossible to build a self-driving system in a lab; with all currently known methods you must have a large population of vehicles training the system. The basic calculus of their approach is that the safety risk of not getting to autonomous driving sooner is greater than the risk of failures in the system along the way to getting there. It is admittedly a very fine line to walk, but I do see the logic in it.
I do think that Tesla could do more to educate the users who are using early versions of the software.
[1] https://goo.gl/maps/bWs6DGsoFmD2