This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software. So what is the right way to handle these updates? You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer. Toyota had a similar problem a few years ago and did a voluntary recall. That means many of those cars with buggy brake systems were on the road for years after a potential fix was available and were driven for billions of potentially unsafe miles.
>This isn't an issue specific to Tesla as all automakers are now making cars that are more and more dependent on software.
Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is vigorously tested for safety issues (because most car manufacturers understand a fault with such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.
>So what is the right way to handle these updates?
The way that other vehicle manufacturers (car, airplane, etc) have been doing it for decades is a pretty good way.
>You mentioned a clear flaw with OTA updates, but there are also numerous advantages. For example, the recent Tesla brake software issue was fixed with an OTA update. That immediately made cars safer.
There is no evidence that said OTA update made Tesla cars any safer. There is evidence that similar OTA updates have made Tesla cars more unsafe.
The brake OTA that you mentioned has actually potentially done more harm than good. Tesla owners have been reporting that the same update made unexpected changes to the way their cars handle/accelerate in addition to the change in braking distance. These were forced, unpredictable changes that were introduced without warning. When you're driving a 2 ton vehicle at 70mph, being able to know exactly how your car will react in all situations, including how fast it accelerates, how well it handles, how fast it brakes, and how the autopilot will act is crucial to maintaining safety. Tesla messing with those parameters without warning is a detriment to safety, not an advantage.
>Cars have been dependent on software for a long time (literally decades). This isn't something new. Even combustion engine cars have had software inside of them that controls the operation of the engine, and this software is vigorously tested for safety issues (because most car manufacturers understand a fault with such software could result in someone's death). Tesla seems to be the only major car manufacturer that has a problem with this.
The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles. I wouldn't be surprised if their Lane Keeping Assist (LKA) systems have similar problems.
>WARNING
>When Pilot Assist follows another vehicle at speeds over approx. 30 km/h (20 mph) and changes target vehicle – from a moving vehicle to a stationary one – Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed.
>The driver must then intervene and apply the brakes.
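To illustrate why: as far as I understand it, most of these systems lean heavily on radar for TACC, and radar returns that aren't moving relative to the ground tend to get filtered out, because otherwise every sign, bridge, and guardrail would trigger phantom braking. Here's a deliberately over-simplified sketch of that kind of heuristic (the function, field names, and threshold are all hypothetical, not any manufacturer's actual logic):

    def select_follow_target(radar_returns, ego_speed_mps):
        """Pick which radar return the TACC should follow (toy illustration)."""
        STATIONARY_THRESHOLD_MPS = 1.0  # hypothetical cutoff

        moving_targets = []
        for r in radar_returns:
            # Radar measures speed relative to our car; add our own speed
            # to estimate the object's speed over the ground.
            ground_speed = ego_speed_mps + r["relative_speed_mps"]
            if abs(ground_speed) < STATIONARY_THRESHOLD_MPS:
                # Looks like stationary clutter (signs, bridges, parked cars).
                # A stopped vehicle in our lane lands in this bucket too.
                continue
            moving_targets.append(r)

        # Follow the nearest remaining moving object, if any.
        return min(moving_targets, key=lambda r: r["range_m"], default=None)

If the car you were following changes lanes and the next thing in your lane is already stationary, it never survives that filter, which is exactly the scenario the Volvo manual warns about above.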
This comparison just sold me on how morally wrong Tesla's behavior is. Intentionally misleading customers by marketing a feature called Autopilot that is only a marginal improvement on what other cars already offer. What if Volvo started calling their (clearly not independent) feature Autopilot and saying it was the future of hands-free driving? Seems inexcusable.
>Super Cruise is not a crash avoidance system and will not steer or brake to avoid a crash. Super Cruise does not steer to prevent a crash with stopped or slow-moving vehicles. You must supervise the driving task and may need to steer and brake to prevent a crash, especially in stop-and-go traffic or when a vehicle suddenly enters your lane. Always pay attention when using Super Cruise. Failure to do so could result in a crash involving serious injury or death.
Riffing off the parallel thread about Google AI and how "corporations are controlled by humans" and can have moral values - no, corporations are controlled primarily by market forces. When Tesla started branding lane assist as Autopilot, it put market pressure on others to follow suit. Hence, I'm absolutely not surprised by this ad and the associated warning in the manual.
Ideally, yeah, every manufacturer would have to take all the puffery out of their marketing, or better yet, talk about all the negatives of their product/service first, but I doubt I'll ever see that.
This article portrayed Super Cruise as something qualitatively different, based on the maps of existing roadways. I'm not sure if they've also considered integrating the multiple systems involved in driver assistance. I'm curious if Tesla has either for that matter.
I’m opposed to over-regulation of any sort; however, it seems obvious that vehicle manufacturers need to do a better job of informing consumers about the driver assistance capabilities of modern vehicles. Something similar to the health warnings on cigarette packs.
> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.
Software should not be driving a car into any of them. I think LIDAR would have seen the obstacle, but as I understand it, the Tesla that crashed didn't have it.
LIDAR probably would have seen the obstacle and avoided it, but so would a human driver who was operating the vehicle responsibly and correctly. It sucks that people treat Level 2 systems as Level 3 or 4, but the same thing applies to many convenience features in a car (cruise control, power brakes, etc.). There's always going to be some bozo doing what they shouldn't be doing with something.
I'd love to see LIDAR on consumer vehicles, but AFAIK it's prohibitively expensive. And to be fair, even Level 4 autonomous vehicles still crash into things and kill people.
Last but not least, every semi-autonomous system all the way back to Chrysler's "AUTO-PILOT" has had similar criticisms. People in the past even said similar things about high speed highways compared to other roads WRT attention.
> The TACC offered by most (if not all) manufacturers can't differentiate between the surroundings and stopped vehicles.
Literally every car I have driven equipped with cruise control and collision avoidance (TACC) hits the brakes and slows down to 20-ish km/h if it senses ANYTHING moving slower (including stationary objects) in front of the car on a possible collision path.
This really affects the nature of the situation. 20 years ago, cars contained microcontrollers with a tiny bit of code which was thoroughly reviewed and tested by skilled professionals. Today, all cars run so much code, even outside of the entertainment system, that the review and testing just can't be the same. (And there's way more programmers, so the range of skill and care is also much wider.)
When the Toyota electronic throttle "unintended acceleration" accidents were in the news, the software was described as a "big bowl of spaghetti", but NHTSA ultimately determined that it was not the cause of the problems. It was drivers using the wrong pedal.
I've long been curious about the "big bowl of spaghetti" comment (and all the other criticisms made by the experts who inspected Toyota's code). There were some extremely serious accusations which don't seem consistent with the fact that the vast majority of Toyotas on the road aren't showing problems caused by their MCU's spaghetti code.
AI takes it to a whole new level. Neural networks are black boxes; they can't be reviewed. You feed in your training data, you test it against your test data, and just have faith that it will respond appropriately to a pattern that's slightly different from anything it's seen before. Sometimes the results are surprising.
That's my biggest problem with AI and neural networks. You can't really measure progress here. If you wanted the same safety standards as for every other automotive software you'd have to test drive for hundreds of thousands of kilometres after every change of parameters, because there's no way to know what has changed about the AI's behavior except for testing it thoroughly.
Compare this to classic engineering where you know the changes you've made, so you can rerun your unit tests, rerun your integration tests, check your change in the vehicle and be reasonably sure that what you changed is actually what you wanted.
The other approach to autonomous driving is to slowly and progressively engineer more and more autonomous systems where you can be reasonably sure to not have regressions. Or at least to contain your neural networks to very very specific tasks (object recognition, which they're good at), where you can always add more to your test data to be reasonably sure you don't have a regression.
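To make that last point concrete, here's a minimal sketch of the kind of frozen-test-set regression gate I have in mind for a perception model: a retrained network only ships if it clears an absolute bar and doesn't do worse than the currently deployed model on the same fixed cases. The `predict` interface, the 98% floor, and the names are all made up for illustration:

    def accuracy(model, frozen_cases):
        """Fraction of the frozen, labelled test cases the model gets right."""
        correct = sum(1 for inputs, label in frozen_cases
                      if model.predict(inputs) == label)
        return correct / len(frozen_cases)

    def may_ship(candidate, deployed, frozen_cases, floor=0.98):
        """Gate a retrained model behind a fixed regression suite.

        The candidate must clear an absolute floor AND must not regress
        relative to the model already running in the fleet.
        """
        cand_score = accuracy(candidate, frozen_cases)
        base_score = accuracy(deployed, frozen_cases)
        return cand_score >= floor and cand_score >= base_score

It's crude (a fixed test set can be overfit over time), but at least a regression on known-hard cases gets caught before the update rolls out, which is more than "train, eyeball, ship" gives you.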
I don't think we'll see too many cars being controlled by neural networks entirely, unless there's some huge advancement here. Most of the reason we see more neural networks now is that our computing power has reached the ability to train sufficiently complex NNs for useful tasks. Not because the math behind it advanced that much since the 60s.
> There is no evidence that said OTA update made Tesla cars any safer.
That particular OTA update significantly shortened braking distances: it cut the vehicle's 60 mph stopping distance by a whole 19 feet, to 133 feet, which is about average for a luxury compact sedan. That's a safer condition, IMO, and I'm not sure how to argue that it doesn't make the car safer.
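For a sense of scale, here's a back-of-the-envelope check of what those numbers imply, assuming constant deceleration from 60 mph and taking the pre-update distance as 133 + 19 = 152 ft:

    V = 88.0   # 60 mph expressed in feet per second
    G = 32.2   # gravitational acceleration, ft/s^2

    for label, distance_ft in [("before update", 152.0), ("after update", 133.0)]:
        decel = V ** 2 / (2 * distance_ft)  # from v^2 = 2*a*d
        print(f"{label}: {decel:.1f} ft/s^2  (~{decel / G:.2f} g)")

That works out to roughly 0.79 g of average deceleration before versus 0.90 g after, so whatever the root cause was, 19 feet at 60 mph is not a rounding error.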
> being able to know exactly how your car will react in all situations
If one depends on intimate knowledge of his own car for safety, then he's likely already driving outside the safety envelope of the code, which was written to provide enough safety margin for people driving bad cars from 40 years ago.
I didn't say no car has ever relied on software. I said cars are becoming more reliant on software. I don't think that is a controversial statement. I also don't think it is controversial to say that other automakers also occasionally ship buggy code. The Toyota brake issue I mentioned in the previous post is one example.
Additionally, the argument that we should continue to handle updates this way simply because we have done it this way for decades is the laziest possible reasoning. It is frankly surprising to see that argument on HN of all places.
As for the evidence that OTA updates can make things safer, this is from Consumer Reports:
>Consumer Reports now recommends the Tesla Model 3, after our testers found that a recent over-the-air (OTA) update improved the car’s braking distance by almost 20 feet. [1]
That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.
> That update going out immediately OTA is going to save lives compared to if Tesla waited for the cars to be serviced like other manufacturers. I don't think you can legitimately argue against that fact.
There is again no evidence to support this fact. There is evidence that Tesla's OTA software updates have introduced safety issues with Tesla cars. That's a fact.
Better braking distance is of course a good thing but if anything, the fact that Teslas were on the road for so long with a sub-par braking distance is more evidence of a problem with Tesla than it is evidence of a benefit of OTA updates.
The other factor in that brake story is that it took mere days for Tesla to release an update to "fix" the brakes. This isn't a good thing. The fact that it was accomplished so quickly means that the OTA update was likely not tested very well. It also means that the issue was easy to fix, which calls into question why it wasn't fixed before. It also highlights the fact that Tesla, for some reason, failed to do the most basic testing on their own cars for braking distance. Comparing the braking distance of their cars should have been one of the very first things they did before even selling the cars, but apparently it took a third party to do that before Tesla was even aware of the issue. This doesn't inspire confidence in Tesla cars at all.
I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
EDIT: The comment I was replying to was heavily edited after I responded. It originally said something along the lines of improving braking distance is good but there is no evidence that it would improve safety.
> if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
Nobody is arguing that. We're arguing that there is no evidence the Tesla OTA update made the cars safer on net.
You're trying to set up some sort of "OTA updates are dangerous in general, but this one is clearly good, how do we balance it" conversation, but the problem is, this OTA update is not clearly good. OTA updates are dangerous in general, and also in this case in specific. You need to find a better example where there's actual difficult tradeoffs being made, and not just a manufacturer mishandling things.
> I simply don't know what to say to you if you are going to legitimately argue that shaving 20 feet off of braking distance will not make a car any safer.
If the car can’t see the obstacle, the braking distance simply does not matter.
And yet again, the same OTA update changed other parameters about the way the car drives that do make it less safe. I don't know why you're trying to ignore that fact. If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no
As for your edit, you clearly misread the original comment, which is why I edited it for you. I said that there was no evidence that the OTA made the car safer. Please try to read with better comprehension instead of trying to misrepresent my comments.
> If I drastically improve the braking distance of a car, but in the same update I also make it so that the car crashes itself into a wall and kills you, is the car safer? Hint: no
You don't have enough information to come to that conclusion.
It's quite common to have to brake hard to avoid a collision. It's pretty uncommon to see the specific scenario triggering this crash behavior.
I never denied that. Your comment pointed out a problem with OTA updates and I agreed, calling it "a clear flaw". I pointed out a benefit of OTA updates and then asked an open-ended question about how they should be handled. You responded by attacking the example I provided. I was looking to debate this serious issue, not to get into a pissing match about it.
I never said you denied it, I said you ignored it. If you wanted to debate this serious issue, then maybe you shouldn't keep ignoring one of the crucial cornerstones of the discussion. If you're unwilling to discuss points that challenge your own opinion, then it's clear that you're just trying to push an agenda rather than have an actual discussion.
> So what is the right way to handle these updates?
Avoid doing them in the first place? It's not like bit rot is - or should be - a problem for cars. It's a problem specific to the Internet-connected software ecosystem, which a car shouldn't be a part of.
So basically: develop software, test the shit out of it, then release. If you happen to find some critical problem later on that is fixable with software, by all means fix it, again test the shit out of it, and only then update.
If OTA updates on cars are frequent, it means someone preferred to get to market quickly instead of building the product right. Which, again, is fine for bullshit social apps, but not fine for life-critical systems.
Tesla does test the shit out of it before they release a patch. The problem is that users' expectations of the system's performance suddenly get out of sync with what the car is going to do.
Part of me wonders if there should be a very quick, unskippable, animated, easy-to-understand explanation of the patch notes before you can drive whenever they make material changes to core driving functionality.
While using Autopilot (Big A), there should be a loud klaxon every 30 seconds followed by a notification "CHECK ROAD CONDITIONS" and "REMAIN ENGAGED WITH DRIVING" in the same urgent tone of an aircraft autopilot (small a) warning system.
Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
I don't know why Tesla defenders keep repeating this FUD:
> Tesla did make a mistake calling it Autopilot, but only because regular folk don't understand that aircraft autopilot is literally a heading, altitude, and speed, and will not make any correction for fault. Aircraft autopilot will fly you straight into a mountain if one happens to be in the way.
This is beyond broken; it's a fundamental misunderstanding of how physical products are supposed to work. Software people have gotten used to dismissing the principle of least astonishment because they know better (and no user got killed because of a Gmail redesign), but this is a car: hardware with its user on board, a lot of kinetic energy, and all of it relying on muscle memory.
I'd vote in favor of such explanation, though this alone may not be enough to cancel out possibly thousands of hours of experience with the previous system behavior.
The first thing about doing it right is to make sure it has been developed in an appropriate manner for safety-critical systems, which includes, but is by no means limited to, adequate testing.
The second thing is to require the owners to take some action as part of the installation procedure, so that it is hard for them to overlook the fact that it has happened.
The third thing is that changes with safety implications should not be bundled with 'convenience/usability' upgrades (including those that are more of a convenience for the manufacturer than for the user.) To be fair, I am not aware of Tesla doing that, but it is a common enough practice in the software business to justify being mentioned.
And it has to be done securely. Again, I am not aware of Tesla getting this wrong.
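Concretely, I'm imagining something like the following: every update declares what kind of change it contains, safety-relevant changes can't be bundled with convenience ones, and a safety-relevant update doesn't activate until the owner has explicitly acknowledged it. A rough sketch, with entirely made-up field names and categories:

    from dataclasses import dataclass, field

    SAFETY_CATEGORIES = {"braking", "steering", "driver_assist"}  # hypothetical taxonomy

    @dataclass
    class UpdatePackage:
        version: str
        categories: set = field(default_factory=set)  # e.g. {"braking"} or {"infotainment"}
        release_notes: str = ""
        owner_acknowledged: bool = False

    def validate(update: UpdatePackage) -> None:
        # Third point above: never mix safety changes with convenience changes.
        safety = update.categories & SAFETY_CATEGORIES
        convenience = update.categories - SAFETY_CATEGORIES
        if safety and convenience:
            raise ValueError("safety changes must ship separately from convenience changes")

    def may_install(update: UpdatePackage) -> bool:
        # Second point above: the owner takes a deliberate action, so a change
        # in driving behavior cannot slip in unnoticed overnight.
        validate(update)
        if update.categories & SAFETY_CATEGORIES:
            return update.owner_acknowledged
        return True

None of this guarantees the update itself is any good, but it at least makes the second and third points enforceable rather than a matter of manufacturer goodwill.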
Great that they fixed the brakes OTA. But how exactly did the inferior braking algorithm get on the Model 3 in the first place? And what are the chances of a regression?
While I like Tesla, I find the praise for Tesla's fast OTA update for its braking problem to be freaking terrifying.
A problem with variable stopping distances is the sort of thing that should be blindingly obvious in the telemetry data from your testing procedures. Brake systems, and ABS controls in particular, are normally rigorously tested over the course of 12-18 months in different environments and conditions.[0] That Tesla completely missed something like that suggests either that their testing procedures are drastically flawed (missing something that CR was able to easily and quickly verify in different cars), that their software development process isn't meshed with their hardware testing and validation, or a combination of the two. None of those options is a good one.
The fact that Tesla was able to shave 19 feet off their braking distances is horrifying. After months of testing different variations and changes to refine your braking systems, shaving off an extra 19 feet should be impossible. There shouldn't be any room to gain extra inches without making tradeoffs in performance in other conditions that you've already ruled out making. If there's an extra 19 feet to be found for free after a few days of dev time, you did something drastically wrong. And that's completely ignoring physical testing before pushing your new update. Code tests aren't sufficient; you're changing physical real-world behavior, and there's always a tradeoff when you're dealing with braking and traction.
Tesla is being praised by consumers and the media because, hey, who doesn't like the idea that problems can be fixed a couple of days after being identified? That's great. In this case, Tesla literally made people's cars better than they were just a few days before. But it trivializes a problem with very real consequences, and I hope that trivialization doesn't extend to Tesla's engineers. Instead of talking about a brake problem, people are talking about how great the fast OTA update for the problem is. Consumers find that comforting, as OTA updates can make what's otherwise a pain in the ass (recalls and dealer visits for software updates) effortless.
Hell, I'm a believer in release early, release often for software. Users benefit, as do developers. At the same time, the knowledge that you can quickly fix a bug and push out an update can be a bit insidious. It's a bit of a double-edged sword in that it gives you a sense of comfort that can bite you in the ass as it trivializes the consequences of a bug. And when bug reports for your product can literally come in the form of coroner's reports, that comfort isn't a good thing for developers.
At least you can rely on 99% of humans to try to act according to self-preservation instinct MOST of the time.
Nope. I see tremendous numbers of distracted drivers who don't even realize there's a threat. I also see many utterly incompetent drivers who will not take any evasive action, including braking, because they simply don't understand basic vehicle dynamics or that one needs to react to unexpected circumstances.
Updates should fix problems, not create new ones. The tried and true method for Silicon Valley bug fixing is to ship it to the users and let them report any issues. This is wholly insufficient for car software. Car software should seldom have bugs in the first place, but OTA updates should never, ever introduce new bugs to replace the old ones.
> So what is the right way to handle these updates?
Require updates to be sent to a government entity, which will test the code for X miles of real traffic and then release the updates to the cars. Of course, the costs of this are to be paid by the company.
Current development of cars is done with safety as a paramount concern. There is no need to filter everything through a government entity. However, the automobile companies are responsible for their design decisions. This should absolutely apply to software updates. That does mean complete transparency during investigations, including a complete audit trail of every software function invoked prior to a crash.
So, no filter, but government penalties and legal remedies should be available.
"Current development of cars is done with safety as a paramount concern."
That's very much not the impression I get from Tesla. Instead I see the following:
- Get that thing to market as quickly as possible. If the software for safety-critical systems is sub-par, well, it can be fixed with OTA updates. That's fine for your dry cleaning app; for safety-critical software it's borderline criminal.
- Hype features far beyond their ability (Autopilot). Combine this with OTAs, which potentially change the handling of something that is not at all an autopilot but actually some glorified adaptive cruise control. For good measure: throw your customers under the bus when inevitable and potentially deadly problems do pop up.
- Treat safety issues merely as a PR problem and act accordingly, getting all huffy and insulted and accusing the press of fake news when such shit is pointed out.
I could go on. But such behavior to me is not a company signaling that safety is of paramount concern.
"That does mean complete transparency during investigations, a complete audit trail of every software function invoked prior to a crash."
Let's just say that Tesla's very selective handling and publication of crash data does not signal any inclination for transparency.
I agree. I think companies should be losing serious money and individual should be losing jobs over crashes like these, much like in the aircraft sector.
Testing is absolutely necessary. We're talking about millions of cars here, which are potentially millions of deadly weapons. You don't want companies pushing quick fixes, which turn out to contain fatal bugs.
That sounds like a great way to stall all further progress, which has a horrific human cost of its own.
Government has a valid role to play, though, by requiring full disclosure of the contents of updates and "improvements," by setting and enforcing minimum requirements for various levels of vehicle autonomy, and by mandating and enforcing uniform highway marking standards. Local DOTs are a big part of the problem.
Yeah, because we know governments are really good at giving certifications and doing tests that mean something. Let's put every design decision in the hands of governments then! Or better, nationalize car companies! Problem solved?
Flying in an airplane is safe because of direct intervention by the government.
Cars have been made safe for us also by direct intervention by the government. From important things like mandating seat belts and crash safety to smaller things like forcing the recall of tens of millions of faulty air bag inflators.
These are just a few of the many things Uncle Sam has done to make things safer for us.
Isn’t flying mostly safe because of post hoc safety analysis followed by operating requirements? I don’t think the FAA tests every change made to aircraft before they can fly?
First, any change in design (or in configuration, in the case of repairs) is backed by PEs or A&P mechanics who sign off on the changes. Their career rides on the validity of their analysis so that's a better guarantee than some commit message by a systems programmer.
Second, the FAA basically says "show us what you are changing" after which they will absolutely require physical tests (static or dynamic tests, test flights, etc., as appropriate to the scope of change).
And I'd say flying is so safe mainly because of the blameless post-mortem policy that the American industry instituted decades ago and which is constantly reinforced by the pros at the NTSB. It's a wonderful model for improvement.
I think that the FAA's role is theoretically as you express, but in practice, there is significantly less oversight (especially direct oversight) than implied.
As an example, the crash of N121JM on a rejected takeoff was due (only in part) to a defective throttle quadrant/gust lock design that went undetected during design and certification, in part because it was argued to be a continuation of a conformant and previously certificated design. (Which is relevant to the current discussion in that if you decide to make certification costly and time-consuming, there will be business and engineering pressure to continue using previously certificated parts, with only "insignificant changes".)
If I, as an engineer, sign off on changing the screws on the flaps for cheaper ones and the plane crashes because the flaps come loose due to the screws being unable to handle the stress, my career can be assumed over if I have no good explanation.
If an engineer signs off a change they sign that they have validated all the constraints and that for all they know the machine will work within the specs with no faults.
If a software engineer commits code, we may run some tests over it and look it over a bit. That's fine. But if the software ends up killing anyone, the software engineer is not held responsible.
And yes, to my knowledge, every change to an aircraft is tested before flight, or at least validated by an engineer who understands what was just changed.
In any case, let a third party control the actual updating, so that we know when and how often cars are updated. Require at least X months of time between code submission and deployment to cars. We don't want a culture of "quick fixes".
This is a popular idea: Just put someone in charge! It ignores the incentives for those gatekeepers, who are now part of the system. In practice I don't think you're going to get better updates, you're going to get "almost no updates".
It took years for the FDA to investigate Theranos, in case you are not aware. And they only did so when the press started digging. Poor, poor track record.
There's a lot of sunlight between letting pharma companies run rampant and having the FDA. One could imagine private non-profit testing and qualification standards organizations along the lines of Underwriters Laboratories.
It is not completely out of this world to imagine multiple private entities involved in pharma dossier reviews instead of having the FDA. The FDA employs tons of private consultants anyway so they bring virtually no value.
Certainly, communication of any changes to all drivers inexperienced with the latest version; ideally, user interaction required for the update to be applied; and potentially even the ability to reverse the changes if they are unhappy with them.
At the very _least_, when you introduce a change in behavior, have it be enabled by the user through the dashboard. This creates at least one touch point for user education.