This is probably happening a lot more than the number of reports mentioned in the article indicates. It happens to me every other day in my Model 3. It's also not binary - sometimes it slows down more quickly than other times, or stops too suddenly when a car is stopped in front of you. Seems real easy to get rear-ended.
I've only got autopilot, not FSD. It's very conservative. It always considers someone to be in front of you even if they've mostly moved out of your lane into the next lane, or they swerve slightly into your lane. I suspect the engineers erred on the side of being rear-ended rather than rear-ending cars in front.
You continue using this feature despite these errors occurring daily? The way you describe it makes autopilot seem like an inexperienced teenage driver; I think whatever marginal safety benefit you get from it might be erased by the life-shortening stress such erratic car behavior must induce.
Can't speak for OP, but I'm supposed to be paying attention at all times while autopilot is enabled; as soon as I feel the vehicle may phantom brake, I apply more pressure to the accelerator to maintain speed.
I believe the NHTSA is going to require Tesla to retrofit radar on all vehicles to mitigate the deficiencies of Tesla Vision.
(submitted lengthy dash cam videos and accelerometer data to the NHTSA as a complaint)
I have an early release Model 3 with radar. I suspect they stopped using the radar in a software update once they started releasing Model 3s without radar, as it didn't have this jerky stopping behavior before that.
I’m convinced they ditched radar because they were having big supply chain constraints with radar, and then Musk convinced himself that Machine Vision with AI is the path forward anyway and decided to rip the bandaid off and address both issues at the same time. (And I really do think he believes that, because he obsessed over AI safety in the past in a way that seems unforced, a little silly, and pretty clearly falsified at this point.) I do think radar ought to help, at least in principle. They probably can eventually get vision alone to be better than vision+radar used to be, but I STILL think it’d be better off also with radar (and perhaps a different, better kind of radar unit).
> I’m convinced they ditched radar because they were having big supply chain constraints with radar, and then Musk convinced himself that Machine Vision with AI is the path forward anyway and decided to rip the bandaid off and address both issues at the same time.
Yes, that and his attitudes towards LIDAR seem like motivated reasoning / sour grapes. "LIDAR is too expensive to put in cars I'm selling today, therefore it isn't good anyway." (I can't reach the grapes, therefore they're sour and I didn't want them anyway.) Why else would he criticize LIDAR with inane arguments about LIDAR not being able to read street signs, as though anybody is suggesting LIDAR-only rather than LIDAR+Optical?
I'm convinced they ditched radar because Musk wanted to make more money.
Sort of like how GM and Ford used to have actuaries work out the cost of a fuel tank that wouldn't incinerate the occupants in a crash vs legal settlement costs.
Except minus actuaries and instead just Musk figuring "Well I've gotten away with everything else, why not this as well?" and sadly being right.
From ignoring COVID health orders to flashing his dick to repeated stock manipulation to making billions in a company that stole people's money constantly "because we're not a bank", Musk hasn't faced any significant repercussions for anything immoral or illegal he's ever done.
I would imagine that multiple systems with a majority wins type of thinking would be ideal. Would probably need another system for it to be an odd number.
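In code terms that's essentially a 2-out-of-3 vote on the brake decision. A minimal sketch, purely illustrative (the function and sensor names are hypothetical, and this is obviously not any manufacturer's actual architecture):

    # Illustrative 2-out-of-3 voter for a "brake" decision, assuming three
    # independent sensing pipelines (e.g. camera, radar, lidar). Hypothetical.
    def majority_brake(camera_says_brake: bool,
                       radar_says_brake: bool,
                       lidar_says_brake: bool) -> bool:
        votes = sum([camera_says_brake, radar_says_brake, lidar_says_brake])
        return votes >= 2  # brake only when at least two systems agree

    # Camera sees a "wall" (shadow/overpass) but radar and lidar do not:
    print(majority_brake(True, False, False))  # False -> no phantom brake

The catch is that a voter is only as good as the independence of its inputs; if two of the three sensors fail the same way (say, both confused by the same overpass), majority voting won't save you.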
Happens often on my Y, does not happen on my legacy S and X (circa 2018 and 2019) that are equipped with Continental front-facing radar. I believe even current S and X vehicles are still getting radar installed, so Tesla is only being cheap on the 3 and Y to keep moving them at volume as a supply chain mitigation.
vehicles from 2019 being referred to as "legacy" is the most infuriating thing I've read today.
Of course no offense to OP, I've heard it referred to as such by others and I know it's not literal. But it's maddening to me that a vehicle still under warranty gets a moniker that traditionally means end of life/support in software.
(Bit late on the reply, sorry.) Caveat: I think EVs are far superior to ICE vehicles in every measurable metric.
That said, I don't understand what you mean. 9.9 out of 10 new, non-motorsports ICE vehicles are backed by a bumper-to-bumper warranty for 60-100k miles.
Further, the parts/labor pipeline is massive. OEMs and the aftermarket keep producing parts for years after the vehicle's generation stops being sold as new.
In the case of Tesla and some boutique EVs, there is no aftermarket, and there's litigation from Tesla if you try to repair or produce aftermarket parts. "At least Tesla mostly keeps updating their older cars' software" is not a positive. It should be guaranteed for 60k-100k miles, and the aftermarket should be allowed to repair it.
All Teslas are "vision-based" now. They no longer use the RADAR even in cars that have it.
This is probably so they can have only one software build rather than supporting one with and one without RADAR. And also so that they can say the RADAR isn't needed anyway.
>(submitted lengthy dash cam videos and accelerometer data to the NHTSA as a complaint)
Did you upload video to them, or post a link from a streaming site? It would be interesting to see the analytics that a streaming site can provide to know if anyone from the gov't ever actually looks at the video vs the blind upload to a black hole.
I had much worse results with radar several years ago than I do now with the FSD that doesn’t use radar anymore, so I don’t think radar is the necessary solution here.
Tesla isn't going to take the vehicle back, nor would I sell the car to someone else, so I am left to complain loudly to regulators, which I admit is a guilty pleasure (especially when I have lucked into having standing).
I also opted out of arbitration for all of my vehicles [1], for obvious reasons. I am simply attempting to make lemonade from lemons.
Hah, one time my computer had periodic hiccups where the mouse cursor update rate would slow down and back to normal within a second, it almost turned me into the Hulk... I can't imagine driving a car with that problem.
You cannot turn this behavior off persistently -- you need to open car settings and disable it every time you leave Park. My guess is that some jurisdictions mandate this behavior.
When this "feature" triggers you have a short window in which to do _something_ to tell the car that you are paying attention. Your options are either: accellerate or brake. If you do nothing the car may apply full brakes and bring you to a full stop, regardless what is behind you.
> Reports of “phantom braking” first surfaced last fall
This has been happening for much longer than that -- at least as early as Winter 2019, when I unintentionally brake checked a Prius after merging while going over 101 @ Rengstorf.
What behavior are you referring to? We are talking about autopilot, and possibly FSD, neither of which are engaged by default (obviously!) What do you mean they can't be turned off?
> When this "feature" triggers you have a short window in which to do _something_ to tell the car that you are paying attention. Your options are either: accelerate or brake. If you do nothing the car may apply full brakes and bring you to a full stop, regardless what is behind you.
Again, what is this? I have a Tesla, and I'm not aware of any feature that requires me to accelerate or brake to prevent the car from stopping itself.
Automatic Emergency Braking. You cannot turn off AEB persistently.
> I'm not aware of any feature that requires me to accelerate or brake to prevent the car from stopping itself
Before the car emergency brakes, it will warn you via "Forward Collision Warning"; the screen will show the object it thinks you're about to hit in red. You can set this to "Late" to reduce false positives -- but if you do, you'll have less time to take action yourself. So now I set it to Early.
After a FCW, you have a small amount of time to do something before the car will do something for you. The M3 manual describes AEB here:
It also explains "Automatic Emergency Braking is always enabled when you start Model 3. To disable it for your current drive, touch Controls > Autopilot > Automatic Emergency Braking."
This article is about autopilot phantom braking, not AEB. I've never had AEB trigger, or heard of it triggering when it shouldn't, but maybe I'm unaware.
Phantom braking includes self-driving/cruise and ADAS, e.g. "What is phantom braking"
> Phantom braking is a term used to describe when an advanced driver assistance system (ADAS) – or a self-driving system – applies the brakes for no good reason
i had it trigger twice: once to prevent a frontal crash that would have been my fault, and once during a storm. i am very glad that it is there; works extremely well.
For what it's worth, I've tried and failed to reproduce this on my Model 3. I've driven next to trucks, motorcycles, under overpasses, under very-low overpasses, near pedestrians, and even passed camels and horses on the side of the road. No phantom braking.
That said, when using the lane-assist, I do prefer to keep my foot on the accelerator pedal anyway. To increase range, I try not to get into regen at all unless I actually do want to brake. I feel that the cruise control is too strict in keeping a particular speed in hilly areas, and I live and work in hilly areas. Let the vehicle speed up in the dips and slow down on the peaks, just like all the other cars (probably manually driven) are doing.
That's why I mentioned that I've tried to reproduce the issue. I would deliberately let the computer handle speed even though I personally prefer to, so that I could experience the phantom braking firsthand.
I was driving to Las Vegas with my father (in my Model Y) and I was telling him, "I've never experienced this so-called phantom braking..." Literally 3 seconds later I experienced it, and a few more times in the next hour or so.
Agreed, this always bothered me about cruise control as well. Particularly for electric cars, I wish there were a dedicated physical toggle or slider to instantly disable or dial back regen for the times I want to coast.
Prius will regen slowly when you take your foot off the gas. Just like a Tesla. The difference is that Teslas regen aggressively, so you really notice the deceleration, whereas a Prius will not start major regen braking until you press on the brakes.
If you have the energy flow view up on the screen, you can see the power going from the motor/generator to the battery when you release the gas.
>I feel that the cruise control is too strict in keeping a particular speed in hilly areas, and I live and work in hilly areas.
This is always an issue with underpowered ICE vehicles and cruise control, less so with 7- or 8-speed transmissions, but still. They don't have enough power to maintain velocity, so they're constantly shifting, speeding up, slowing down...
Cool. Adding "are they trying to reproduce an Autopilot bug they read about online?" to the list of thoughts I have when encountering a Tesla on the road.
I don't drive a Tesla, but my car is a newer model with "smart" cruise control. It freaks out whenever there's a slight curve in the road and there's a car in the next lane over. Do Teslas have the same issue?
Besides the plows and grit, this CBC article claims the paint they choose has something to do with it:
> Most of the paint is environmentally friendly but being water-based, it rarely lasts more than a year, especially on the province's busiest roads and highways.
> "The paint used for highway striping eventually wears down as a result of the wear and tear from vehicle traffic as well as the effects of necessary winter maintenance activities," said a spokesperson with the Department of Transportation and Infrastructure.
> The provincial government only uses oil-based paint in cold weather. The bulk of road work is completed during the summer months, so it seems unlikely many fresh painted lanes this year will survive the winter.
I'm not sure about the paint composition, but I can at least attest to the road lines in Canada being very poor. I've driven most of the Trans-Canada Highway (all of it west of Quebec) and the line paint was virtually nonexistent for much of even this road.
> Meanwhile, the province does have an alternative it has been using on roads where visibility, geography and weather make for dangerous conditions, such as parts of the Coquihalla Highway. Crews grind out a line-sized trench in the pavement and pour a clumpy epoxy-plastic paint nicknamed “golden oatmeal” that bonds to the asphalt using the same material as dentures. Because it sits just below the rest of the pavement, it better withstands wear. “The problem is to do that is extremely cost prohibitive,” said Stone. “It’s very pricey.”
This only happened to me in a Model S whenever it was cresting any kind of hill that it couldn't see over (anything from pretty minor rollers to big hills). It would usually start swerving to the right.
Happens to me all the time as well, I've got FSD (not the beta, the highway version, because apparently I take turns too quickly) and I have to be alert to bury the accelerator in case this happens on the highway. It's rare enough that I'm never quite prepared, but frequent enough that I know to be on my toes.
I've never seen it happen when I was following someone, I bet the software is biased to follow the vehicle in front even if one of these events happens. It happens to me when I'm headed up a hill and there is an overpass on the far side of the hill, or I'm passing a low overhead sign. I can understand why radar thinks it's about to slam into a wall, from the radar's low slung perspective I'm sure it just sees a barrier ahead. Considering Tesla's early penchant for driving right into the side of a truck, I do somewhat prefer the sudden braking.
I think the takeaway should be that autopilot cannot be generalized. You can't say it's better or worse under all situations than an average driver. Sure there are situations that it's clearly better or worse than an average driver like driving on a pretty straight freeway for hours long or driving on a twisty road without clear lane markings.
The problem arises with ambiguous situations like phantom braking. Is it phantom braking because the tree overhangs are too close to the surface of the road, the sun is shining directly into one of the cameras, or there's a smudge on the camera lens? For this category of problems we're likely to see both improvements and further regressions. The ambiguity will bifurcate into clearly-worse and clearly-better-than-the-average-human-driver categories. This is the part where we need patience for the technology to improve.
The clearly-worse-than-human-driver category may be an intractable set of problems, and that's what we should judge the viability of self-driving technology based on.
Which, in this age of cheap ubiquitous dash-cams, isn't nearly as likely to be the "inconvenient waste of time but zero financial risk" deal it used to be
Edit: Since apparently this needs explaining, insurers will consider evidence and happily dole out blame to a party that does something dumb, in proportion to their fault (with slight variations per differences in law from state to state). The fact that said dumbness results in the back of your car getting hit by the front of another only matters in the base case, in which no other reliable information about the situation is available. In the case of a car that just stops in the roadway with no reason to do so, some of the blame is likely to land on the driver thereof. Insurers are well aware that the rules of the road are built around redundant requirements so that crashes don't happen when one party can't or won't hold up their end. Infantile "but the other guy is responsible for.." arguments don't tend to hold much water with them when there's evidence (video or otherwise) that, in the events leading up to the accident, your hand was no more forced than the other guy's, because both drivers are still bound by whatever the default responsibility to operate their vehicles in a safe manner is (wording varies by state).
I don't understand what you mean. You seem to be implying that if a driver rear-ends you he can use his dash-cam to show you were at fault? Most places I've driven it is the following car's responsibility to maintain a safe distance which includes being able to stop if the car you are following suddenly stops. Yes, brake checking is a thing and could complicate any civil suit, but mechanical (now software) failure wouldn't be that.
Yeah, they should be following further back. There's no excuse for rear-ending somebody. If your car can't stop as fast as a heavy-ass Tesla, you need to be following further back. If you hit someone brake checking you, you were following too close.
You never know when someone might need to do a full, 100% brake slam to avoid something.
> You never know when someone might need to do a full, 100% brake slam to avoid something.
I don't know about you, but I usually look beyond the car in front and possibly the one in front of that one too in order to anticipate braking. If it's all clear ahead and the one immediately in front suddenly brakes hard, I don't think it's clear that it isn't culpable.
> the rules of the road are built around redundant requirements so that crashes don't happen when one party can't or won't hold up their end.
Exactly; this is why brake-checking is illegal. The person behind you has a responsibility to keep a safe distance, but you also have a responsibility to not drive like a lunatic. If the dash-cam footage shows you brake-checking somebody right before the crash, don't expect to get off scot-free by blaming the guy you were fucking with.
> Which, in this age of cheap ubiquitous dash-cams, isn't nearly as likely to be the "inconvenient waste of time but zero financial risk" deal it used to be
Chance of permanent injury or death is not negligible. e.g. get rear ended by a semi, or just suffer from whiplash that doesn't go away.
Where I live (not the US) the only way a rear-end collision would not be the tailing vehicle's fault is if the front vehicle was running in reverse. Otherwise, it's your responsibility to maintain a safe distance from the vehicle ahead, and one should assume this vehicle may stop unexpectedly. You're always at fault if you rear-end someone.
IIRC from law school (in the US), the driver of the back car would be rebuttably presumed to be at fault. This presumption could be rebutted, for example, if it were shown that the front car braked hard for no reason whatsoever.
In which jurisdiction would the rear-ending driver not be at fault? He was either inattentive or failed to maintain proper distance. Sure, driving too close is common, but that doesn't make it right.
The car behind who rear ends the car in front has a presumption of being at fault, but if there is evidence to show the car in front caused the accident (say, by brake checking them) then that can change the outcome.
Before dash cams, it was very rare for the car behind to be able to show any proof otherwise, so they were usually at fault unless there were credible witnesses of the brake checking.
But no, you can't just randomly threshold brake in the middle of the highway for no reason and expect to blame the car behind for your actions (or the actions of your car).
2022 Model Y with FSD. I use Autopilot 50+ miles a day, and use it 98% of the way on trips from San Diego to SF. It infrequently phantom brakes, and more often brakes hard and late and sometimes people think I'm brake-checking them. It also is annoyingly slow to recover back to cruise speed from a slow down. Also if you're in the right lane when the two right lanes merge- it is totally unreliable, I've been sandwiched between an 18-wheeler and the gutter. But that said, once you learn the quirks it is extremely predictable and robust. I have used it on snowy mountain roads in Tahoe, a sandstorm at Salton Sea, and thousands of highway miles. If you have the minimum distance set to 2 or 3 car lengths, you should not be complaining about late and hard braking- that's the setting you chose. The phantom braking is annoying but infrequent, and you can quickly override it with an accelerator tap. My suggestion to Tesla would just be to have a debug button where you can report and elevate the past 10 seconds of driving.
If it often brakes hard and late to the point that people think you're brake checking them, then you shouldn't be using it. They don't think you're brake checking them, you ARE brake checking them.
Stop being dangerous on the road. This isn't a game.
>They don't think you're brake checking them, you ARE brake checking them.
This is the important part. Doesn't matter what the intent was, the fact of the matter is that the people behind you nearly rear-ended you because of your car's actions. And you, as the driver, never know when you might piss off the wrong driver - brake check the wrong person and suddenly you're in trouble with someone who might not care if you say, "But my car did it, not me! Honest!".
Even worse, the person behind you might fail the brake check. I can't believe people are knowingly operating faulty autopilots on public roads. If my car randomly braked I would take it to the mechanic - not continue to drive it.
Autopilot should be fixed, but the problem in your scenario is people trailing others too closely. If you fail phantom braking, you're going to fail real braking whether it's activated by a computer or not.
Safe trailing distance is determined by safe driving expectations. If everybody in the world started doing hard and sudden brakes, then the safe trailing distance would be extended.
While ideally everybody would always leave enough space for hard and sudden brakes (eg in case a deer runs across a road), the reality is that driving has a lot of loose rules, and you're expected to behave rationally and similarly to the other drivers on the road. For example it's usually fine to speed a little in the left lane if most people deem it safe to do so. And there's no strict definition for "reckless driving" despite it being illegal. Driving is an intricate social dance with millions of actors, and if you are the one acting way out of line, you are liable, and hard + sudden brakes for no reason, is definitely out of line.
(That's not to say this is the best system though. An autonomous system with well defined behaviors, less ambiguity, and less social guesswork, would probably be safer. But such a system would require full participation, not just one or two tesla drivers choosing to let Elon take the wheel)
> Safe trailing distance is determined by safe driving expectations.
Respectfully, it's not. A safe trailing distance is defined as enough room for you to stop even if the car in front of you stops abruptly (for example, if a kid or animal runs into the street), factoring in your reaction time and the stopping distance of your vehicle. The buffer isn't for normal variations in the speed of traffic, it's for anomalies.
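To put rough numbers on that definition (the 1.5 s reaction time and 7 m/s^2 deceleration below are illustrative assumptions, not official figures, and this is the conservative worst case where the lead car stops almost instantly):

    # Worst-case safe gap: reaction distance plus your full braking distance.
    def safe_gap_m(speed_mps: float, reaction_s: float = 1.5,
                   decel_mps2: float = 7.0) -> float:
        reaction_distance = speed_mps * reaction_s
        braking_distance = speed_mps ** 2 / (2 * decel_mps2)
        return reaction_distance + braking_distance

    mph_70_in_mps = 70 * 0.44704  # ~31.3 m/s
    print(f"{safe_gap_m(mph_70_in_mps):.0f} m")  # ~117 m at 70 mph

That's on the order of a hundred meters at highway speed, which is why "2 or 3 car lengths" isn't a safe following distance in any textbook sense.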
> the reality is that driving has a lot of loose rules, and you're expected to behave rationally and similarly to the other drivers on the road
This isn't a valid excuse for unsafe following.
> Driving is an intricate social dance with millions of actors, and if you are the one acting way out of line, you are liable
I'm not sure what sense you mean "liable" here, but it's certainly not true in the legal sense--you're liable if you violate the law, for example, by following too closely (the law doesn't seem to care whether others follow too closely or not).
> An autonomous system with well defined behaviors, less ambiguity, and less social guesswork, would probably be safer.
It doesn't require full participation, it merely requires fewer fatalities caused per 1M miles than human drivers. That's the point at which we should legalize AI, and if it becomes significantly safer, that's the point at which we should mandate AI in the general case. And then something really interesting happens--we can start to do away with those ambiguities that make AI hard because human drivers become the exception rather than the rule. Rather than informal social systems governing driving, driving becomes a formal digital protocol. Maybe rather than relying on computer vision (or computer vision alone) cars are loaded with a national database containing the right-of-way rules, speed limits, scheduled maintenance, etc and/or they use other sensors. These things in turn make AI safer (discrete signals = fewer failures due to computer vision errors). Safe and ubiquitous AI could lead to much narrower roads, freeing up more space for pedestrian traffic.
Moreover, since cars are safer, they don't need to be so heavily armored--rather, they become smaller and lighter, which further increases safety by increasing survivability in pedestrian- or cyclist-involved accidents.
Moreover, those strict protocols could make transportation more efficient (a given road increases its throughput and reduces its latency without adding lanes or compromising safety) because (1) cars are smaller and lighter as previously discussed, (2) AI can travel faster with less buffer due to its faster reaction times, and (3) AI can coordinate so you don't have inefficient traffic patterns like "4 semis driving in quadruple file" or "one reckless ass hat changing lanes frequently, abruptly, and aggressively, causing everyone to slow down".
Without human drivers, maybe taxis become cost-competitive with public transit (or better) and fewer people need to own a car (and the cars they own needn't be so large since one can easily and cheaply rent a larger car for transporting large items). And for those who do own their own car, those cars can drop them off at their destination and park farther away (no need for so much parking downtown).
This is all just a sketch of some of the possibilities, but the point is that there's a horizon (a critical mass of AI cars) that could manifest in a cascade of other significant changes.
Accidents tend to happen when multiple things "go wrong". If a car accident happened every time some single thing was imperfect - well, car accidents would be more common than safe journeys. It's the confluence of multiple imperfections that result in car accidents (usually) - e.g. Driver 1 steps suddenly on the brakes AND Driver 2 is fiddling with the radio AND it's raining - or whatever, and you get a car accident.
So, yeah, maybe Driver 2 should be able to pass the brake check, but that doesn't mean it is good or safe or desirable for Driver 1 to be randomly brake checking people. Having a car that randomly brakes is like having a driver who randomly applies the brakes - you shouldn't do that, it's bad, and unsafe.
Brake checking is (usually) illegal in the US, albeit hard to prove without a dash cam or witnesses.
So no, it's not the case that people trailing too closely is the problem. It's the case of the Tesla requiring, without good reason, the car behind them to brake hard to avoid the collision. This is dangerous and (usually) illegal.
Teslas, or people in general, should absolutely not brake-check, but no, cars that drive so close to the car in front that they cannot safely stop in an emergency are tailgating.
I do agree there is some nuance, but it really saddens me that so many comments in this thread are supportive of a form of driving that, while very common, is unsafe for themselves and others. It's a form of normalisation of deviance and I'm sure it costs far more lives than Tesla's dodgy "autopilot" (even if that too should not be road legal).
In metropolitan areas (where I imagine most Teslas are driven), it often just isn’t possible to maintain a textbook-safe following distance. The instant you allow more than the average distance ahead of you, another car will jump in to fill the gap.
People sometimes fill the gap in front of me, and if they're particularly unsafe about my induced following distance I flash my lights, flip them off, and back off. In a typical 30 mile drive in one of those metros it costs on the order of tens of seconds. People usually aren't that aggressive about filling gaps until they have something to gain, so the frequency of people causing you to slow down winds up being pretty low.
I would do pretty bad things to any person who would severely harm or kill my kids for example, and no amount of blaming car electronics or recent update would change that...
In this circumstance (crashing into the back of a Tesla braking hard) you would have been the person primarily to blame. Yes, Tesla sells a dodgy "autopilot" that does unexpected things (and in my opinion should not be road legal until fixed), but you were tailgating to the point where you couldn’t stop in time. The car behind has to be responsible for maintaining safe separation from the car in front (under reasonable circumstances).
edit: The "you" in the above being the hypothetical driver.
You and the Tesla aren't the only ones on the road. You can't control people behind you being just close enough to not be able to stop. Or having another person behind the Tesla swerve into you when trying to avoid hitting the Tesla.
There are plenty of scenarios where you are still impacted by the brake checking Tesla without you being at fault.
I kind of agree, however at least from my experience in Germany keeping at least the minimum legal distance is an exception rather than the norm. It kinda makes me mad and I wish it was mandatory for all cars to have sensors which would either not allow you to get closer given a certain speed to be in line with the regulations or at least make a really annoying sound so that you simply wouldn't want to. I would bet this would save so many lives.
There's a difference between doing bad things to prevent someone from hurting your kids and doing bad things to someone who hurt your kids in an accident. Acting in defense is justified, but acting in revenge should rightly land you in legal trouble (including prison depending on the severity of the "bad things").
My first thought was, what if it's a sheriff, police officer, or state patrol? Now you've got a couple nights in the county lockup, a reckless driving charge (this can be a misdemeanor or felony depending on the severity), and $thousands in towing and impound fees.
I don't think abrupt braking constitutes reckless driving (maybe it varies by location), and it seems like it would be pretty easy to fight by claiming you saw a deer/etc in the road. If the cop says he didn't see it, you respond with some variation of "unsurprising because you were following me too closely to see anything else". Of course, none of this absolves Tesla from fixing their software.
I can't imagine a cop anywhere in the world responding well to the phrase "unsurprising because you were following me too closely," especially if s/he already thinks you were brake checking them!
It doesn't make sense to argue that the responsibility for a violent or aggressive interaction stops with the Tesla/etc driver rather than the maniac who responded aggressively. The Tesla/etc driver is responsible for his own driving, as is the person reacting to that driving. Further still, since we're being pedantic about litigating responsibility, the person trailing the Tesla/etc should be leaving plenty of room such that he can comfortably react to abrupt braking (the leading driver might have a good reason for braking quickly!).
>It doesn't make sense to argue that the responsibility for a violent or aggressive interaction stops with the Tesla/etc driver rather than the maniac who responded aggressively.
I never said that the responsibility stops with the Tesla driver. I simply said that it's wise to not put yourself in a situation where you may accidentally piss the wrong person off - this is a general sentiment that applies everywhere in life, really. You don't know if the person you're dealing with is the nicest, most understanding person in the world, or an absolute maniac who's been dealing with some heavy stuff and is one step away from losing their absolute shit. Yes, you're not responsible for someone else's emotions, but you are responsible for how you interact with them.
>Further still, since we're being pedantic about litigating responsibility, the person trailing the Tesla/etc should be leaving plenty of room such that he can comfortably react to abrupt braking (the leading driver might have a good reason for braking quickly!).
Of course. Surely you'd agree that nuance plays a large role here, though? There are often times where you'll find yourself closer to the back of a car than you'd like, despite your best efforts to maintain plenty of room (eg, someone cuts you off or moves into your lane with less distance than you'd like).
It's not a brake check if the person in front of you is slowing down and your braking is to maintain safe distance. A brake check is braking for the purpose of affecting those behind you. It seems clear he means it is braking due to the slow down ahead.
> A brake check is braking for the purpose of affecting those behind you.
Intent doesn't matter to the person in the other car, who likely is relying on your behavior to some extent to understand traffic ahead. If your driving seems like a brake check and brings in all of the unsafe conditions associated with brake checking, then it's a brake check.
Of course, the unsafe conditions aren't associated with braking, but with the failure of the trailing car to leave adequate room for their own braking. The accident here would happen whether the leading car had a justifiable reason to brake or not, and the trailing driver would be responsible in either case.
Unfortunately that just shows the person following was not doing so at a safe distance - any car you're driving behind could perform emergency braking at any moment.
^ It's this. I haven't been honked at yet, but I have felt a little apologetic to people behind me. I recently updated my follow distance from 3 to 4 and it's helped tremendously.
Sure, but if you're doing it far later than a human would have done it, then you're really screwing with the person behind you who isn't an AI with instant response times.
This is why the law requires people to follow at a safe distance. If you're only leaving room for "instant response times" then you had better have instant response times because you're liable.
The question isn't who's legally liable. The question is whether you're doing things more unsafely than you need to be. You don't have to be legally at fault to be doing something that's unsafe for other drivers to deal with. Sudden braking at highway speeds, especially for no actual reason, even if the person behind you is following at a safe distance, is rolling the dice that every car behind you for a while is driving at full attention.
"I require every other human driver to be driving correctly so that my AI-car may drive unsafely" seems like a bad bet. I'd also imagine that, if as a human, I just slammed on my breaks for no reason on the highway, I would be found to be at fault for an accident assuming the car behind me wasn't directly tailgating me.
You seem to view safety as some binary that doesn't account for frequency or severity of incidents. Your framing suggests you think it's better to continue with human drivers and the commensurate 40k lives lost each year than to use an AI that has even the slightest possibility of causing even the least significant accident irrespective of whether or not any accident occurs in practice.
I suggest it's better to compare a given AI with humans in terms of fatalities caused per million miles driven. If an AI performs a little better than humans it should be legalized and if it performs dramatically better than humans, it should be mandatory.
Of course, this is where we need more data and greater transparency so we can answer these questions.
I'm not trying to say that at all. What I'm saying is that we're at an awkward time now, where this sort of quick AI-assisted braking is especially dangerous because fallible humans are most of the rest of the drivers on the road. In an all-FSD world, this wouldn't really be a problem.
You're putting a lot of words in my mouth and assuming I'm against working on AI driving because one person might ever die. All I'm trying to point out is that it's pretty worrying to have a system that could cause a highway-speed accident because of a well-known and decently common bug. I'd be equally worried if it came up that some other decently selling model of car would randomly have the ABS system engage.
I wrote my OP here because the parent poster was casually talking about "not really brake checking people" as if that's just normal behavior that's a part of R&D, instead of an AI accidentally emulating the dangerous aggressive driving patterns that FSD is supposed to do away with. I'm not trying to ban FSD research or anything. I want this improved! It's just scary when people excuse dangerous behaviors by FSD systems because it's otherwise safer.
The other issue is that more data and greater transparency are both not things Tesla seems to have any interest in providing anyone, so while this may get fixed, it's not really pushing the industry forward all that much if no one other than Tesla is going to benefit. There's plenty of mentions in this thread of this sort of issue happening on other cars and adaptive cruise control systems that could benefit from an improvement for the betterment of all drivers, but instead "not brake-checking people" is going to be an unexplained feature improvement in some FSD patch, probably.
if a car is tailgating me (less than one car length of space behind me above 30 mph), and my Tesla sees a ghost and brakes hard, and the car behind me rams into me, then that is completely on them. it's on me if they were further away than that.
Sure, but fault aside, you still just got into a highway-speed car accident. I don't get to decide if the person behind me is a reckless driver or not and "but the law says it's not my fault" doesn't do away with any injuries or damage to my car that happens because of it. There's plenty of completely legal things you can do that will create unsafe situations on the road, and you get a mark on your insurance for getting in the accident whether or not it's your legal fault that it happened (as I found out when I lost the front half of my car to flying road debris.)
> But it's cool technology. A death or two is worth it /s
You're clearly trolling here, but maybe we can reframe this into something worth discussing: does autopilot result in more or fewer deaths per 1M miles driven than the median human driver? That seems like a reasonable threshold/criteria for legalizing the technology.
This sounds dangerous enough that I'd reconsider even driving the vehicle. I have a 2019 model 3 and I've experienced phantom braking twice. Both times I was on the highway with autopilot enabled and was passing a truck that cast a shadow over my lane. After the first experience I refused to even use autopilot for months afterwards.
Just recently I was driving in the right lane of a suburban road (with no assisted driving features enabled) and some trees on the curb cast a shadow over my lane and the car absolutely freaked out with collision warnings. Thankfully I was accelerating so it didn't force me to stop, but at this point I'm seriously considering going with another vehicle. Having the car scream at you as you're driving down the road is nerve wracking.
Most (all?) other cars have radar for this. They do not seem to experience as many problems as Teslas, which is especially telling considering how many more of them are on the road.
I mean, plenty of people drive dangerously anyway. I'd rather most people brake abruptly than drive 20mph above or below the flow of traffic, weave dangerously, pass on the shoulder, run red lights, fall asleep, drive drunk, etc. If they get in a car that's merely "too cautious", that's a pretty big net win for society.
I have a 2019 and I've used autopilot extensively and I've never had it brake when it shouldn't. I've had it beep at me erroneously maybe twice, so I think this is something in the settings that is causing the issue.
I have a 2022 and it phantom brakes all the time. We haven’t touched any settings and if it were a settings issue, I’m sure someone in the Tesla community would have figured it out (there are a lot of Teslas out there and yours seems to be the only one that isn’t affected). Moreover, a phantom braking setting would be pretty insane—I have a hard time believing they prioritized a phantom braking feature over, say, more media app integrations.
Lol I can sell my Tesla for $20K more than I paid for it when it was new because Tesla can’t make new cars fast enough. The sky is absolutely not falling any time soon.
It's honestly like a Black Mirror episode. "I have to maintain a mental model of how my black box ML car AI will act so I don't die or kill other road users... and it can be updated daily."
Autopilot in aircraft has a similar problem, which led to the Air France Flight 447 crash in 2009. The airspeed indicators froze over, the autopilot disconnected and the flight controls degraded from "normal law" to "alternate law", and the pilots failed to understand that the automatic stall protection no longer applied. They did not have enough training (or were too confused) to fly without the help of automated aids and flew the plane right into the ocean.
Boeing's planes were known for having fewer of these automated aids and thus requiring more skill to fly, but the 737 MAX debacle threw cold water on that idea.
Of course, (unlike some commenters in this thread) the air travel industry thankfully had the good sense to look at autopilot on balance rather than fixating on a single fatal accident (or in the case of Tesla's Autopilot, the mere possibility of a fatal accident). The salient question isn't whether or not an autopilot allows for any fatalities, but rather whether or not it reduces fatalities relative to manual piloting/driving.
Yep, autopilot is a good helper but a bad master. In the air there's a lot more space and fewer objects to avoid than on the roads, yet nobody has managed to build a perfect autopilot that could really replace pilots in every situation. I remain skeptical that we'll ever see one in any domain, be it air, sea or land.
> "I have to have maintain a mental model of how my black box ML car AI will act so I don't die or kill other road users... and it can be updated daily."
Not defending Tesla here, but this is exactly how I think of other drivers on the road, whether I'm driving, cycling or walking.
Yes, we are literally trained for that - to keep distance from cars ahead of us, expect the unexpected and so on. We are also told not to drive cars without proper maintenance (and this is enforced by law in most places) so our cars are reliable.
I wonder if future driver training will factor in spotting problems with self-driving or related functions.
But Autopilot is not really self driving though - it's basically just lane keeping + adaptive cruise control. It will gladly blow through a stop sign or a red light, it can't change lanes, or make turns, or do any of the other completely normal driving things that one would expect from a self driving car.
Navigate-on-Autopilot, and some other features which are part of the FSD package that is already publicly available, absolutely will stop at stop signs and lights (although the process is often annoying and buggy), change lanes (sometimes into a trailer if it doesn’t see it quite right), and make turns (takes exits/ramps), although these are things that can be individually toggled on and off. They’re there, but the implementation is kinda buggy shitware.
I'm pretty sure Teslas with FSD get in fewer accidents compared to the average driver. Mine tends to be frustratingly overly cautious in most cases.
If you’re not tailgating someone, you’ll be fine, and anyway “the rest of us drivers” includes plenty of people who drive much more dangerously than an overly cautious AI. Autopilot has many millions of miles of driving time and as far as I know there hasn’t been even a single fatal accident due to phantom braking.
> It infrequently phantom brakes, and more often brakes hard and late and sometimes people think I'm brake-checking them.
Honestly, it sounds to me like this technology should be made illegal until it's fixed. "Infrequently" isn't good enough, especially if it brakes as aggressively as you say. I want self driving to succeed, but Tesla's self-driving tech is just too immature to use on public roads.
The right question to ask is how many rear-end collisions does it produce and how many accidents it prevents.
If it infrequently phantom brakes (especially when there is no vehicle behind - an AV can sense that) in a way that doesn't cause any collisions, this might be optimal behavior, given the current level of technology.
Agreed. People seem to think that anything that allows for any accident at all (irrespective of frequency or severity) is worse than the status quo of 40k fatalities per year, which is pretty wild considering how otherwise rational this forum tends to be.
Why do people think the goal is “perfection” rather than “better than the average driver”? If Autopilot is 10x safer than human drivers, we would have to be stupid not to mandate it, much less tolerate it, even if its crash rate is non-zero. Human drivers kill 40k people a year in the US alone—that’s almost twice as many people as all homicides combined.
While the erratic behavior of Tesla’s autopilot might cause other drivers to be more cautious around them, I don’t think you should count this in Tesla’s favor. Imagine if every self-driving car company started adding code to scare other drivers; that would definitely have unwanted side effects.
Teslas have one crash for every 4.31 million miles driven with autopilot engaged versus one for every 480k miles for non-Tesla drivers, so autopilot is averting almost 90% of all crashes. There’s no evidence at all of any “it scares other drivers into safety” mechanism, which is to be expected because such a mechanism seems absurd on its face.
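If you want to sanity-check the arithmetic behind those figures (taking the 4.31M-mile and 480k-mile numbers at face value, and setting aside the caveat raised elsewhere in this thread that Autopilot miles skew heavily toward highways):

    # Rough sanity check of the reported crash rates (figures from the comment
    # above, not independently verified here).
    autopilot_miles_per_crash = 4_310_000
    us_average_miles_per_crash = 480_000

    autopilot_rate = 1_000_000 / autopilot_miles_per_crash   # crashes per 1M miles
    us_rate = 1_000_000 / us_average_miles_per_crash

    reduction = 1 - autopilot_rate / us_rate
    print(f"{reduction:.0%}")  # ~89%, i.e. "averting almost 90% of crashes"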
This article is specifically referring to phantom braking rather than the overall performance of the system.
I know I personally have started to stay further from Teslas when possible on the road because they behave strangely. But the same applies to other cars which do this odd weaving from side to side within their lanes, presumably because of poorly implemented lane keeping systems.
As to accidents rates, Autopilot shouldn’t be compared to overall driving accident rates when it’s not being used in all situations.
Probably. Tesla Autopilot gets in about 90% fewer accidents per hundred million miles than US drivers. It would be shocking if this didn’t manifest as lives saved.
It's just standard Tesla fanboy cope. The bottom line is that Tesla is shipping out alpha/beta software to the masses and relying on the masses to absorb the risks of crashing, causing accidents, dying, etc. in order for Tesla to iterate and possibly improve its software. I own a 2022 Model 3 Performance, so I'm not just talking out of my ass. Autopilot is unnerving to use. The car itself is just an appliance devoid of any emotion or character; the exact opposite of a performance focused driver's car. I plan on selling it very soon.
Juxtapose Tesla's Autopilot with BMW's Driving Assistant Professional (I also own a BMW X5 PHEV Hybrid). I drove from Chicago to Oklahoma and back with all the autonomous features engaged 95% of the drive and it was an incredibly relaxing experience. Lane change works and doesn't cost an extra $10K (you need FSD for a simple lane change otherwise you effectively need to disengage AP, change lanes, and reengage AP), zero phantom braking (the BMW has radar...), the eye tracking camera works great (no falling asleep at the wheel vs. Tesla's interior camera which does not even work and the steering wheel tracking can be defeated with a tennis ball), and best of all there's fully autonomous driving (no need to look at the road) if stuck in traffic on a highway and you're going less than 40MPH (i.e., bumper-to-bumper traffic where most accidents tend to happen).
The fact is that Tesla is not shipping game-changing software, and I would strongly argue that it's not even shipping out the best software in the business. It's a hyped up car with hyped up features peddled by a hype man. I would not be surprised if Tesla is not even a top 5 or 10 EV seller in 2032.
May I ask you why you bought your M3 Performance in the first place, then? The "appliance" feeling can be seen already in photos, you don't even need a test drive.
As for the software of our car makers being superior, I'd like to see some harder evidence. Reading a bit here and there, Tesla seems the less bad option wrt software quality in a car.
I find your attitude terrifying. I am glad you can handle your car now, but there are many thousand people driving those cars and they might not know how to deal with those "bugs".
I am curious, are you currently holding Tesla stock?
Honestly it’s wild that so many people on this forum are making arguments about theoretical accidents that could happen, when Autopilot has billions of miles driven and gets in 80% fewer accidents than the average US driver. The idea that we have to wait for Autopilot to be perfect means accepting hundreds of thousands of accidents (including thousands of preventable deaths) that could be avoided. I find that attitude terrifying.
I’m sure you wouldn’t make a claim like this without some data to back it up. Please share with us the crash rate for autopilot versus the average US driver, so we can see what your idea of “intoxicated driving” looks like!
2018 Model 3 with Enhanced Autopilot. I took a weekend trip from LA to Phoenix a few months back. I was in the left-most lane for a majority of the trip, and had ~10 phantom braking occurrences each way. It seemed to think the shadow of a car in the adjacent lane (where the shadow fell well into my lane) was something that needed to be avoided.
It also not infrequently highlights a car in an adjacent lane (or even one exiting the freeway) as the lead car and slams on the brakes.
I have follow distance set to 7 and it still slams the brakes way later than comfortable.
I like driving, so I don't mind not using Autopilot as frequently as I have the opportunity to. And it's the most fun and best car I've driven.
But honestly I hesitate to recommend it. Not because it isn't a great car, because it is, but because I'm growing more skeptical of Tesla as a whole in the past year:
It all adds up and I can't help but wonder if Tesla's initial market advantage is waning. I'm not sure there's another competitor that's caught up to them, but it isn't as clear cut as it was just a couple years ago.
I mean, I pay full attention while the car is on autopilot, keep both hands on the wheel, and my foot over the pedals. Sure, that particular trip was pretty annoying, but my daily commute (~80 miles roundtrip) rarely, if ever, has any phantom braking occurrences, so I don't know that I would call it a "way of life".
Lol. Tesla is really just beta testing in production. When software companies do that we say it’s bad. When Tesla does this it’s ground breaking innovation.
When my internet browser crashes no one dies.
It’s because people are bad at reasoning. You don’t ban Autopilot for being imperfect, you ban it if it’s less safe than the average driver. However, Tesla’s crash rate is on the order of 10x better than the US average—assuming that carries over to the fatality rate pretty linearly, it would mean we could save 36k of the 40k lives lost each year to traffic fatalities if Autopilot were ubiquitous and mandatory. The idea that we should just sacrifice thousands of people until Autopilot is flawless is abhorrent and foolish.
Damn, lots of strawman rage in the responses. Remember that I'm still gripping the controls and looking out the window. I don't allow the system into any kind of corner case unless I am the only one on the road. Letting the computer attempt to drive straight on the highway while a practiced and skeptical driver is ready to take over does not feel like a blatant compromise of public safety. Every incident I've encountered was very easy to handle, with lots of margin for reaction. Hopefully everyone practices this kind of regard for others while still allowing emerging technology to have a path for safe testing in the real world.
I have a 2021 Y with FSD and more or less echo the same sentiment. It's nice to use, but it does weird stuff. A recent update seems to have resolved many of the FSD issues. It no longer phantom brakes at flashing yellow intersections, or at a few stop signs for diagonal merging roads.
And btw, they certainly have the data whether or not you report it.
> But that said, once you learn the quirks it is extremely predictable and robust.
I'm sorry, but this sounds like Stockholm syndrome to me. You admit that the car you're driving behaves unpredictably if you don't know its quirks. But it still sounds like you're defending it.
This would be intolerable for most other cars, if not outright dangerous. I'm not trying to attack you, so I'm sorry to be this direct:
Why are you still keeping this anti-feature enabled? This sounds really dangerous.
It’s strictly safer than other cars. Autopilot’s accident rate is one per every 4.3 million miles versus 1 per 480 thousand miles for the non-Tesla US average. Autopilot is nearly 10x safer than the average US driver.
It’s really a bummer to see how many commenters are making strong statements informed by neither data nor experience. This whole thread is well below this forum’s normal level of discourse.
2022 Model Y with regular Autopilot. Phantom braking is rarely an issue on divided highways; however, it's definitely an issue on highways without generous medians. If a semi is coming the other direction (especially over a hill) it will panic and slam on the brakes more than half the time (this has been the case since we bought the car in November). I've also had problems with the shadows cast by bridges and adjacent buildings as well as changes in road surface (e.g., the boundary between a section of road that was recently resurfaced to an older section of road). Further still, the problems seem to be worse in dim light or at night (it also insists on running the brights even when they're not necessary for a human, and even if it means blinding oncoming traffic) where it will occasionally phantom brake even on an empty divided highway. Moreover, sometimes it decides to abruptly drop the cruise control set point speed from 75mph to 40mph (even though it knows the speed limit is 70mph) which results in abrupt deceleration (not sure if this is the same as phantom braking or not).
Additionally, if you interrupt a phantom brake by manually accelerating, it often won't resume speed until it has decelerated to whatever it thinks its "safe speed" was at the time of the perceived danger, even if the perceived danger is well-past. So if you were going 60mph when it got spooked by a shadow and manually compensated for the phantom braking, when you let off the accelerator the car will still slow down to ~30mph even if you're a quarter mile past the shadow.
This is all frustrating, but the most frustrating thing is that I don't even know for sure what the car's reasoning process is--why does it sometimes sporadically set the speed to 30mph below the speed limit on a busy highway? What threat is it perceiving when it phantom brakes?
> and more often brakes hard and late ... If you have the minimum distance set to 2 or 3 car lengths, you should not be complaining about late and hard braking- that's the setting you chose.
I disagree here. As a human driver, I can prefer to follow someone "closely" (pretty sure I could fit 3 or 4 cars into Tesla's "2 car lengths"), but that doesn't mean I wait until I'm that close to them before I begin decelerating. There's no fundamental reason Autopilot can't do the same.
> It also is annoying slow to recover back to cruise speed from a slow down.
Fully agree. I think this pisses people off behind me more than the brake checking.
> The phantom braking is annoying but infrequent
I suspect this depends on what share of your driving is on divided versus two-lane highways.
> My suggestion to Tesla would just be to have a debug button where you can report and elevate the past 10 seconds of driving.
Wait, do you think that Tesla is actually obtaining all driver video and manually adjusting the model based on reports like this? I've seen a lot of fanatics with no technology background claim this, but I'm... skeptical, to say the least.
My understanding is that the driving logs are only sent to Tesla when you take your car in for service. In addition, I imagine not a lot of people use the bug report feature, so most of the phantom braking in the logs and footage aren't even identified/used.
yeah, why not? Watch the Autonomy Day presentation from 2019; they're pretty clear about how much telemetry goes home in order to retrain the networks.
No. It doesn't use distance but time. At 70 mph on cruise control in my 2015 Model S (AP1.5) the distance to the car in front is considerably more than three car lengths.
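For reference, the "follow distance" settings on these systems are typically time headways, not fixed car lengths. A quick back-of-the-envelope sketch (illustrative numbers only, not any manufacturer's actual parameters) shows why the gap at highway speed looks so large:

    # Convert a time-based following gap to a distance at a given speed.
    # Illustrative only; the headway values are assumptions, not Tesla's settings.

    def following_distance_m(speed_mph: float, headway_s: float) -> float:
        """Distance in meters covered during `headway_s` seconds at `speed_mph`."""
        speed_m_per_s = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
        return speed_m_per_s * headway_s

    # At 70 mph, even a 1-second headway is ~31 m (roughly 6-7 car lengths),
    # and a 2-second headway is ~63 m.
    print(round(following_distance_m(70, 1.0), 1))  # 31.3
    print(round(following_distance_m(70, 2.0), 1))  # 62.6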
Brake checking is an exclusively American thing. I can’t recall anybody in Europe brake checking anybody in the 20 years I drove there, across all countries (Italy, France, Switzerland, Austria, etc), people just get out of the left lane naturally, and who cares if somebody goes faster?
I will never understand this hostile tension in US roads.
This happens to me just using the adaptive cruise control, no Autopilot or FSD enabled and it's super annoying. Can happen on a completely empty road driving in a straight line.
That's... appalling. My 2019 Leaf has absolutely zero problems maintaining distance with adaptive cruise control, using the ancient technology formerly known as radar. It can somehow even see stationary vehicles, something I thought wasn't feasible.
> It can somehow even see stationary vehicles, something I thought wasn't feasible.
Radar seeing stationary objects is trivial; it sees them just like it sees any other object. The trick is seeing only the stationary objects which are actually relevant; e.g. not seeing a street sign suspended above the road, or a tree next to the road, but only the stationary objects in the path of the car. The radar configuration that Tesla use[s/d] apparently has awful angular resolution, making this differentiation difficult if not impossible. But this isn't inherent to all radar technology; some radars have good angular resolution. For instance, with SAR or ISAR you can produce high-resolution 2D images. But I don't know what kind of radar system your Leaf has.
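To put the angular-resolution point in rough numbers (the 5-degree figure below is purely an assumption for illustration, not a published spec for any particular radar unit):

    import math

    # Cross-range blur of a single radar cell: with coarse azimuth resolution,
    # a return at long range could be almost anywhere across several lanes,
    # so an overhead sign or a car on the shoulder can't be cleanly separated
    # from a stopped vehicle in your own lane.

    def lateral_uncertainty_m(range_m: float, angular_res_deg: float) -> float:
        return range_m * math.tan(math.radians(angular_res_deg))

    # ~5 degrees of azimuth resolution at 100 m -> ~8.7 m of lateral uncertainty,
    # wider than two typical highway lanes (~3.7 m each).
    print(round(lateral_uncertainty_m(100, 5.0), 1))  # 8.7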
Doesn't it also have to handle things like winding roads and steep angle changes when going uphill? It seems you need both vision and radar to confirm things, e.g.:
- when on level ground approaching a steep hill, the radar might not know if the object directly in front of the car is just a continuation of the road or not
- when approaching a sharp curve in the road, is the object directly in front of the car actually in its path, or is it just on the side of the road
Are you sure about stationary? My 2018 leaf does not do that. I replaced the system with Open Pilot and never looked back. I also have a Tesla Model Y sometimes for testing. It is amazing how much better OpenPilot is. On the highway it is unbeatable and also able to recognize stationary objects like cars and motorbikes.
My Hyundai doesn’t seem to recognize stationary objects though. Also, the seconds after a car has merged into my lane in front of me are scary - my car doesn’t immediately recognize the new situation.
My 2019 Subaru Crosstrek has adaptive cruise control, and I've had it hit the brakes pretty hard when there's nothing in the way. Luckily it doesn't happen often.
I have a 2021 one and it has never hit the brakes with nothing standing in the way. I don't have the manual in front of me, but IIRC it should only activate without an actual obstacle if the circumstances are such that something may be confused for an obstacle (e.g., driving down a steep enough hill that the end of it may be detected as a wall; that has never happened to me, but I've never done more than 25-degree slopes).
I know you're not asking for advice, but I'd recommend having that checked. My recollection is that the manual spends far more pages listing the circumstances under which the system may fail to apply the brakes than those in which it may brake when it didn't have to, so it seems odd enough that I'd get it looked at if it happened to me.
I've never had this happen on my 2017 Forester either in ~30,000 miles of highway driving with the adaptive cruise (the Subaru system is fully vision-based with no radar).
The one issue I've noticed is that it sometimes hits the brakes for cars that are obviously exiting the highway because a tiny sliver of the car is still technically in the driving lane, so I've learned to keep my foot over the gas pedal to override it in that case. The amount of braking isn't scary though, it's just annoying because there's no need to slow down at all.
I have a 2021 Subaru Legacy and I've never had the phantom braking either. I've put about 10k miles on the car, with at least 7k of it on adaptive cruise. All of the miles have been on country roads with hills, sharp turns, traffic, inclement weather, etc.
My Outback has never done it in over 20k miles, but a prior Chrysler vehicle warned but seemed to correct itself before engaging the brakes when going around a curve on many occasions.
Sadly, Subarus don't make headlines though. A Tesla does just about anything and it's local news and Facebook fodder. I don't get people texting me about typical ICE cars on fire after accidents ~ for some reason people think Teslas are noteworthy in that regard? Like other cars don't have sketchy LKAS/cruise control systems or crash?
>> My 2019 Subaru Crosstrek has adaptive cruise control, and I've had it hit the brakes pretty hard when there's nothing in the way. Luckily it doesn't happen often.
> Sadly, Subarus don't make headlines though. A Tesla does just about anything and it's local news and Facebook fodder.
Why sadly? Subaru hasn't been loudly making empty promises about self-driving capability and hasn't been aggressively pushing not-fully-cooked self-driving software onto the road.
I think people should know that EVs come with the same risks as ICE cars. It's old news that ICE cars catch on fire. They've been around 100+ years in one form or another, why would we report on it? I mean there's a tank full of volatile fuel in the car.
EVs are new still. They're hyped and covered like crazy by the media in a lot of positive ways, so why should they not get equal press coverage when there is negative news? Once EVs hit a certain mass, they won't be novel anymore, and nobody will care.
More anecdata: In over three years my Volvo has never phantom braked to the best of my knowledge. ACC is far and away my favorite and most used Pilot Assist feature too.
I have a "legacy" Model X which still has the radar fitted. For now, autopilot still uses the radar on these models, and I get little or no phantom braking; probably only once a month, or less, and I drive on autopilot most days.
I fear the update that disables my radar. (And actually I'm planning to replace the Model X this year for something other than a Tesla, because of the awful service experience.)
Boosts confidence in the company if the only way to get any comment is the social media account of the guy who also tries (tried?) to buy said social media.
I rarely have this issue, but I have a 2019 Model 3 with radar and Autopilot (not full self driving) ~ although I was admittedly extremely slow to upgrade to the latest v11 software and have experienced it more recently. It seems to be a major problem with cars that shipped in the past year or two without radar. Not including radar because of the parts shortage seems like a blunder by Tesla, one they're now struggling to solve with software based only on cameras/vision.
They should give drivers a simple way to notify Tesla exactly when phantom braking happens.
E.g. use the mic and when Tesla users say ‘Tesla phantom’ it would send the flagged data to a central repository to train against.
Regardless of your feelings towards Tesla, we need someone to figure out safe self driving before the effort is halted and humanity has to wait a century before a new generation tries again (like with nuclear tech today).
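As a purely hypothetical sketch of the "flag the last N seconds" idea being suggested here (nothing below reflects Tesla's actual software; every name is made up for illustration):

    from collections import deque
    import time

    class TelemetryRing:
        """Keep a rolling window of recent telemetry so it can be flagged on demand."""

        def __init__(self, window_s: float = 10.0):
            self.window_s = window_s
            self.samples = deque()  # (timestamp, frame) pairs

        def record(self, frame: dict) -> None:
            now = time.time()
            self.samples.append((now, frame))
            # Drop anything older than the reporting window.
            while self.samples and now - self.samples[0][0] > self.window_s:
                self.samples.popleft()

        def flag(self, trigger: str) -> dict:
            """Package the buffered window for upload when a report is triggered."""
            return {"trigger": trigger, "frames": [f for _, f in self.samples]}

    # Usage sketch: call ring.record({...}) every frame; when a wake phrase like
    # "Tesla phantom" is recognized, send ring.flag("voice") to a central repository.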
> They should give drivers a simple way to notify Tesla exactly when phantom braking happens
Even assuming that Tesla will review all the cases, it'll likely be useless. They just try to throw ML at the whole problem end-to-end, and that's what you end up with: solid performance for 90% of the cases, OK for 5%, and absolutely bonkers for the last 5%. That's just how modern ML works; you'll never get a complete solution and will always be chasing edge cases. That works OK for showing ads, less OK for driving deadly machines.
So I have to take my eyes off the road after my car just phantom braked in order to touch the 0.5x0.5 inch "Report" button on the 15" touchscreen outside my field of view? Oh, and the button will 100% move around on the screen after software updates, so my muscle memory from pressing it for the 1,000th time gets reset each time a new update rolls out...
the players in the early days of nuclear power weren't criminally negligent. self-driving isn't going anywhere while the states are still car-obsessed wastelands (which is to say never). it's climate cope: instead of changing the urban fabric, build some li-ion powered self-driving cars and convince ourselves we're doing the work while we continue to hurtle to our doom.
> we need someone to figure out safe self driving before the effort is halted and humanity has to wait a century before a new generation tries again (like with nuclear tech today).
That's quite the leap.
A far more reasonable statement would be 'we need someone like Ralph Nader to spearhead initiatives to regulate self driving cars to prevent the disasters we saw in automobile safety in the 1950s-1960s.'
I’d be interested in seeing statistics on automobile safety in the US before and after Nader, compared to the equivalent statistics in Europe. I have a suspicion that the progression in car safety is a more global phenomenon.
No disrespect to Ralph Nader, but cars claim about the same number of lives (more in most years) as they did then. Sure, many _many_ lives are saved by seatbelts, but car-centric inner cities combined with the psychology of invincibility for the people inside the vehicle are a recipe for dead kids on city blocks, and it's an absolute bloodbath out there.
Sure, there are fewer deaths per capita, and fewer deaths per mile traveled, but the chances of a fatality occurring at a given intersection haven't budged except in places where car-centric infra has been removed or scaled back.
I don't think a think of the children rhetoric is justified by the data. Only 18% of pedestrian deaths occur at intersections, and only 21% of pedestrian deaths occur during daylight. Of the 6205 pedestrians killed in 2019, only 424 were under the age of 20, and half of those were in the 15-20 year old bracket. The pedestrian fatality rate for adults is 10x higher than that for children (0.46 for 10-14 vs 4.6 for 55-59.) Furthermore, about a third of pedestrians killed were drunk.
Telling kids to stay inside during the day seems like fear mongering, but telling them to get home before dark is probably prudent. And don't let them drink (without supervision.)
> I don't think a think of the children rhetoric is justified by the data.
I'm not a proponent of "think of the children", but I also think that urban areas in the United States are largely designed in ways that put them at substantial risk of traffic fatalities. And I think that your data don't paint the rosy picture you seem to be using them for.
> Only 18% of pedestrian deaths occur at intersections,
You can also say that about 1-in-5 deaths occur at what are supposed to be the safest places for pedestrians to cross roadways. I'd expect that share to be far lower if intersections were in fact designed for people instead of cars.
> only 21% of pedestrian deaths occur during daylight.
I'm not sure this is particularly meaningful - are we to suppose that a civilized society is one in which death can take us, arbitrarily, at night for no good reason?
> Of the 6205 pedestrians killed in 2019
To be clear: this ICD code is very literal - it means people on foot. It does not include cyclists, people in mobility devices, people entering or exiting vehicles, and many other codes which are preventable by producing less car-centric infrastructure. It also does not include deaths from causes other than "traffic", such as those that occur in parking lots.
> The pedestrian fatality rate for adults is 10x higher than that for children (0.46 for 10-14 vs 4.6 for 55-59.)
I have no problem with the idea that adult pedestrian fatalities are a good target for reduction as well, but, as I've noted above, I can't help but notice that the report you cite focuses on codes which are responsible for a higher proportion of adult deaths and which ignore substantial causes of pediatric mortality from cars.
> Furthermore, about a third of pedestrians killed were drunk.
I had a feeling the victim-blaming was coming.
> but telling them to get home before dark is probably prudent
"Prudent" is to stop subsidizing lethality by building cities for cars. Then kids can enjoy the stars without a taxpayer-subsidized death lottery for petro profit.
> You can also say that about 1-in-5 deaths occur at what are supposed to be the safest places for pedestrians to cross roadways. I'd expect it to be far more remote if in fact intersections were designed for people instead of cars.
Considering way more people use crosswalks than walk across the middle of the street, only 1-in-5 pedestrian deaths occurring in a crosswalk seems like a very strong signal. Ignoring such an obvious consideration screams 'motivated reasoning'.
Following the traditional road safety advice keeps kids very safe: Use the crosswalk, look both ways, be home before dark and don't get drunk. Even accounting for all the failures to follow that advice, there are still only a few hundred child pedestrian deaths a year in a country with hundreds of millions of people. Calling it an "absolute bloodbath" is just sensationalism. I think you're just trying to scare people because you have an axe to grind against car culture for other reasons.
> Considering way more people use crosswalks than walk across the middle of the street
It is impossible to find good data on this, so I don't understand why you're using a tone of certainty in this assertion. Anecdotally, I find this difficult to believe; it seems to me that the opposite is true (at least here in St. Pete, Florida).
> Following the traditional road safety advice keeps kids very safe: Use the crosswalk, look both ways, be home before dark and don't get drunk.
"Traditional"? In what tradition? My tradition is not to put cars first and make kids avoid death by dancing around them.
Streets are for people. The idea that people need to be limited to crosswalks, exercise paranoia, and avoid them after dark suggests strongly that the points I'm making are very sound.
> there are still only a few hundred child pedestrian deaths a year in a country with hundreds of millions of people.
I don't think a few hundred deaths - among a cohort extremely unlikely to die of any cause - is anything to sneeze at. And again, you are ignoring other causes of death-by-car by focusing on this one ICD code. Why?
> you have an axe to grind against car culture for other reasons.
You got me. I'm a wealthy lobbyist for big bike. I'm sorry I didn't disclose this earlier.
> You got me. I'm a wealthy lobbyist for big bike. I'm sorry I didn't disclose this earlier.
Bike lobby huh? Usually the anti-car position is motivated by environmentalism, or the social/economic inequalities inherent to car culture that have been baked into our public infrastructure. Those would all be much better reasons than the number of children killed by cars, but bike lobby... I guess that works too.
While it's not impossible that you're telling the truth, I just don't believe that children killed by cars is really the root of your beef against cars. Rather, I think you're cynically using a "think of the children" argument to advance some other agenda (likely some combination of environmental, social and economic.) By the data, 'think of the children' is the lamest of all the plausible motives to oppose cars.
In case my sarcasm fell short of the obviousness for which I had aimed:
I do not work for a bike lobby, nor do I have any direct interest in the outcomes of these policies except extending the life of my kiddo and his friends.
I have no specific axe to grind. It's just absolutely obvious to me, from extensive travel through 49 of the United States, that children face dramatic and unprovoked hostility from the infrastructure in many of their cities.
The streets feel like they are literally made for cars instead of people. If you don't get that impression, then I really wonder what makes our perspectives so different.
Take your pick among credible urban planning organizations whose suggestions are more fleshed out than mine, but I'm happy to suggest some of the greatest hits:
* Stop widening roads
* Curb extensions
* Traffic circles
* Diverters
* Speed humps
* Leave more roads unimproved
* Prohibit cars from city centers entirely, as is often the case in European cities
I'm saying Nader did some good, but that regulation hasn't caused cars to stop being the most deadly threat people face in life before they age into the heart disease bracket.
Or perhaps more to the point for the autonomous vehicle discussion: that no amount of hacking on the concept (which I fully support, and think is amazing and important work!), whether prompted by regulation or not, is likely to finally end the era of petro-carnage. For that, we need to stop using our tax dollars to build cities (and other roads) that are for cars first and people last.
An anecdote in a sea of data - My 2022 Model 3 had a pretty serious phantom braking problem until 2 or 3 software releases ago. Now it seems to be entirely resolved in my use case. For context, I drive 200-400 miles a week, 95% of which using autopilot on divided highways and well-marked two or four lane roads.
I don't have AutoPilot: I have a 2013 Model S with zero smarts, not even parking sensors.
Every few months, while pulling out of a parking spot, my car will just SLAM on the brakes violently, sometimes when I'm in reverse and sometimes when I'm inching forward. Even at low speeds it's super unsettling.
Most of the “driver aids” available today have the competency of a drunk or highly inexperienced driver. Just maintaining lane is a problem for all the systems I have had the opportunity to try. They are easily flummoxed by worn or moved lane lines (like you would see in a construction area). I view it as a definite upgrade over traditional cruise control, but it’s so far away from self driving it’s laughable to even think about.
My 2018 Honda Civic variable speed control was excellent. Its lane assistance was crap. My 2022 Honda Civic variable speed control is even better, and the lane assistance is outstanding. The thing basically drives itself in 90% of the conditions I find myself in.
The experience has convinced me that iteration is going to get us to where we want to be given moderate timelines even if Tesla goes away, which, I wish they would because, like everything Musk is involved in, it's a dramatic shit show.
i recently rented a late model rav4 and described the experience of radar cruise (on an admittedly challenging road) to be "like having a drunk lead-footed teenager at the wheel."
lots of overaggressive acceleration and deceleration, with all too frequent state changes.
personally i'd rather just have regular cruise and something that will drive the car competently at low speeds in heavy congestion.
it was apparent that it was a very simple system that did not attempt to do any estimation beyond what was immediately visible to the sensors. so if there was a change in the pitch of the road, it would immediately react as the sensors became occluded and then regained their field of view.
maybe in situations that challenging, it should complain and force manual control.
rav4 primes look like awesome vehicles by the way... if i were going to get a late model green vehicle, it would definitely be on my shortlist.
Why or how people deem "phantom braking" to be even remotely acceptable is completely beyond me. You are putting your life in the hands of a multi-ton steel box with a mind of (seemingly) its own and it "phantom braking" at highway speeds does not alarm you? In a normal world all of those cars would be immediately recalled.
As with everything, a tragedy must happen before something is done. I'm not condoning this behavior, but that's the way things are -- I'd be surprised if the opposite happened, and preventive action was taken. It took decades for mass adoption of simple safety devices such as seat belts and third brake lights.
I had to laugh at this statement - imagine if I had to tell a patient that he had to learn the quirks of his defib - it "might" randomly just restart your heart at inopportune times, but just hold on while it happens :).
2021 MYP w/o FSD. Happens often. Usually related to a corner while on cruise control, but I can reliably make it happen cresting a local hill. Have also had it happen while passing a flat-deck trailer on the interstate a few weeks ago.
I've completely stopped using Autopilot whenever there is a vehicle behind me (unless it's a highway traffic jam and I'm going <25 mph) because of phantom braking. I only use it when the road is completely empty, which of course doesn't happen very often. It's just too unpredictable, and I have no interest in being a danger to others just because I want to play with some tech.
All of these issues with Tesla cars and the stock still trades at 100x earnings. What dumb shit will old Musky Elon do to tank the stock price next I wonder? Is the halo of being the face of all the SpaceX and Tesla engineers' work going to fade away? Probably not.
On an unrelated note, just a friendly reminder that the Chevy Bolt is only $26k with 250 miles of range. How much is the most affordable phantom braking, Full-Self-Driving-driver-should-always-pay-attention, pedestrian-squishing Tesla again? Model 3 Rear-Wheel Drive: $44,990 to $46,990. Huh.
Yes but, "Tesla". If it’s named after the best inventor the world has ever known, and it’s been guaranteed to be 1 year from Full Self Driving For Real by Elon The Great himself, it has to be true right?
All those stupid car makers can’t keep up with so much bullshit, I mean, ground breaking innovation.
Last week I had a Model 3 cross an intersection in front of me causing me to slam on the brakes to avoid hitting it broadside. It was stopped at the stop sign, I did not have one, and it went anyway. I have no idea if the driver was in control or it was FSD Beta.
Couldn’t help but wonder if there should be a way for other drivers to report bad behavior by self-driving vehicles. I don’t see how it could be implemented or how it would work though.
If something obstructs the camera, autopilot will signal as such and disengage. I assume there's _some_ overlap in camera coverage but it won't work if they're disabled. Also won't work if there's too much glare.
My parents-in-law own a late-model VW. It’s not my car, but I drive it regularly when we visit. Despite limited use, in the last year or so I’ve had 3 phantom braking incidents - one with my son in the car that almost resulted in an actual collision with another vehicle.
I hate this car so much. I cannot believe that VW thinks this junk technology is a good idea and I will never, ever buy a VW.
I still don't understand why this is a hard problem. You can make radars and lidars cheap. You can confirm radar findings with ultrasonic sensors.
If the global priority was safety at all costs, a lot of this stuff could probably be solved with deterministic algorithms and some mandatory changes to manual cars (like an unsafe-distance warning radar).
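For what a deterministic distance check could look like in principle (the reaction-time and braking numbers below are assumptions for illustration, not values from any production system):

    # Gap needed so the rear car can stop even if the car ahead brakes hard.
    # All parameters are illustrative assumptions.

    def min_safe_gap_m(v_rear: float, v_front: float,
                       reaction_s: float = 1.0,
                       a_brake_rear: float = 4.0,   # m/s^2 the rear car will brake at
                       a_brake_front: float = 8.0   # m/s^2 worst-case front braking
                       ) -> float:
        gap = (v_rear * reaction_s
               + v_rear ** 2 / (2 * a_brake_rear)
               - v_front ** 2 / (2 * a_brake_front))
        return max(gap, 0.0)

    def unsafe_distance_warning(gap_m: float, v_rear: float, v_front: float) -> bool:
        """Deterministic rule: warn exactly when the measured gap is below the minimum."""
        return gap_m < min_safe_gap_m(v_rear, v_front)

    # Both cars at 31 m/s (~70 mph): ~91 m of headway needed under these assumptions.
    print(round(min_safe_gap_m(31.0, 31.0), 1))  # 91.1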
How much better than a human driver does a self-driving system need to be before we as a society will accept fully autonomous vehicles? That seems like the key question which we are spending very little effort trying to understand.
Until we have a national conversation about this, I suspect self-driving will be an intractable problem.
A human driver is no longer the null hypothesis they have to beat. Most vehicles in the US are now sold with collision avoidance systems. The alternative to an autonomous vehicle is a vehicle with the combined efforts of humans + automated systems.
Also, until there's a SAE level 5 vehicle in existence, we're comparing different sample sets. The situations humans crash most often in are many of the same situations where current autonomous vehicles simply won't operate at all.
We may see these sample sets get even more skewed as we partially automate driving in levels 3 and 4. Because humans will have to drive in the more demanding scenarios where automated systems refuse to do so.
Well, there are different failure modes that we care about too. People think about first-party losses much differently than third-party losses, and people currently take actions to mitigate those losses. Preventable crashes can be avoided by avoiding the situations in which humans engage in dangerous behavior, and that factor of control is why people are more comfortable with that failure mode than with a failure mode which may be more unpredictable.
For instance: If I'm worried about killing my child in a first-party drunk driving crash, the average crash rate is irrelevant, because I can decrease the risk to zero by not drinking and driving. If I'm worried about third-party risk of a drunk driving crash, I can greatly mitigate that risk by choosing when and where I drive with my child in the car.
> No amount of "Well, the number say this truck is statistically safer when driven by a computer..." will move the needle on that
And I think the reason for that is that people subconsciously understand that the people who would be interested in (or financially capable of) purchasing an autonomous vehicle for the safety features are the same people who have lower accident rates than average.
Nah, I think it’s more that people like to be in control. Planes are safer than cars, but plenty of people are afraid of flying. Maybe there’s something psychological about going hundreds of miles per hour thousands of feet in the air, but what I usually hear as the reason is that people don’t like that they’re not in control of the systems.
(Also that media plays up plane crashes, but that would almost certainly be true of autonomous crashes as well.)
> the factor of control is why people are more comfortable with that failure mode, than a failure mode which may be more unpredictable.
There are situations in which control mitigates risk, and there are situations in which it doesn't. While the average crash rate is relevant to public health, it's really irrelevant to someone who is dead.
Beating the national average never should have been seen as the finish line. The average is dragged down into the gutter by a minority of drivers who choose to engage in very poor behavior, like driving drunk, speeding, making unnecessary trips at night and bad weather, street racing, etc. By simply avoiding these few behaviors, it's easy to put yourself well above the average. (inb4 "everybody thinks they're better than the average"; most people who think this are correct. Bad driving doesn't have a normal distribution.)
If you want to impose self-driving on everybody, it should be at least as good as the best drivers. It would be a grave injustice to force it on the best drivers if it can't match their performance.
As for the bar to pass for simply allowing, not mandating, self driving: I think they should first exceed the skill of a student driver. If they can't then they don't belong on the road without supervision, just like a student driver.
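A toy calculation of that skew, with made-up numbers: a small minority of much riskier drivers is enough to pull the mean crash rate well above what the typical driver experiences.

    # Illustrative only; shares and rates are assumptions, not measured data.
    safe_share, safe_rate = 0.9, 1.0      # crashes per million miles for most drivers
    risky_share, risky_rate = 0.1, 10.0   # a 10x-riskier minority

    mean_rate = safe_share * safe_rate + risky_share * risky_rate
    print(mean_rate)  # 1.9 -- the "average" is ~2x the typical driver's rate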
You can see this in actuarial data that the IIHS posts. Often, cars that perform comparably in crash tests have wildly different real world loss rates. The difference is likely explainable by differences in driver behavior.
For example, look at any car also sold under a hybrid trim model, the hybrid almost always has lower rates of injury. For an identical chassis.
Or, consider the brands that have good crash test ratings but target subprime buyers, in comparison to brands that have similar crash performance but target higher incomes and/or higher credit profiles. Nissan and Kia vehicles often have good crash test ratings, but they consistently have bad real-world crash data.
I noticed the same thing. It seems pretty reasonable that the kind of person who drives a 350hp Volvo hybrid wagon is most often not the same kind of person who drives a 350hp Mustang.
I don't disagree, but that's not quite the problem space I am curious about.
My thesis is that as a society we've accepted that humans make mistakes and will kill people on the road. I suspect that society will be much less willing to accept that sort of thinking when it's an AI-driven vehicle making the mistakes, no matter how much better at driving it is. I suspect we will always want, or need, someone (as opposed to something) to blame when the worst happens.
It's also an exotic, scary risk, as opposed to a familiar risk we've come to accept, like the risks of nuclear power versus the risks of fossil fuel power. We'll never compare a death from one fairly with a death from the other because one feels extra and the other feels normal.
A future in which self-driving cars kill 3,000 people per year in the U.S. sounds unimaginably grotesque to most people, even though that's less than a tenth of the number of people killed by human-driven cars.
> Until we have a national conversation about this, I suspect self-driving will be an intractable problem.
I hate to be a pessimist, but I'm not sure "national conversations" are all that useful in the U.S.
Consider the "national conversation" leading up to Obamacare. NPR (at least) reported on the known pros/cons of various national healthcare systems around the world. AFAIK some of them were simply better than what we had, regardless of whether one looked at them through a conservative or liberal lens. And yet we still ended up with a system that was only marginally better IMHO.
I think it's easy to be biased towards agreeing with your point here because failures are much more visible in this realm than the successes.
There's a whole host of things that we've 'worked out' on a national level. Schools, National Parks, Interstate Highway Systems, Fire Departments. All these things either didn't exist, or took hugely different forms only a couple generations ago, and now they are almost entirely accepted in their present form. We just don't think much about it now because this stuff is so accepted, even though at one point it was a whole big complicated thing that needed to get worked out and agreed on.
For the ACA, whether it's better for you or worse for you depends on what income bracket you're in, whether you have preexisting conditions, and what kind of health care your job previously offered.
Whether it's better or worse for the country is a complicated question that I don't have an answer to, and no one does. Because it's a nuanced answer, if you're being serious about quantifying "better". Unfortunately most people, across both political factions in the US, would rather adhere to their ideologies than think critically about things in this era.
I don't think real L4+ self-driving will be feasible until we change how the roads are designed to support such vehicles. Having clear indicators and markings that are maintained properly will not only help L4 but also L2 and even human drivers.
So yes, it would require a standards-based solution - and that's where government involvement is essential.
Requiring dedicated infrastructure eliminates the point of self driving cars - you might as well just build public transit infrastructure.
On top of that, this won’t eliminate any of the real edge cases - humans and other cars will still violate these rules and drive or walk across these roads, at which point the AV will still need to know how to handle this novelty.
These are the main reasons AV companies don’t consider approaches like this.
Nothing of what I said indicates dedicated infrastructure. In fact, I don't think that would work (as you say, why not just build public transit instead).
I'm thinking more like expanding priority bus-lanes (if you have visited regions outside the US these are common) and ensuring there are markers that are abundantly clear for everyone and more usable for self-driving vehicles.
I mean I’d count adding lane markers as building infrastructure - and the same caveat of people disobeying them applies. Not to mention you wouldn’t be able to add dedicated lanes outside of main thoroughfares and highways.
Personally, I'm kind of done having "national conversations"; I don't think having all these fruitful perspectives has really driven us anywhere but mad. The people at the NHTSA need to be non-appointed SMEs who heavily prioritize safety and balance efficiency and innovation below the safety threshold. The government then needs to respect them as an institution that protects people within its domain (drivers and pedestrians), and they need to either guide, force, or destroy violators.
The problem with this approach is that the average human driver includes teenagers, very old people who shouldn't be driving, drunk drivers, distracted drivers, speeders, reckless drivers and lots more categories whose accident risk is a LOT higher than the baseline. As someone who doesn't fall into any of these, I wouldn't accept a self driving system that's merely better than average, it has to be better than ME.
It also isn't a problem that's binary, I think we could very easily end up in a situation where the AI is a better driver 99% of the time but will occasionally cause an easily preventable accident. Who will be liable once your self-driving car runs over a kid because it mistook them for road markings? I have the feeling that people are going to be more OK with explainable human accidents than unexplainable AI accidents.
I agree with this 100%. To compound this issue is the fact that when I, as a person, make a mistake on the road, the problem space is limited to me.
When a self driving car makes a mistake, every other vehicle on the road running that software/hardware is implicated as well. Imagine having to decide if all humans should have their license suspended until root cause analysis can be completed every single time a human is involved in a car crash. That is the problem we will need to grapple with when it comes to self driving cars.
This is something that robotaxi companies have already spent a ton of time figuring out - there are a ton of different metrics the cars have to hit before they remove the driver (as companies have done in Phoenix, San Francisco, Miami, and Austin).
There are already robotaxis on the road right now.
Yeah, gotta agree. I suspect it is software-version dependent - the most negative comments here always seem to be from folks who don't own Teslas - whereas people who do own them seem to be mostly saying that the issue is getting better, not worse.
Tesla topics seem worse than political topics on HN for triggering blind rage
Oh dear. This is much worse than I expected: unexpected braking, and FSD (Fools Self Driving) is involved in these problems, once again putting the lives of the driver and others at risk on the road.
> Tesla has drawn scrutiny from safety advocates and regulators for its willingness to allow its customers to test what is essentially an unfinished version of a software product that Musk has long promised will lead to fully autonomous vehicles on the road.
Now here's a similar recent scenario with 'blockchain'.
Suppose an unfinished 'beta' blockchain with multiple billions at stake goes down or gets hacked and the price falls; is the excuse 'it's beta software', 'sorry you lost your savings'? Hence, if many users of the FSD beta are putting their lives at risk or getting themselves killed over faulty safety-critical software, is the excuse also going to be 'It's beta software'? 'Sorry you lost your life, you knew the risks!'
It really sounds like a way to push all the blame onto the driver rather than the developer, with little to no protection for the users, and it screams for tighter regulation to do away with that public 'beta' dodge of responsibility for software that carries an incredible amount of risk.
I'd like to know when "beta test" entered the layman lexicon and came to be an accepted justification for a product under-performing expectations. Google's ngram viewer suggests usage of the term in literature began around 1980, but that doesn't really capture the use and perception of the word by consumers.