Friendly reminder that this system is HEAVILY limited, with the following restrictions:
- Must be under 40 mph
- Only during the daylight and only on certain highways
- CANNOT be operated on city/country streets
- CANNOT be operated in construction zones
- CANNOT be used during heavy rain, fog, or on flooded roads
Tesla FSD navigating the complex city streets of LA for 60 minutes with zero human intervention.
This seems like a marketing gimmick by Mercedes at best; the two technologies aren't even in the same league. Any comparison is laughable. They are outmatched, outclassed, and absolutely outdone. Karpathy (now @ OpenAI) and the Tesla FSD team have really done an incredible job.
Working in the AV space, it's really frustrating how confidently people who have no idea about what's hard and what isn't go off about Tesla right now.
Mercedes has soundly beaten the last decade of Tesla efforts by reaching L3.
Three times this month I've personally watched FSD go off the rails and into a crash situation within 60 seconds of being turned on (I have a friend who loves to try it in San Francisco).
Had it crashed, it'd be on my friend, not Tesla. The fact that Mercedes is taking responsibility puts it on an entirely different level of effectiveness.
People also don't seem to understand that Mercedes has a separate L2 system that works above 45 mph and already scores better than AP according to Consumer Reports.
People don't go off on Tesla because of its lack of technical prowess. People go off on Tesla because their marketing department is a bunch of pathological liars.
Teslas with FSD are disengaging all the time. And even worse, it's not always FSD initiating: oftentimes FSD doesn't realize it's about to do something wrong, the human takes over, and then brushes it off with "well, it's a beta" or "I bet it would have gotten it, but I wasn't so sure".
Getting to a point where Mercedes can guarantee their system will correctly initiate disengagements, do so with a 10-second response window, and put the liability on themselves is a massive leap beyond anything Tesla has ever put out.
Again, this is what happens when you have literally no clue how the space works (and then turn around and accuse people of FUD.)
If anything, Tesla was found to be deactivating Autopilot in the second before a crash [1], embellishing the statistics of miles driven between crashes...
> And yet it’s been used on several orders of magnitude more miles than any other offering.
Based on what? ACC/LKAS has been a feature since before Tesla even existed as a corporation. You might be confusing their claims about FSD with AP (which they very much would like you to do).
Similarly, they love for people to compare AP to all driver crashes and fatalities, when AP only works in the situations that are otherwise least dangerous to drivers, in cars that are newer than the average car, driven by a demographic with higher-than-average safety records, etc., which means AP would need to kill a lot of people to not have better numbers than average.
For a point of comparison, the entire rest of the auto industry, worldwide, has fewer advanced-driving deaths than Tesla, despite millions more cars on the road with advanced driving functionality.
On an absolute and relative basis, Tesla's advanced driving is the most dangerous advanced driving system in the world.
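To make the baseline-mismatch argument above concrete, here is a minimal toy calculation in Python; every rate and mileage split below is an invented placeholder for illustration, not real Tesla, NHTSA, or insurance data:

```python
# Toy numbers only -- chosen to illustrate the confounder, not real crash data.
HIGHWAY_RATE = 0.5  # hypothetical crashes per million miles on limited-access highways
CITY_RATE = 2.0     # hypothetical crashes per million miles on city streets

# Fleet-wide "average driver" number mixes both road types:
fleet_avg = 0.4 * HIGHWAY_RATE + 0.6 * CITY_RATE   # 1.4 crashes per million miles

# A driver-assist feature engaged only on highways, even if just 10% better
# than a human on those same highways:
assist_rate = 0.9 * HIGHWAY_RATE                   # 0.45 crashes per million miles

print(f"All-roads average:   {fleet_avg:.2f} per million miles")
print(f"Highway-only assist: {assist_rate:.2f} per million miles")
# The assist looks ~3x "safer" than the average driver purely because of
# where it is used, before even accounting for newer cars or safer demographics.
```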
In the first few paragraphs it gives the impression Tesla has 5:1 deaths, but later you learn that they are including simple lane keeping and cruise control in the comparison, and that
> None of the cars using the automated systems were involved in fatal accidents
> None of the cars using the automated systems were involved in fatal accidents
If you mean Tesla, that is definitely false. Most famously, there was the Bay Area crash where Autopilot was conveniently disabled seconds before the car crashed into a barrier, killing the driver. In the aftermath, the driver's family and other Tesla drivers noted that a recent update to FSD had caused a regression, whereby Teslas would swerve toward the divider instead of away from it.
> simple lane keeping and cruise control in the comparison
Some of Tesla AP's worst fatalities are instances where the car was just replicating simple lane keeping functionality and yet the car still crashed into a big rig or emergency vehicle the Tesla didn't see but which any other carmaker's system would have seen.
Notably, in instances where the Tesla was at fault, Musk refuses to discuss the accident, even though he otherwise has no issue releasing information about the driver when the driver is at fault. It's telling that Musk does this only when it suits him, which means that every time he stays silent it's very strong circumstantial evidence that the car was at fault.
At some point you need to stop making excuses for Tesla's atrocious safety practices and abysmal safety record and just accept that they're fundamentally dangerous cars.
It's a review of a particular 10-month dataset. I'm making a comment about the framing of the article vs reality which might lead to misleading conclusions. Not claiming that Tesla (or any automated driving system) has never had fatal crashes, which is obviously not the case.
You said that "the entire rest of the auto industry, worldwide has fewer advanced driver deaths than Tesla, despite millions more cars on the road", what are your sources for that? Do you have numbers on non-Tesla L2 cars on the road to compare?
Not op but there are several groups that are independent but essentially aligned on anti-Tesla motivation that have reasons to support campaigns against Tesla: TSLAQ, union supporters, oil producers, legacy auto, to name just a few. There are more. And FUD is well known and easy enough that even individuals can practice it, so I wouldn’t read too much into that.
I can understand anti-Tesla motivation considering that Tesla doesn't act in a responsible way: they call their system "Autopilot" when it really isn't that; and they don't take responsibility in case of a crash. Both of these things should imho be illegal but somehow they get away with it.
I was thinking hedge funds paying troll-farms to bash anything TSLA related when they're short the stock (yes I actually believe that is a real thing that happens all the time), but sure those too.
> People also don't seem to understand Mercedes has a separate L2 system that works above 45 mph that already scores better than AP by consumer reports
Isn't Mercedes' L2 system non-upgradable, though? i.e. people with Merc's L2 system today won't be able to upgrade to L3+ later on, they'll need to buy a whole new car - right?
I bring this up only because I bought my X in 2018 with FSD prepaid - it's 5 years old now, and thanks to the gratis retrofitted computer upgrades it has the same level of FSD functionality as a factory-new Model Y purchased today - and I value that level of commitment to their car buyers - whereas all the other carmakers I've ever dealt with are the type that gladly charge $lots just for annual updates to the navigation maps SD card.
(I don't want to be seen defending Tesla, let alone Elon Musk, here, but Tesla has avoided plenty of the worst practices of traditional automakers. Okay, okay, excepting the heated-seats thing.)
According to Elon, FSD was going to be solved on HW2/2.5 back in 2016. Now HW4 is supposed to be the hardware that solves FSD. There have been multiple paid upgrades along that path, depending on exactly how Elon convinced a given buyer to pay for FSD previously.
The fact that you're going to claim that Tesla (the company that famously discounts vehicles wholesale before unannounced upgrades, silently downgraded its CPO program to a used-car program, and removed ultrasonic sensors without telling buyers they'd be missing basic features like Summon "temporarily" (read: 4+ months) until their vehicles were delivered) is showing a commitment to its car buyers... really says something.
It’s painful to see your matter-of-fact tone regarding things you have absolutely no clue about. You seem to know next to nothing about the auto space in general. Every carmaker raises and lowers their prices according to supply and demand. And no, there is no driver assistance system among any of the big manufacturers that comes anywhere near Tesla’s system. And yes, exploring a problem space that hasn’t been explored before will face setbacks and require many iterations. Those iterations are free for the customers, however.
Elon makes a lot of “promises”, you’re right about that. Although, he really doesn’t. He makes educated guesses that you take for promises. Only once Tesla’s own website and news outlets deliver those “promises” should you start taking them for actual ones.
I work on autonomous vehicles, what on earth do I know about LKAS or exploring new problem spaces?
It's not like a basic Corolla has had features like drowsiness detection for close to a decade.
And you seem to have forgotten that in 2016 Elon claimed _all_ Teslas had FSD-capable hardware, not contingent on whether you had paid for FSD. That means the upgrades to HW3 and HW4 were paid upgrades for people who thought they were paying for a car that could already do FSD.
Overall you're upset and barking up the wrong tree (hardly even in the right forest)
> I bought my X in 2018 with FSD prepaid - it's 5 years old now, and thanks to the gratis retrofitted computer upgrades it has the same level of FSD functionality as a factory-new Model Y purchased today
i.e. neither your 2018 Model X nor a factory-new Model Y is anywhere close to "Full Self Driving". Remember what Musk promised in 2017: "In November or December of this year, we should be able to go all the way from a parking lot in California to a parking lot in New York with no controls touched in the entire journey."
We're in 2023. You got scammed, and you're still defending the scammer.
Working in the space appears to be misleading you. It is quite obvious that Teslas can already drive themselves - the thread ancestor literally linked a video.
Your comment has complaints about an academic engineering definition/legal standard that, realistically, is probably going to be paid for in lives by delaying the roll-out and development of self-driving vehicles. We know from the march of AI in other fields that the potential for cars to drive to superhuman safety standards is just on the cusp of being technologically achievable, and it will likely be achieved with some sort of evolutionary approach of trying a bunch of things and seeing what works.
This is exactly the wrong time to be making automakers liable for experimenting. If anything, we should be lowering the standards automakers are held to in order to push through the last technical barriers and break into the oasis where fewer people kill themselves on the road. I've already lost too many tired friends to car accidents; we should be sparing no effort to get humans away from the driver's wheel. In a fair world, people advocating raising standards should be held liable for the lives they are throwing away on net.
Nobody is being forced to drive a car with autopilot if they aren't comfortable. The risks are obvious and understood. Personal liability is fine.
> Nobody is being forced to drive a car with autopilot if they aren't comfortable. The risks are obvious and understood. Personal liability is fine.
What about all the other participants on the road? Should pedestrians and cyclists also take personal responsibility for getting hit by a self-driving car?
The "driver" should be responsible. They were the one with most control over what brand of car was used, what conditions it was driving in and the choice to let it drive itself. They have the power to choose cars that don't run people over. Hold them liable and the situation will sort itself out as quickly as is efficient.
The situation would be very similar to how it is now, except a lot less people would be dying in car accidents.
It took me less than a few minutes to find, in the same channel, FSD failing and the human having to intervene. Here it is at the relevant timestamp, in a video uploaded 10 hours ago: https://youtu.be/KG4yABOlNjk?t=995
And there are even worse failures... Looking at all these, I simply do not believe the posted videos are not edited, or not just a selection of one success out of many failed runs.
Since the video was streamed live on YouTube, no editing was done...
These videos only show the successful runs; we need to see them all. That is probably why Tesla is not putting their money where their mouth is.
Anecdotally, in LA, I turn off FSD basically every minute while trying to use it, due to it doing something slightly “inhuman”, not ideal, or too much to the letter of the law: signaling incorrectly while staying in a lane that curves slightly, etc.
I can’t imagine letting it go for a full hour without causing some road rage or traffic blockage.
To be clear there is a definite driving “culture” in LA that is very permissive and aggressive (out of necessity). FSD doesn’t follow this culture.
Well, I was still trying to give FSD a chance, but since you are raising the stakes... Let's agree on Marrakech as the baseline - https://youtu.be/SsZlduEIyPQ
And we can also test it in Ho Chi Minh. Tesla says it has superhuman abilities... - https://youtu.be/1ZupwFOhjl4
I only started noticing it since the Autopilot/FSD fusion update a month or two ago. I haven’t been driving much the past few weeks so maybe it has been fixed.
Every time you disengage it invites you to leave immediate voice feedback as to why, and presumably they are using all this feedback in conjunction with camera and data feeds from cars that are opted in (which includes all FSD beta cars I believe).
So, they are getting what they need to make it better.
The problem with Tesla is the lack of LiDAR, not training data.
You need to be able to accurately do bounding box detection in order to determine whether that billboard of a person is real or not or if that dog with a hat should be avoided.
Research has conclusively shown that vision only systems simply can't match LiDAR for this task.
You are right that one problem is possibly that humans don't drive with their heads fixed in place; they move their necks and constantly adjust viewing angles.
w.r.t. the supercomputer point, AI systems have been able to outperform humans at specialized tasks for a while.
This is a great example! I hope people will take the time to click and watch it drive.
I think it’s a tough call in this case. The Tesla is trying not to block an intersection where a light is green but there’s no room to clear the intersection on the other side due to traffic ahead. In fact an oncoming car is able to turn left in front of the Tesla because it waited.
Probably the law is that you should NOT enter the intersection in such a state, but the human nature would be to make a more nuanced judgement of “how bad would it be” to continue thru and possibly get stuck sticking out into the intersection for a bit until traffic moves again.
I would think - how long might I be stuck? Would it actually impede other traffic? Also factors like: am I late for work? Am I frustrated because traffic has been horrible, or am I enjoying a Sunday drive?
Ego (Tesla’s name for the driving software) doesn’t get impatient. It can maximally obey all traffic regulations at all times if they code it to do so, even if human drivers behind might be laying on the horn.
This little clip really shows how much nuance is involved in designing these systems, and how doing the technically right thing can either be wrong or at least ambiguously right.
LA has an unwritten law where 2-3 cars make unprotected lefts after oncoming traffic has cleared on yellow/red lights. Letting FSD drive, it can’t honor this “cultural” (technically illegal) behavior. If I am operating it, off goes FSD in that moment.
Tesla has been specifically reprimanded by NHTSA for allowing FSD to drive like a human rather than to the letter of the law. Rolling stops are the example I remember, but basically it applies to anything.
Dude it's totally following the law. You're allowed to enter the intersection for an unprotected left even if it's not clear to turn (possibly not if the end of the intersection you're going to is full of cars). If you are in the intersection, you're allowed to clear when possible regardless of the color of your light.
Most intersections with unprotected left turns let you fit at least a whole car, sometimes two in the intersection to wait, and the third car has its front wheels on the paint, so it's totally in the intersection.
Correct. The law states that as long as your front wheels are in the intersection prior to the light turning red, you should proceed through the intersection. Inexperienced drivers that either stay in the intersection or try to reverse into traffic behind them are breaking the law and create a huge hazard for others.
Even if the light for opposing traffic turns green while the turning car is still clearing the intersection, opposing traffic is legally required to wait and not enter the intersection until the turning car has cleared it.
While you are allowed to enter the intersection during a yellow, you are also legally considered to have been warned, and you would be considered liable if doing so causes an accident.
This extra liability would seem to make having self-driving cars err on the side of stopping to be in the interest of automakers, who would be exposed to that liability.
Incorrect. As long as your front wheels pass the stop line prior to the light turning red, you are legally driving through the intersection and should continue and clear the intersection on the other side (not stop in the middle).
Imagine how crazy the law would be if this were the case:
- Light changes from green to yellow 1ms before your front wheels pass the stop line
- Other direction traffic runs a red light and hits you from the side
- You're now somehow liable because your wheels entered the intersection with zero ability to react quickly enough (no human can react in 1ms and no car can stop that fast) and the other driver that clearly blew a red is not?
Mostly the cars will have entered during the green, not the yellow, and were waiting for a chance to turn. When there was no opportunity to turn while the light was green, they must turn while the light is yellow or red, because they are already in the intersection and must clear the intersection.
You can't legally enter an intersection (advance past the stopping line if present) until you are clear to perform your transit of the intersection. Left on red is never legal and marginal on yellow.
> You can't legally enter an intersection (advance past the stopping line if present) until you are clear to perform your transit of the intersection.
Yeah, I've seen tickets written for this in Emeryville, CA for cars that entered on green and weren't going to clear the intersection in the next ten minutes. (I don't know if the intersection is still that bad.) Bicycle cop, because there was no way they'd get a car in there.
> Left on red is never legal
I recall that this was legal in Michigan, where I grew up, but it either had to be onto or off of a one way street. I could never remember which, so I didn't try it.
But I did see someone make a left on red on Market Street in SF from the rightmost lane, which is very illegal.
> I recall that this was legal in Michigan, where I grew up, but it either had to be onto or off of a one way street. I could never remember which, so I didn't try it.
Yeah, this covers the case when turning left doesn't cross any oncoming lanes of traffic - so it's basically the same as turning right on red at a normal light.
Although searching for this now, it does appear to also be allowed from a two-way,[0] which sounds vaguely familiar but isn't really the same as turning right on red.
The law here is clear. There has to be "sufficient space on the other side" before you enter the intersection. There is no need to make the entire maneuver at once.
> Left on red is never legal and marginal on yellow.
Not particularly relevant to the gridlock violation at issue here, but left on red from a one-way street to a one-way street is legal under the same conditions as right-on-red generally in a large majority of US states, a few of which also allow left-on-red from a two-way street to a one-way street. Left-on-red is only completely illegal in a few states.
Again...that was not the failure. Actually there are two failures:
- First failure: it stops too far away both from the road intersection and from the pedestrian crossing. And of course you should not stop on top of the pedestrian crossing. The car behind the Tesla noticed that right away and went around it. Even the Tesla driver in the video commented on that. I wonder what FSD version that driver has :-)
- Second failure: the car ahead, the one already past the crossing, started moving, but the Tesla would not move at all - only when the video author accelerated, as he mentions in the video.
If you stay there, blocking traffic for no reason, and do not proceed when you are clear to go, as it failed to do until the driver accelerated, you will fail a driving license exam in most countries.
"Most countries" would fail a driver for not crossing an intersection until there is space on the other side? I think you're being a little absurd here.
The driver in this case could see that the cars beyond the blue car were starting to move, and thus predict that he could cross a little bit early (betting on the space being available by the end of the maneuver, but risking being the ass who ends up blocking the crosswalk after miscalculating).
I’m left wondering what this comment proves? The four situations you posted were the most minor of minor annoyances when the car was being too cautious, and you’ve decided they must be cherry picked without any evidence given.
It shows the system does not have a semantic understanding of what is happening.
That is why it stops so far away from the gate; it does not know it is a gate.
They have a driver attention detection system that works even with dark glasses, like the driver is wearing in this case. Also, even if you do not take over within the 10 seconds (for example, if the driver faints or has a heart attack), the car will stop safely on the side of the road, alert emergency services, and unlock the doors for first responders.
In that 3 year old build, you can see the driver disengage out of pure fear before the turn started. You can also see the route planner plotting the corner maneuver. You can then see the same software completing the corner twice when the driver does the exact same thing but merely refrains from overriding.
Plus it's a U-turn situation on ancient version of FSD Beta from two years ago (see the dotted lines in the display, or the YouTube upload date for that matter) that did not support U-turns. And it's not necessarily clear it wasn't going to brake more abruptly if the driver had not intervened.
I'm not sure exactly what your point is. The Tesla does sometimes require intervention, that's why it's Level 2. But it's still attempting to drive in significantly more complicated situations than this Drive Pilot thing. Does Drive Pilot stop at stoplights or make turns? I don't think so.
Regarding deceptive editing, plenty of people post their Teslas doing squirrely things and them intervening. So it's not like a secret that sometimes you have to intervene.
We know Tesla cannot match Mercedes. We don't know whether or not Mercedes can match Tesla. Mercedes isn't reckless enough to let untrained fanboys play with defective software in two-ton vehicles on public roads.
"We know Tesla cannot match Mercedes" - how? You know this?
"reckless" "untrained fanboys" "defective software" - what is this tone? Why is it reckless? Why do the fanboys need training? Why do you think the software is defective? These are significant unjustified claims!
To me, it seems each company has a different strategy for self-driving, which aren't directly comparable. Beta testing with relatively untrained people on public roads seems necessary for either strategy though.
Mercedes' system does not do most of the things Tesla's does, right? Such as stop at stoplights or make turns, or do anything at all off-highway. It's a significantly different product, and since they didn't try to do many of the things Tesla is trying to do, it's pretty difficult to claim that those things aren't necessary because Mercedes didn't do them, when they haven't even attempted to deliver the same feature.
It's not necessarily worse, since there is a person driving the car who can prevent the car from behaving badly. What's the safety difference between this and a regular cruise control, which will happily plow you into a wall or car if you don't intervene?
And, empirically, there's no evidence that these cars are less safe when driving this way. Tesla claims people driving with FSD enabled have 4x fewer accidents than those driving without it, and nobody has presented any data that disputes that.
"...Several attempts have been made to reach out to Tesla for comment over the years since these numbers first started coming out, however, Tesla closed its press relations office and no longer responds to press inquiries..."
This critique of their impact report (I was referring to a more recent statement) only goes as far as saying FSD beta is equally safe to humans driving, not worse, which seems perfectly acceptable?
Depends on the average human driver. Especially if the average includes motorbikes.
Saying FSD on a Tesla has the same statistics as the general driver population paints a grim picture, as it puts it at strictly worse performance than its peer vehicles (SUVs or saloons, depending on the model).
You do not get to assume safety in a safety-critical system, period. The burden of proof lies with the entity trying to release a dangerous product to prove that it is safe, not to demand that everybody else prove it is unsafe. The entire argument that there is no evidence that it is unsafe and thus it is okay is wrong to its very foundation. It is such a bad argument that a licensed engineer would stand a good chance of literally losing their license if they advocated such a safety-regressive position. Whoever you heard that from, who is actually promulgating such inane logic, is completely unqualified to talk about safety-critical systems engineering and should be completely ignored.
First, thanks for being an asshole. Second, the product has been released, I did not engineer it, and people online are arguing about it. I’m interested in whether the criticism I’m seeing is valid. Is your idea that anyone who claims something is unsafe, for any reason, should be immediately trusted? If not, how do we have such a discussion? How do we weed out poor arguments? I gave a first-principles argument for why the system might be about as safe as a commonly accepted feature, and also mentioned corroborating evidence of it’s safety. You are welcome to disagree with it, but so far, like most people, you’ve just come in hotheaded and not provided any substantive argument.
It is assumed unsafe. It must be proven safe. This can be done via an engineering, statistical, or other appropriate analysis performed by entities with full access to the design specifications, usage information, etc., who are competent to do the analysis and who are unbiased, with no conflict of interest. This has not been done for Tesla FSD, therefore it must be assumed to be unsafe. As Tesla deliberately misclassifies their Full Self Driving beta-testing program to avoid government reporting requirements and does not release raw data to any unbiased non-government entity or research organization, it is literally impossible for any external analysis to prove that Tesla FSD is safe. The most a third party can do is affirmatively prove that it is unsafe, as you need millions of times less information to affirmatively prove it is dangerous relative to existing systems.
I am not joking when I say millions of times less information. The aggregate motor vehicle fatality rate in the US is ~1.3 per 100,000,000 miles. To even have a chance of proving safety you would need to analyze on the order of 1,000,000,000 miles exhaustively. In contrast, to demonstrate it is not safe you would only need a list of around 20 fatalities over those same billion miles to affirmatively prove it is unsafe. A list of 20 names versus the raw data for 1,000,000,000 miles is at least a factor of millions in information required, and qualitatively different to achieve. Proving safety is enormously difficult and basically requires direct access to the raw information. Proving unsafety is extremely easy and can be relatively easily demonstrated by third parties when dealing with safety-critical systems.
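As a rough sketch of that information asymmetry, assuming (purely for illustration) that fatalities follow a Poisson process and using the ~1.3 per 100,000,000 miles baseline quoted above; the exposure and death counts are placeholders, not Tesla data:

```python
from math import exp, log

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), summed directly."""
    term, total = exp(-lam), exp(-lam)
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

BASELINE = 1.3 / 100_000_000   # fatalities per mile (US aggregate figure above)
MILES = 1_000_000_000          # one billion miles of exposure
expected = BASELINE * MILES    # ~13 deaths expected at the baseline rate

# Proving "unsafe" needs little information: a short list of deaths.
# E.g. 20+ deaths over the same exposure is unlikely under the baseline rate.
p_at_least_20 = 1.0 - poisson_cdf(19, expected)
print(f"Expected deaths at baseline over 1B miles: {expected:.1f}")
print(f"P(>= 20 deaths | baseline rate): {p_at_least_20:.3f}")

# Proving "safe" needs enormous, auditable exposure data: even a perfect
# record of zero deaths only rejects the baseline after hundreds of
# millions of verified miles (exp(-BASELINE * m) < 0.05).
miles_needed = log(1 / 0.05) / BASELINE
print(f"Zero-death miles needed to reject the baseline at 5%: {miles_needed:,.0f}")
```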
Safety engineering is not some sort of new idea that we need to reinvent by the seat of our pants from first principles; it is a mature methodology regularly deployed in aerospace, medical, civil, and automotive engineering applications. Assumptions of safety are fundamentally incompatible with safety-critical engineering. This is not a fringe opinion; it is literally a core concept at the foundations of safety-critical engineering, and not understanding that concept demonstrates a fundamental misunderstanding of modern safety. It is like a geologist arguing the Earth is flat; anybody who suggests such an idea is so fundamentally divorced from the ideas of the field that their knowledge not only fails to reach the level of an expert, it fails to reach the level of even a basic practitioner.
Tesla not only isn't putting their money where their mouth is, they have a long-standing history of using illegal DMCA takedowns to remove these sorts of videos from Twitter, Reddit, Facebook, etc.
Even considering its heavy limitations, the Mercedes system is miles better than Tesla's, because it is actually useful. You can actually turn on the system and do your email, or take a nap. Yes, the 40 mph limitation on highways seems to make it useless, but many highways get congested to the point where you are not going over 40 mph anyway, and those are the times when it is most frustrating to drive.
And regarding the FSD, most youtube videos I have seen run into human intervention within 20 minutes or so. Going 60 minutes without human intervention is certainly possible if you get a bit lucky but that does not mean you should take your eyes off the road for a second. FSD still has to be constantly supervised, which makes it of very limited utility. At this stage it is still a fun experiment at most.
"do your email, or take a nap" <- The definition of an SAE Level 3 system is that you must remain in the seat, prepared to take control immediately when the system requests you to. Taking a nap or otherwise not paying attention is not what such a system supports.
I don't think that's correct. I've seen "in a certain amount of time", but you make it sound like it's a safety issue, which it's not, that being a key differentiator from an L2 system. When it can't drive, it stops driving and you have to drive. If it's not a safety issue, you could conceivably be sleeping, as long as you can wake up in a timely manner.
The SAE makes it clear that the car is driving at L3, and on that basis you would expect the transition to another driver would be graceful, just like with two human drivers.
That "certain amount of time" is variable, but in practice is on the level of 10-15 seconds for the Mercedes system - at least that's what it was when it was first certified in Germany some time ago. It is designed to let you take your hands off the wheel and look at the scenery, but anything more than that is too much distraction for when it requires you to take over. And it will, because it's really not very capable at all - it's basically an advanced traffic jam autopilot that can follow other cars in a straight well marked line, that's it.
For a person like me who takes very light naps when sitting, 10 seconds is plenty to wake up and take control. Also if I am using a tablet doing email or browsing or even playing games, 10 seconds is plenty of time to put the tablet away and grab the wheel.
>For a person like me who takes very light naps when sitting, 10 seconds is plenty to wake up and take control.
I'd argue that isn't remotely enough time to safely take control. You're betting your brain wakes up sufficiently in that time, and if it doesn't the consequences are potentially deadly.
>Also if I am using a tablet doing email or browsing or even playing games, 10 seconds is plenty of time to put the tablet away and grab the wheel.
That's a bit better from an alertness standpoint, but not from a situational awareness one. You're deprived of both context and situational awareness on the road.
Some key details extend far beyond 10-15 seconds, such as: another car driving erratically, lanes ending, or line of sight on pedestrians or cyclists who were visible before but are now occluded by traffic. The list goes on.
>>Also if I am using a tablet doing email or browsing or even playing games, 10 seconds is plenty of time
> That's a bit better from an alertness standpoint
How do you know?
Subjective measurement of time is very inaccurate. I have experienced this when I timed some regular activities I undertook every day, total habit. I would have said they took me ten seconds. When I timed them, it was forty-five seconds.
I have no faith in anybody's perception of how long they can switch context like this unless they have objectively measured it.
That's a good point. On the face of it, I'd assumed an awake person already engaging their brain would perform better than a person just waking up in that context, but perhaps not.
>I have no faith in anybody's perception of how long they can switch context like this unless they have objectively measured it
Agreed. Moreover, I'd say that the risk for faulty information processing or incorrect preconceptions is higher if the person is distracted but awake.
I guess it is not OK to be asleep, as even a loud alarm will take time to wake you up, but not paying attention, like when you are doing your email, is fine. In fact, that's the entire point of level 3 autonomy.
You are "on call" rather than "at work", so you must be prepared to act when the car rings. If the car doesn't ring, you are free to do whatever you want as long as you can hear the ring and take back control when it happens.
Afaik the key differentiator of a L3 and a L2 system is that if you don't take control when the system requests you to, the L3 system can safely stop/pull aside while all bets are off with an L2 system.
Now try to look up videos of Mercedes’ L3 system in operation and you’ll see how hilarious this claim is. It shuts off immediately without a vehicle to follow in front of you. Good luck taking a nap and typing emails. L3 my ass.
You're not allowed to sleep, you need to take control within 10-15 seconds.
And no, it doesn't turn off if there are no vehicles in front, but it does turn off above 45 MPH. On most highways you'll only be below 45 because of traffic.
Above 45 they have a separate L2 system that requires "normal" attentiveness.
Autopilot works fine on highways under 40 miles per hour. You still pay some attention, but that's also true with this; if traffic goes above 40 mph you have to take control within 2 seconds, while Autopilot is more gradual.
> And regarding the FSD, most youtube videos I have seen run into human intervention within 20 minutes or so. Going 60 minutes without human intervention is certainly possible if you get a bit lucky but that does not mean you should take your eyes off the road for a second. FSD still has to be constantly supervised, which makes it of very limited utility. At this stage it is still a fun experiment at most.
I can understand financial liability, but what if somebody gets seriously hurt and there are criminal charges? Is there already a legal framework in the US for transferring this kind of liability from the driver to the manufacturer?
How exactly is Mercedes accepting liability? I mean to say, how does this work in practice? Who absorbs the demerit points if your car is accused of going 40 in a 30 zone?
Nope. Local police and prosecutors have no authority to make decisions on liability. Their authority is limited to criminal matters. However, police reports can generally be used as evidence in civil trials where judges and juries make decisions about liability.
There may be some circumstances where a driver operating a Mercedes-Benz vehicle using Drive Pilot could be committing a criminal offense. For example, I think sitting in the driver's seat while intoxicated would still be illegal even if you're not actually driving. But that is separate from liability.
"once a driver turns on Drive Pilot, they are "no longer legally liable for the car's operation until it disengages." The publication goes so far as to say that a driver could actually pay zero attention to the road ahead, play on their mobile phones, and even watch a movie. If the car were to crash, Mercedes would be 100 percent responsible."
For now, that is. In German they call it "Staupilot", traffic jam pilot. They start L3 autonomy with the boring and tedious scenario first.
It's obvious they'll expand it to more and more freeway/highway driving scenarios, and from there grow into any out-of-town driving.
Meanwhile, Waymo and Tesla can take their bruises with downtown traffic and pedestrians and children and hand-drawn road signs and dirty signs and poorly placed ones and whatever other crazy real-life reality-show surprise guests appear on stage...
I wouldn't snicker too much at Mercedes (and I don't think you did, but several others in such discussions do). They have a knack for getting a couple of car-related things pretty right.
Folks, don't downvote me, show me where Mercedes actually says they are taking liability for actual cars that are actually on the road with customers. What people cite on this is a vague marketing puff piece from a year ago. If they are doing it, it should be pretty easy to find out how that is structured from their website or whatever the car owner has signed, or just like literally any evidence whatsoever.
This doesn't mean the system is actually better. It just means that Mercedes thinks that the cost of covering the liability won't exceed the boost in sales that come from this decision. It reminds me of what an automotive innovator once said[1]:
>Here's the way I see it, Ted. Guy puts a fancy guarantee on a box 'cause he wants you to feel all warm and toasty inside... Because they know all they sold ya was a guaranteed piece of shit. That's all it is, isn't it? Hey, if you want me to take a dump in a box and mark it guaranteed, I will. I got spare time. But for now, for your customer's sake, for your daughter's sake, ya might wanna think about buying a quality product from me.
Not that I'm implying Autopilot is necessarily a "quality product" in comparison. It is just that a guarantee or liability coverage is nothing but a marketing expense for the company issuing it. It doesn't actually mean the product is higher quality.
I have a Model 3 in Europe with the so-called FSD, and it’s mostly terrible. I regret paying for it. The car often doesn’t understand even speed limit signs, so it fails at the bare minimum of safety.
Recently I visited LA and a friend gave me a drive in their Model 3. The difference in sensor interpretation quality was massive compared to what I see back home. That gave me hope of my aging Model 3 still seeing improvements… But also a nagging suspicion that Tesla may have spent a lot of manual effort on data labeling and heuristics for LA roads specifically, and the results may not be easily scalable elsewhere.
If you have purchased the "FSD capability," your car should be upgraded to include FSD Beta (or "release") at some point in the future. In the meantime, what you have is essentially Enhanced Autopilot with some bonus features like automatic lane changing on freeways.
The difference in FSD behavior is due to the EU’s laws on autonomous vehicles. The EU mandates maximum lateral acceleration, lane change times, and many other details that make for a worse (and less safe) driving experience.
They have very different goals. The Tesla approach is more of a hack: release things without any liability or guarantee ("we quickly hacked this together, good luck, if you die or get injured this is your problem!"), whereas Mercedes is delivering a product that only supports a few features but does them well, and they put their responsibility behind it.
Not sure how that relates to the point OP was making. Unless I misunderstood his point?
> Mercedes is delivering a product that only support a few features but do it well
This naturally implies that Tesla experimented with a broader set of self-driving features in a throw-it-at-the-wall experimental fashion. Which is more of a technical/product management question than a financial/legal/marketing one. Unless OP simply means Mercedes released the same set of features, just extremely restricted in what you can do with them?
If so, the hacker/experimental vs. limited-focus-on-a-"small set of features" dichotomy is not super relevant. It's just business risk aversion or a government regulatory strategy, not a product/technology development strategy.
So, in the absolute best-case scenario cherry-picked from thousands of videos, Tesla's system is capable of going twenty-five miles without making dangerous mistakes.
> Without any comparison to humans those numbers are completely meaningless
I disagree.
What I want to know is: is it safe? Not whether it is safer than the average human driver, but whether it is safe in an absolute sense.
When a human driver crashes we can almost always pin the cause down to human error. Errors caused by some human failing, being less than they could be. The promise of machines is that they are consistent. They are never "less than they could be", but are consistently at their best.
Comparing humans and machines is comparing dogs and roses - it is interesting that a rose smells better, interesting that a dog is more loving (or fierce), but it is not a valid comparison.
Self driving cars stand or fall on their own capabilities.
2. What would a good absolute number for that period be? Our threshold can't be zero or we'd have to give up every technology from showers/baths (~60x more deadly) to fresh produce (E. coli, salmonella, listeria).
> 2. What would a good absolute number for that period be? Our threshold can't be zero or we'd have to give up every technology from showers/baths (~60x more deadly) to fresh produce (E. coli, salmonella, listeria).
Zero accidents caused by the failure of the software. (Clearly a good system may have some obscure bug that makes it fail - but failure of a self driving system should cause uproar and consternation - unlike the reaction of Tesla to the deaths caused by their systems, it seems)
If I slip over in the shower, it is not because the shower head went rogue and strangled me. Imagine if a shower were designed such that, in the period of one year, it strangled eleven people (a fair comparison to Tesla's record).
The point being that your comparisons (food poisoning or household mishaps) are not relevant.
- It's not clear why you no longer care that a technology is killing a lot of people if there's a human in the loop that you can blame. If a driver runs over your sister/son/friend, does it matter if you can blame someone?
- About 43,000 American motorists died in 2021. If we have a way to prevent many of their deaths with software, would you not want to unless you could prevent every death?
- Why is a software tool failure different from any other tool failure? Brakes can fail, wheels can fall off (happened to me once), etc. Listeria in produce is a failure in the production chain. Showers can be made safer. You use your car/shower/spinach and some day, for reasons entirely beyond your control, it might kill you.
- Why isn't the driver to blame in a Tesla? They're supposed to be watching and responsible.
- There's no clear distinction between what's a software failure and what isn't. Collisions have multiple contributing factors. Perfect software will still have collisions. The numbers reported aren't just collisions where the Tesla was at fault. One person was killed by a self-driving vehicle when a person jumped a concrete barrier and ran across the highway at night. The human driver couldn't react in time, and the software didn't see them. Is that a software failure? At what point do we accept that a collision is no longer the car's fault?
Sure, it's different whether you were hit by a malfunctioning machine that was confused by sunlight or by a driver who wasn't paying attention because they were texting. In one instance you have a person who can genuinely apologize to you, while in the autonomous-car case maybe there wasn't a passenger at all.
But on the other hand, if you can identify the most accident prone group of humans, and require them to use AI cars, and with this you could significantly reduce the number of road kills/accidents, wouldn't that be an improvement?
Autopilot is just fancy cruise control. How many crashes have there been with cruise control turned on? It's just a completely meaningless thing to even look at. How many crashes would there have been during those miles without autopilot?
"...Of the 12 ADA systems we just finished testing, Ford BlueCruise came out on top, followed by Cadillac Super Cruise and Mercedes-Benz Driver Assistance. Tesla, once an innovator in ADA with its Autopilot system, fell from its second-place showing in 2020 to seventh this time around—about the middle of the pack. That’s because Tesla hasn’t changed Autopilot’s basic functionality much since it first came out, instead just adding more features to it, says Fisher...
...“After all this time, Autopilot still doesn’t allow collaborative steering and doesn’t have an effective driver monitoring system. While other automakers have evolved their ACC and LCA systems, Tesla has simply fallen behind.”..."
How many Tesla cars with full self driving are there compared to regular cars?
A Tesla with FSD will only be using FSD a small fraction of the time.
If we had statistics on the number of hours FSD was active compared to the number of hours of all other driving across all cars, we might be able to compare these numbers.
Hour-wise, I think normal driving is way above 7500:1.
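A quick sketch of why raw counts need exposure normalization; every figure below is an invented placeholder (only the 7500:1 exposure ratio is taken from the guess above), not actual FSD or fleet data:

```python
def per_million_hours(incidents: int, hours: float) -> float:
    """Incident rate normalized to one million hours of operation."""
    return incidents / hours * 1_000_000

# Invented example: a small system with 40 incidents over 2M active hours
# vs. the general fleet with 30,000 incidents over 15B driving hours
# (a 7500:1 exposure ratio, matching the guess above).
small_fleet_rate = per_million_hours(40, 2_000_000)          # 20.0
general_rate = per_million_hours(30_000, 15_000_000_000)     # 2.0

print(f"Small fleet:   {small_fleet_rate:.1f} incidents per million hours")
print(f"General fleet: {general_rate:.1f} incidents per million hours")
# Despite having 750x fewer raw incidents, the small fleet's per-hour rate
# in this made-up example is 10x worse -- raw counts alone say nothing.
```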
Why are people willing to excuse things because they happen in a car? If an AI power tool were going haywire once in a while and chopping the arms off of bystanders nearby, would we find that acceptable because people using non-AI power tools also chop off their body parts sometimes? 'We can't know if it would have happened if a person were in control' is not an argument that would fly for anything else just as dangerous.
- Must be under 40 mph
- Only during the daylight and only on certain highways
- CANNOT be operated on city/country streets
Doesn't that exclude all roads? Where I live, highways all have a speed limit of 65-75 mph, and all streets with a speed limit of 40 or below are city or county roads. So where can you actually use this?
From what it sounds like, when you are in a traffic jam you use it and then the second that traffic jam ends you immediately have to take control.
Ngl, that sounds like it requires more attention than Autopilot; if you're not doing anything and then suddenly have to take full control, that sounds like it could lead to an absurdly unsafe outcome.
The human has to keep the hands on the wheel at all times and eyes on the road.
Edit: I looked at these FSD videos before and am very sure it will not work that well in an average European city. Unfortunately so, because I really would love to buy a real self-driving car that legally allows me to watch a movie while it drives (highways would be enough).
Skeptical but genuine question here: if Tesla's Level 3 system is more advanced, why don't they have authorization to release it? And why does the article quote Tesla as saying they only have a level 2 system?
They aren’t pursuing it, and stopped publicly reporting disengagement data. They found out their stans are happy to buy the cars and use FSD without liability protection or safety data, so why bother? Whenever one crashes it’s the driver’s fault by definition.
I own a Tesla with the FSD Beta. (Model S Plaid). I don't know why you seem so bitter about Mercedes product. I'm excited about it! The Tesla FSD really isn't useful, except as an enhanced cruise control.
Owning a Tesla with FSD, I can say it is nowhere close to being able to drive autonomously. I have intervened multiple times where it would have caused a serious accident, like merging into a concrete barrier or not detecting a car while trying to turn right into a lane. They still can't fix the phantom braking problem that has plagued my car from the beginning, and I have to drive on the freeway with my foot over the accelerator so I can cancel the hard braking.
I remember seeing a video where Tesla FSD veered right at an island/telephone pole. Another where it veered at a crowd of pedestrians on the corner of a signalized intersection waiting to cross.
A reminder that the software has been buggy for years and Tesla is somehow being allowed to "beta test" it in public among people who did not consent to the risk.
As a counterpoint: these Tesla videos are much more common because there are more Teslas driving around with the feature enabled.
We don't know if the Mercedes is as much of a murder machine, based on the little material there is from their system.
All self driving car manufacturers have videos of their cars doing the most ridiculous things. It's hard to pick one above the other. At least Mercedes seems to have documented their limitations rather than assumed their system works everywhere, so that's a good sign to me.
Because ultimately there hasn't been a single publicized death or injury resulting from FSD Beta, while it only took a few months for a professional tester of Uber's SDC effort to kill someone. So maybe it makes sense to test with millions of people for a few minutes a day instead of 8 hours a day with a few hired testers. The media would surely jump on such Tesla headlines.
> This seems like a marketing gimmick by Mercedes at best; the two technologies aren't even in the same league.
You might be right or not, but you gave zero arguments in your comment. The fact that these safety/regulatory restrictions exist does not mean the system is not more capable.
By my count:
- 46 videos of it not staying in its lane
- 54 videos of it swerving into objects
- 23 videos of it ignoring red lights
- 22 videos of it failing at intersections
- 12 videos of it pulling out into traffic
- 15 videos of it ignoring road signs
- 7 videos of it ignoring pedestrians
- 5 videos of it ignoring speed limits
- 23 videos of it stopping randomly
- 5 videos of it failing in the snow
- 3 videos of it ignoring buses
- 5 videos of it speeding in school zones with children present
- 3 videos of it ignoring school bus stop signs
> They are not even betting one cent in their car safety.
Do you mean strictly in the sense of accepting liability for crashes while AutoPilot is engaged?
Because Tesla makes safety a primary selling point of their vehicles. Objectively Tesla invests hugely in the safety of their vehicles, and scores the absolute highest marks for safety in many government tests.
Tesla makes an L2 system where the driver must remain engaged. And part of their FSD system includes the most sophisticated driver gaze and attention tracking system on the market. This has made their FSD system on predominantly non-highway roads safer than the average human driver without FSD.
> And part of their FSD system includes the most sophisticated driver gaze and attention tracking system on the market.
That’s just a blunt lie. Their driver monitoring system is very deficient. They lack, for example, the industry-standard infrared camera that allows the system to see through sunglasses.
And all older Model S/X, that have FSD, lack any camera at all.
You’re right, their older cars don’t have their most advanced system. When I said “on the market” I mean the cars they are selling now, not historically.
I’ve found the system extremely adept at gaze tracking and alerting. The cabin camera is infrared in their latest models (at least for the last 2 years).
Please don’t call me a liar. I am happy to be called wrong and corrected.
I should have said “one of the most advanced” because this is in truth a subjective measure and I don’t think agencies like NHTSA rank and qualify attention tracking?
In minor traffic a week or so ago, I ended up next to a Tesla for a few minutes where the driver had zero hands on the wheel, and her eyes were buried in her phone, with her head angled downward. Whatever system was running seemed to be totally fine with that situation; if that's the most advanced driver attention monitoring system available, we're in a lot of trouble. Tesla caring so much about safety is so obviously a bad joke.
It is not. The FSD Beta enables an aggressive attention monitor which is not active with the basic AutoPilot system.
The nag when FSD is enabled is actually quite annoying. Even glancing over at the screen for more than a second will trigger it. If it triggers more than a couple times you get locked out for the drive. If it triggers more than 5 times in total across any number of drives, you get locked out entirely for a full week.
I don’t believe you. Teslas have a cabin camera that monitors your gaze and quickly emits warnings if you look at your phone while Autopilot is on. If you ignore the warnings, the car puts its emergency flashers on, pulls over, and disables autonomous driving until your next trip. If you do that five times with the FSD beta, you are banned from using FSD.
No, I mean actually bet your life: say, put your little children in the back and send them to some destination because you've seen Tesla videos on YouTube.
If you won't do it for a random destination, on what would you risk your children's lives? Some whitelisted highway? Which one?
Yowzzaa, but the damn Beta on a Tesla still can't think to slow down while cornering on steep curves, throwing people to the side, or avoid navigating into a dead end that drops into the ocean.
This is not an anecdote, it's my personal experience.
No way I'm gonna trust again a system that puts human lives (even those of people who are not driving a Tesla) on the line for a beta test.
Tesla FSD is useless though, because you need to constantly monitor the system in case it does something stupid (which it more or less regularly does).
With a level 3 system like the one from Mercedes, you can check your phone or do anything else and only have to react when it tells you to take control again.
Yeah, you will struggle to find a single video of Drive Pilot where it's not being driven by a representative from Mercedes.
In those, you can see the system disengage the second it doesn’t have a vehicle to follow at slow speeds right in front of it (like when the vehicle ahead changes lanes).
It is sad to see how many people in this thread will assume the absolute worst with respect to Tesla's FSD capabilities and then will accept this Level 3 announcement from Mercedes in the most charitable way possible. It requires little effort to achieve Level 3 in extremely constrained situations. For example, it would be almost trivial for any autonomy company to achieve "Level 3" at 5 miles per hour within a specific geography. 40 MPH with all of the other limitations of this system is simply not very interesting.
It seems many people in this thread are highly motivated by prior bias toward the negative on Tesla; far more than those of us who appreciate what Tesla is doing.
I'm also finding this really sad. It's painfully obvious to anyone who has used either system recently that the majority of commenters have no idea what they are talking about, given the amount of nonsense in this thread about both the Tesla and Mercedes ADAS implementations. As always with car discussions on this site, people's feelings about brands and their stereotypical owners seem to cloud any reasonable discourse (especially when Tesla or FSD gets mentioned, sigh). That, and of course everyone on Hacker News is apparently an automotive industry expert to boot...
Mercedes plays tic-tac-toe while Tesla plays Chess. It's just not comparable.
Tesla is taking a radically different and open approach to autonomy compared to every other player. That opens them up to intense criticism though as there are thousands of videos online, with successes and failures. When close to a half a million people are using an autonomous system across the full ODD and all of the US and Canada it's going to be very easy to compile a collection of critical failures. But IMO this is the best way to make fast real progress in this space and not just a limited Disney ride.
But FSD isn't even necessary to help; an autonomous accelerate-and-follow mode for all cars at traffic lights and in traffic jams would already have huge benefits.
In addition to radar and lidar, they use cameras for something, according to the article. That's probably the component that doesn't work as well at night. (If I had to guess what the cameras are for: maybe traffic light detection, lane detection, and human review of video after an incident.)
A few weeks ago, I saw a Tesla suddenly turn 90 degrees on a small road in downtown Atlanta... It took a while for it to get going again... Of course I don't know if it was FSD, but I have never seen a human do something this dumb.
Good god! No autonomous system should drive through fog or flooded roads or construction zones. I wouldn't trust any system in those conditions, regardless of what stans say.
If you constrain a system to where it’s effectively useless and declare victory, that’s worse than trying to actually solve the problem and saying you’re not there yet.
It’s lowering the goal precipitously until you can achieve it, and then pretending you did it. Tesla has flaws, but this is a dumb article.