Hacker News
Mercedes to accept legal responsibility for a vehicle when Drive Pilot is active (roadandtrack.com)
613 points by gscott on March 22, 2022 | 457 comments



I wonder how much of this is part of a genuine attempt to bring a safe product to market, and how much is to pressure the German government (and soon thereafter, the EU) into making this sort of promise mandatory for all car manufacturers, including those that are ahead of Mercedes in the self-driving space. I.e. to create a legislative moat in an area where they're behind.

While I don't agree with Tesla's strategy of playing it fast & loose, I'm afraid that risk aversion will lead us to demand perfect instead of the good enough.

A car manufacturer might not be willing to accept 100% liability, but their system might still be a lot safer than a drunk driver returning home from the bar.

Realistically we're not going to be able to pick "well, don't drive drunk then!"; it'll happen. By demanding that these systems be that reliable before they're in common operation, we'll have incurred a net loss in terms of safety.

I don't know what the right solution is, probably some mixture of the two. Clearly you don't want manufacturers to be incentivized to ignore the risk associated with their systems, but you also don't want these systems not to be offered when they'd be a net gain in safety.

The point at which they're a net gain is not the same point that they become financially viable to the manufacturer from a liability point of view.


> A car manufacturer might not be willing to accept 100% liability, but their system might still be a lot safer than a drunk driver returning home from the bar.

I don't care what they're willing to accept. It's a total cop-out to not "accept 100% liability" for a self-driving system they built, and instead avoid responsibility by insisting on an inhuman level of human monitoring. If they're not willing to build a reliable system (or buy enough insurance to pay out the claims when their system crashes), then they simply shouldn't be selling production self-driving cars, period.

The "drunk driver returning home from the bar" problem has been solved. It's called a taxi. No need for defective self driving cars for that case.


Since you brought up insurance, here's a third option: leave it up to manufacturers and consumers to determine how liability is structured in their private agreements, but make self-driving car insurance mandatory. This could even be enforced by mandating that new vehicles refuse to activate self-driving mode without a valid token as proof of insurance.

With a system like that, the market would ultimately work out an optimal solution. Vehicles with safer technology and greater manufacturer liability would be subject to lower insurance rates, while on the other end of the spectrum consumers would be responsible for paying for the externalities of their choices. No one is going to use a self-driving mode that costs more than the vehicle itself to insure, so in theory manufacturers will be more strongly incentivized to get this right.

Edit: A follow-up thought: self-driving car insurance should be independent of manual car insurance. They could be bundled together for savings, but someone who doesn't intend to drive manually shouldn't be required to purchase two types of insurance. Eventually, as the technology improves, self-driving insurance will become cheaper than manual insurance.

This model also neatly solves the current issue of enforcing insurance requirements. If self-driving mode can require a valid insurance token, then so can manual driving mode.
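To make the token check concrete, here is a rough sketch of what the activation gate could look like. Everything in it is hypothetical (the InsuranceToken fields, the issue_token / may_engage_self_driving helpers, and the HMAC scheme are made up for illustration); a real version would presumably verify a public-key signature from the insurer, in a format agreed on by regulators and insurers:

    import hashlib
    import hmac
    import time
    from dataclasses import dataclass

    # Hypothetical shared secret for the demo. A real scheme would verify a
    # public-key signature from the insurer instead of a symmetric key.
    INSURER_KEY = b"demo-insurer-secret"

    @dataclass
    class InsuranceToken:
        vehicle_id: str            # which vehicle the policy covers
        policy_id: str
        covers_self_driving: bool  # does the policy cover automated mode?
        expires_at: float          # unix timestamp
        signature: bytes           # issued by the insurer over the fields above

    def _payload(t: InsuranceToken) -> bytes:
        return f"{t.vehicle_id}|{t.policy_id}|{t.covers_self_driving}|{t.expires_at}".encode()

    def issue_token(vehicle_id: str, policy_id: str,
                    covers_self_driving: bool, valid_days: int) -> InsuranceToken:
        # What the insurer would hand the owner when the premium is paid.
        t = InsuranceToken(vehicle_id, policy_id, covers_self_driving,
                           time.time() + valid_days * 86400, b"")
        t.signature = hmac.new(INSURER_KEY, _payload(t), hashlib.sha256).digest()
        return t

    def may_engage_self_driving(t: InsuranceToken, vehicle_id: str) -> bool:
        # The car refuses to activate automated mode unless the token matches
        # this vehicle, covers self-driving, is unexpired, and verifies.
        expected = hmac.new(INSURER_KEY, _payload(t), hashlib.sha256).digest()
        return (t.vehicle_id == vehicle_id
                and t.covers_self_driving
                and t.expires_at > time.time()
                and hmac.compare_digest(t.signature, expected))

The same kind of check could gate manual driving as well, per the edit above.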


This. Insurance is how liability should be handled because that aligns incentives for everyone. Tesla's approach of "it's the driver's responsibility to pay attention" is a cop-out, and "manufacturers are responsible for 100% of everything that happens when automation is active" will stifle tech development. Insurance can set premiums based on the number of accidents in the area and let the market find the most efficient solution.


> "manufacturers are responsible for 100% of everything that happens when automation is active" will stifle tech development

That's... technically true but not at all an acceptable explanation for anything. We don't just go ahead and do whatever we want to "speed up innovation". All kinds of abuse constantly gets bundled under the "only way to innovate" umbrella.

If a manufacturer advertises a feature and even charges money for it then it certainly should take responsibility for the feature working as expected and for the fallout from any failure outside of user control. The customer insures against things that are their own doing, or simply cannot be assigned to someone (weather?). The company pays for its mistake directly, not via "market forces". That's why companies don't get out of such lawsuits by just claiming "innovation" and "the market will deal with us".

"Driving" is absolutely core to a car. It's not a GPS where the manufacturer can or should be able to just say "we take no responsibilities for the outcome of a GPS or (self)driver malfunction".

If the insurance on your car suddenly triples because the stuck accelerator pedal caused accidents, you're now left with a car which is not only terribly expensive to operate but also impossible to sell because the expense follows the car. All because of something the manufacturer caused. And that will also stifle innovation in all kinds of ways.


Insurance shouldn't (but probably will anyway) go up when your brake pedal is discovered to be defective, because the insurance is there to pay for the financial loss caused by the defective pedal. The loss in the resale value of your car and the increase in future premiums are losses caused by the defective pedal.

Of course, the insurance company can simplify this by just not charging you more for future premiums, rather than paying you for future premiums and taking it back later. And by not charging other people more for future premiums, they preserve the resale value of the car.

This is an automobile equivalent of "why health insurance should cover preexisting conditions"--any increase in future premiums caused by your condition is an expense caused by the condition, so your insurance should pay for it. If you switch companies, technically the first company should pay for the increased premiums from the second company, but averaged over the whole market this is still equivalent to just not increasing premiums.


> Insurance shouldn't (but probably will anyway) go up when your brake pedal is discovered to be defective, because the insurance is there to pay for the financial loss caused by the defective pedal. The loss in the resale value of your car and the increase in future premiums are losses caused by the defective pedal.

I actually have (some) personal experience with that. When Toyota had the defective accelerator pedal issue:

1. I got a check from Toyota from a settlement compensating me for the reduced resale value of my car due to the issue.

2. Insurers sued Toyota for compensation for the claims they had to pay out: https://www.reuters.com/article/retire-us-toyota-insurance-i...


> "manufacturers are responsible for 100% of everything that happens when automation is active" will stifle tech development

So does food safety, regulation of harmful chemicals, etc. Do we have a shortage of tech development? For self-driving cars? Are these companies so fragile?


Insurance is about shifting risk. Sometimes shifting the risk aligns incentives and/or efficiently allocates the costs of the risk but it’s far from guaranteed.


It's not guaranteed, but there's money to be made for anyone who can accurately assess the risks and determine optimal rates. Price too low and the payouts will put you out of business; price too high and the competition will eat your lunch.


> leave it up to manufacturers and consumers to determine how liability is structured in their private agreements, but make self-driving car insurance mandatory.

The free market's efficiency requires a balance of power and information. Similarly, in law, the validity of contracts depends on the relative position of the parties; the court will frown on a company that tries to enforce absurd contractual clauses on naive individuals.

A car manufacturer has incredible power and resources (including lawyers, actuaries, etc.) compared to an individual consumer buying a car. What will a consumer do, hire their own legal team to call the general counsel's office of Mercedes and renegotiate the agreement? It will cost more than the car. Have you ever done that with any agreement with a corporation?

We use our government to regulate where the market fails. Otherwise, we'd still have lead in gasoline, monopolies, etc.


It’s my opinion that until the next major breakthrough in AI, all self-driving cars should be unable to engage self-driving mode if they detect more than a certain number of active vehicles on the road that are not self-driving. There is no way in hell we are ready for self-driving vehicles in crowded cities like Chicago or New York. For road trips through rural areas, sure! Navigating geometry is not the problem; it’s decision-making in complex real-time environments.


How does it help the victim of the accident?

The owner of the vehicle got the cheapest package and nobody will cover the victim.

This is the reason why we require drivers to have insurance and don't leave it to the owner to figure this out. Without regulation this doesn't work, as not everybody who is ultimately affected is involved in the negotiation when buying the insurance.


>How does it help the victim of the accident

Insurance payout goes to the victim.

>The owner of the vehicle got the cheapest package and nobody will cover the victim.

Like regular insurance, third-party liability coverage is mandatory.


No, don't leave anything up to the consumer and manufacturer to figure out. First of all, it's not a two-way relationship; it involves the manufacturer, insurance companies, the consumer, other participants on the road, and the government in two capacities: as the authority that sets the traffic rules and as the authority that maintains the road infrastructure. These relationships are highly asymmetrical and tilted against the consumer, and with that asymmetry contract law won't work as expected.


A model where the manufacturer is responsible is also leaving it up to the market. All you're doing is assigning blame where it belongs (the actual driver being the software and not the end-user).

The manufacturer (or their insurer) will find the sweet (market) spot between safety and costs, where costs include not just manufacturing but the legal repercussions.

Putting the onus on the end-user is messy and wasteful from a market efficiency standpoint. It may benefit Elon to make the user at fault, but it's just as "socialist" (or more so) as putting the onus on the manufacturer.

I think the bigger concern is how to assign fault. If you allow the end-user to override the software then you'll need an objective 3rd party to determine blame (not the manufacturer or the end-user).


Leaving things up to capitalism to solve has not worked.

Regulation works.

Standards work.

Consumers get rug pulled.


    > If they're not willing to [...] buy enough insurance to
    > pay out the claims [...] they [...] shouldn't be selling
    > [these] self-driving cars, period.
The problem with this view is that you're assuming that the current risk created by the drivers whom society would benefit from moving to self-driving systems is already priced into their current driving, or into their insurance.

Western countries have mostly decided that society should simply eat that cost.

If a child gets killed by a negligent driver the driver or their insurance is almost certainly not going to be "fairly" compensating anyone, to the extent that such a thing is even possible when it comes to losing a human life.

There was a case in The Netherlands[1] where a grossly negligent driver killed a 2-year-old and her grandparents by rolling their car over them in a turn they were speeding through. The driver got probation, and a bit of community service.

On reflection I don't even think that's a bad outcome per-se. Very few people could truthfully report that they've never broken the traffic law in a similar manner even once.

It's mostly just moral luck that decides the difference between that being something stupid you did in your 20s with no repercussions, and another equally irresponsible person being unlucky enough to kill a child.

So should that person spend 20 years in prison and be subject to wage slavery for the rest of their life as a result? Probably not.

Now, imagine the same outcome, except this time the CEO of Tesla or Mercedes is going to have to answer in court for it. "Mr. Musk, why isn't your company paying out these parents fairly?". Of course a self-driving car is unlikely to accidentally make that specific mistake, but statistically the outcomes might be comparable.

My point upthread is that we're almost certain to end up over-pricing the risk caused by self-driving vehicles, while directly and indirectly subsidizing the continued operation of motor vehicles by even more flawed human beings.

People are simply bad at reacting to diffuse vs. concentrated risk, and are inherently conservative when it comes to evaluating the risk of something new vs. an even riskier activity they've gotten used to.

1. https://www.mirror.co.uk/news/world-news/watch-father-throw-...


> The problem with this view is that you're assuming that the current risk created by the drivers whom society would benefit from moving to self-driving systems is already priced into their current driving, or into their insurance.

That's neither here nor there. Liability shouldn't be separated from the driver, since that's incentive for them to drive better. However, with self-driving vehicles, the driver is the manufacturer, full stop.

> My point upthread is that we're almost certain to end up over-pricing the risk caused by self-driving vehicles, while directly and indirectly subsidizing the continued operation of motor vehicles by even more flawed human beings.

1. You're making the flawed assumption that self-driving vehicles will be better drivers than humans. Maybe they will be, maybe they won't. It's yet to be seen. We'll know once the manufacturers can't cherry-pick driving conditions or rely on backup humans to smooth over their defects.

2. If the problem you outline is an issue, the solution of pushing liability onto people who are not actually driving is a bad one.


> However, with self-driving vehicles, the driver is the manufacturer, full stop.

I don't think this argument has unanimous agreement.

If we force manufacturers to be responsible, then they are going to find a way to reduce payouts. No country is going to destroy their automotive industry because of self-driving cars. And Too Big To Fail companies are never held responsible for their actions.

I'm in the camp of owner-responsibility. Insurance rates for self-driving cars will set a price for the safety of the system. It will happen rather quickly and will be largely accurate. Are you going to even use a self-driving system if it elevates your insurance premiums to $10k a month? Even a person who prefers convenience over personal safety is going to think twice about paying all that money to insure themselves.


> If we force manufacturers to be responsible, then they are going to find a way to reduce payouts. No country is going to destroy their automotive industry because of self-driving cars. And Too Big To Fail companies are never held responsible for their actions.

No country would "destroy their automotive industry because of self-driving cars" by making manufacturers liable for their products. Worst case is the shareholders would get wiped out in a bankruptcy, but the manufacturers themselves would survive.

> I'm in the camp of owner-responsibility. Insurance rates for self-driving cars will set a price for the safety of the system. It will happen rather quickly and will be largely accurate. Are you going to even use a self-driving system if it elevates your insurance premiums to $10k a month? Even a person who prefers convenience over personal safety is going to think twice about paying all that money to insure themselves.

That's a ridiculous, Rube-Goldbergian idea. Let's simplify it: the manufacturer has the liability, buys the insurance, and includes the cost in the price of the self-driving feature. Instead of waiting for your insurance bill after you buy the car to find out if you can afford self-driving, it's there in the sticker price.


> I don't even think that's a bad outcome per-se. Very few people could truthfully report that they've never broken the traffic law in a similar manner even once.

I've never run over anyone, no. [Edit: The next paragraph addresses the concept of moral risk raised by the parent. Maybe this was a bad opening line, as it seemed to be the only thing responded to.]

It's unclear from your link what the person did. But, in general, hitting and killing three people requires unsafe driving that I've never done. And, to preempt the false analogy, I've certainly broken traffic laws. But I don't think I've ever driven unsafely while doing so.

It's also worth noticing that that person never even apologized. That's horrible.

> Now, imagine the same outcome, except this time the CEO of Tesla or Mercedes is going to have to answer in court for it. "Mr. Musk, why isn't your company paying out these parents fairly?"

Why wouldn't he be able to pay the parents fairly? Either (a) he fundamentally mispriced self-driving cars (in which case he should lose money) (b) his self-driving cars were worse than he expected (in which case he should lose money) or (c) he correctly accounted for risk in pricing, in which case the payouts are already baked in as a cost of doing business.

In the event of (c), it's very similar to other insurance products. Everyone pays some extra to cover the inevitable rare but serious failures.

Yes, I oppose any set of laws that allows Tesla to walk away from a fatal car accident caused by their self-driving without at the very least an appropriate civil punishment.


    > I've never run over anyone, no.
The point is that you could have if circumstances beyond your control were different.

The concept of "moral luck" is about people having drastically different moral outcomes for similar actions, mostly due to getting unlucky.

E.g. most people who own a cell phone and drive cars have probably broken the laws stating that you shouldn't operate your cellphone without using the appropriate hands-free equipment.

I certainly have at least. I know I shouldn't do it, and I can truthfully say it's not a thing I "do". I know about the dangers involved. But has it happened? Yes, and I'm not proud of it.

But nothing bad has ever happened to me as a result, while there's some person somewhere who, the first time they looked at their cellphone beeping in the driver's seat, drove over someone and killed them.

Can we say that I'm a better person because that hasn't happened to me? I don't think so, I just happen to have good moral luck (so far).


I understand what you're talking about. As I said, I've broken traffic laws, but never in an unsafe manner that relied on luck not to kill three people. I reject your binary "if you broke any laws, you were equally likely to kill someone". I've observed my surroundings and taken what I considered to be safe actions. I also try to apologize when I make mistakes.

I don't feel that the person took appropriate care the way a reasonable person would. But the article was unclear. Why don't you respond by citing the specifics instead of doubling down on the bigger general point? I disagree that it was moral luck in this case, not with the concept.


I don't believe for a millisecond that no mistake you've ever made could have had a drastically different outcome. This notion you seem to have that you are completely aware of every variable in every situation you've ever been in is absurd.


I've never claimed I never made a mistake that couldn't have had a drastically different outcome. But I don't believe I've ever driven in a way that could have killed 3 pedestrians without some aspect of the vehicle failing. I've repeatedly asked for details of the "totally relatable because we all do it" accident.

But yes, I've avoided failures through luck. That's just a different failure state. I admit that moral luck is totally a thing (although I am using the other poster's phrasing; I'm not familiar with that term). And certainly I've avoided car accidents like that. But not hitting pedestrians.


Yes, a drunk driver is not the right bar, but I think he still has a point. If the accident rate is less than the overall human accident rate we should want to use the self driving cars, but the accident rate might still be high enough that a car company wouldn't want to claim 100% liability for every single accident.

Car accidents are one of the top causes of premature death in the US. Self driving cars aren't there yet, but when they are an improvement on human driving they should be encouraged, even if the absolute numbers are still not amazing.


Who do you think should take the other portion of responsibility? Certainly not the person sleeping in the back seat with no control?


worth noting the manufacturer doesn't have complete control over the car either, once it's "in the wild". maintenance is an important and frequently overlooked part of driving safety. a sophisticated ai driver (or human for that matter) can't do much with a set of bald tires or bad brakes.


Well, if they're the ones who are nominally in control of the self driving car, because they activated it and are occupying the car, I would say that they have a portion of the responsibility.


Then we're back to the Tesla model - you don't have a self-driving car, you have a car that claims it is such, but a human needs to be 100% aware and in control at all times.


Honestly, I think that's fair. If you are the one who pushes the button and sends the device out into the world, even if you didn't program it, I think you deserve some part of responsibility of it malfunctioning.


I would think for a self-driving car to reach an acceptable level, it should be like a train. If I get on a subway or a bus and it hits someone, I am not held responsible, even if it stopped specifically to pick me up first.


My biggest problem with this approach is humans are terrible at paying attention to things that don't demand 100% focus.

A car that drives itself 99% of the time, but fails catastrophically the other 1% is doomed to failure. The human operator won't be engaged enough to take over that 1%. At least not without airline levels of squawks and beeps and wheel shakers and even that might not be enough - airlines don't have to worry about children chasing balls into busy streets, etc. And the pilots are highly trained - drivers are not.


Taxis are liable for their accidents.


The problem is that it's a different set of people who will die with self driving cars. Imagine the trolley problem but instead of 5 vs 1 it's 30,000 vs 29,000. The 30,000 people get hit by the trolley and die if you don't pull the lever; if you do pull it, a completely different set of 29,000 people die. People who did not do anything wrong. People who had no chance to fix the situation. Sure, it's 1,000 fewer people, but none of those 29,000 people did anything wrong. They were all killed by the self driving systems made by car manufacturers.

Regardless of whether or not you personally would pull the lever, you have to admit it's not the same as saving 1,000 lives.


This is completely absurd reasoning. If we knew who was going to die, we'd let them know beforehand, so they could pick a safer spot on the street at their "death time".

There's no such thing as a deterministic future, as you suggest.

We can't stop doing things because a theoretical future will be changed. This is absurd.


While it is a whiff, one grain of truth is that it's possible that pedestrian deaths increase with self driving vehicles, even if overall deaths decrease.

Is it fair to trade the lives of people who choose not to drive a car at all for those who do? As of now the NHTSA weighs them identically, which I think is a huge mistake.

That said, it's hugely dependent on the technology involved. I think a Waymo is safer than your average driver already for pedestrians. FSD Beta, if unmonitored, would be drastically worse than your average driver for pedestrian safety.


I'm not suggesting it's deterministic.

People have been saying that if driverless cars are safer than humans by any amount we should allow them. I'm saying they should be much safer than humans before we allow them.

The public will not tolerate car manufacturers killing tens of thousands of people per year just because they are very slightly statistically safer than the average human driver.


I just want to point out that drunk driving also kills innocent bystanders: pedestrians, cyclists, sober drivers around, etc.


> It's called a taxi

Try finding a taxi in about 99% of the US more than a few miles out from big cities.

I live just outside Minneapolis and we were SOL until Lyft arrived. Lyft and Uber still aren't available in most small towns. This is not "solved".


> Try finding a taxi in about 99% of the US more than a few miles out from big cities.

> I live just outside Minneapolis and we were SOL until Lyft arrived. Lyft and Uber still aren't available in most small towns. This is not "solved".

You can call for a taxi, even "just outside Minneapolis."

Also, taxis aren't the only solution to drunk driving that's better than unsupervised self-driving cars. You also have things like designated drivers, etc. It would have been silly for me to enumerate every single one in my original comment.


Then have a designated driver. It's not complicated. If you go out to a bar and cannot arrange for a designated driver then don't drink.


easy to say, and not hard to do yourself. nearly impossible to convince the general public.


Make the bar identify a DD (or not if you're in an area with taxis) or lose their liquor license (like with serving underage kids). Or have them confiscate keys of people who drive into the parking lot and only give them back after a breathalyzer.

It seems easy enough to do if we're serious about it.


Unless things have drastically changed from my youth, a taxi is just a phone call away anywhere within about 40 miles of the Twin Cities.


> The "drunk driver returning home from the bar" problem has been solved. It's called a taxi. No need for defective self driving cars for that case.

It hasn't been solved at all; there are still too many deaths on the road due to drunk/high drivers.


It has been solved in other countries, but the U.S. is ok with the right people driving while drunk so has decided to not solve it.

For example, in Norway you can drink 1/4 of the amount that you can in CA while driving; you get jail penalties at .05, and right from the start at .02 the fine is one month's salary, meaning that even if you are rich you will pay a meaningful amount. So in general Norwegians don’t drink and drive.


Out of curiosity, how do they determine the fine for people without a salary (students, retirees, self-employed, disabled, otherwise out of work, etc.)? Social safety net amount? Fixed minimum?


There is a minimum; it starts at ~$700 for 0.02 BAC. It is also possible to levy less than a month's salary for people with low income and/or a high debt burden, obligations due to children, etc. The intention of the law is that everyone faces meaningful penalties.


Very interesting, thanks!


I doubt if drunk driving is solved solely by a sliding scale of punishment. Nor is it remotely ok to drive while drunk in the US.


I agree.

In my opinion, drunk driving is solved by city planning.

It's a fight and a half to convince most Americans that it's actually realistic for suburbs or even rural communities to have functioning public transit and walkable distances, even though towns and suburbs built that way exist in America (rarely, but they do exist).

Rural towns with functioning transit and walkable distances include a laundry list of college towns.

Suburbs with these features include a laundry list of inner-ring and streetcar suburbs built before widespread automobile ownership.

Nothing grinds my gears more than hearing someone say "people need to stop moving into my town, the traffic is getting ridiculous." It's not the people that's the problem, it's the unsustainable forms of development that are just seen as normal.


I love that the fines are scaled to income, makes so much more sense than a flat rate.


That's a Scandinavian thing I love as well. I remember that a Nokia CEO once paid the equivalent of a new Porsche 911 for a speeding ticket in Helsinki.

On the other hand, I like it when my tickets are basically only pocket change for me. Egoism aside, scaling those fines with income would be much fairer than a flat fee.


Kind of gives police departments justification for profiling wealthier people. The country already has a serious problem with policing for profit, and you can bet it will only get worse if they can issue $10k+ tickets to individuals.

As it stands now, if you get a BS ticket from one of these towns[1], you're out a few hundred bucks. Scale it up to a month's wage. In a lot of these places, you can't even get a fair trial because the local court is in on the action.

[1] https://www.thrillist.com/cars/nation/the-worst-speed-traps-...


California solved this by making speed limits not specified explicitly by the CVC unenforceable if they aren't backed up by a traffic survey. This will still result in places where the speed limit drops and local enforcement may try to profit from the drop. However, it stops local governments from reducing the speed limit arbitrarily for the purpose of creating a revenue stream.


Towns trying to juice money out of drivers happen at the very least in Italy as well, which is not Northern Europe at all. At least in NE this scales up for the wealthy as well. In Italy's case, if you are rich you can just ignore speed limits (to a certain degree; driving licenses have points and you lose some of them when caught doing something illegal).


Right now, due to probation fees and interest, it's in the police's best interest to ticket poor people who pay more and cannot fight back. Having the police target wealthier people is strictly better, since they have the power and incentive to fight back against BS

But we should also take steps to fight policing for profit.


in most us states, 0.08 is the "per se" limit. you can absolutely be convicted of DUI (or whatever it's called in that jurisdiction) below that BAC based on the testimony of the officer. at or above 0.08 just makes it automatic (you can only argue the validity of the test itself).


> It hasn't been solved at all; there are still too many deaths on the road due to drunk/high drivers.

More precisely: it's got a solution better than a current-generation, unsupervised self-driving car.


Or you can just let someone else drive, or you can just not drink or get high if you have to drive a long way back home, or take public transportation. These all are also "solutions" that don't work all the time. So a vehicle that can safely drive itself is yet another solution that can be added to mitigate the issue.


>The "drunk driver returning home from the bar" problem has been solved. It's called a taxi. No need for defective self driving cars for that case.

This is just wishful thinking. This is a description of what people should do, not what people actually do, and I don't see what relevance it has to the discussion. Millions of people drive while over the limit of alcohol consumption [1]. It is totally plausible that self-driving cars could radically reduce that number and improve overall safety even if they are not perfectly safe and occasionally cause accidents.

[1] https://www.cbsnews.com/news/millions-of-americans-admit-to-...


> It's a total cop-out to not "accept 100% liability" for a self-driving system they built

And what exactly is the liability for anyone who produces cars that go well above and beyond every legal speed limit? By your logic, the maker of any car that can go above the highest legal limit in a country should be held 100% liable for any accident occurring in that condition.

A certain level of data should be collected and available upon every crash like with airplanes. If the manufacturer is found at fault due to a flaw in their design, then they should be liable.


What if the person doesn't maintain their car? What if they drive through mud that partially covers some sensors? It seems like there are lots of things outside the control of the manufacturer. It's probably best to see if any future accidents are because of errors with the system or end user errors. Anything else is doomed to failure.


The manufacturer controls the only thing that matters: whether the autonomous driving system will engage. If the sensors are covered in mud or the tires are detected to be bald, the system can choose not to engage, disengage with a window for a human driver to take over, or pull over and disengage.
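A rough sketch of that decision logic, with hypothetical names and inputs, just to show the three options (refuse to engage, hand over with a takeover window, or pull over):

    from enum import Enum, auto

    class Action(Enum):
        ENGAGE_OK = auto()            # fine to engage or stay engaged
        REFUSE_TO_ENGAGE = auto()     # never turn on in this state
        HANDOVER_TO_DRIVER = auto()   # request takeover within a time window
        PULL_OVER_AND_STOP = auto()   # minimal-risk stop if nobody takes over

    def choose_action(currently_engaged: bool, sensors_ok: bool,
                      maintenance_ok: bool, takeover_window_expired: bool) -> Action:
        # Hypothetical policy: the system only vouches for itself when the
        # vehicle's own self-checks pass.
        healthy = sensors_ok and maintenance_ok
        if healthy:
            return Action.ENGAGE_OK
        if not currently_engaged:
            return Action.REFUSE_TO_ENGAGE
        # Degraded while already driving: ask the human to take over, and
        # fall back to stopping safely if the window runs out.
        if not takeover_window_expired:
            return Action.HANDOVER_TO_DRIVER
        return Action.PULL_OVER_AND_STOP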


having a system that can detect the current state of all safety-related components of the car seems like an unreasonably high bar. humans are driving around right now with bald tires, loose lug nuts, bumpers held on by scotch tape and a dream, the list goes on.


It's probably easier to test for bald tires and loose lug nuts than you think. Don't know about bumpers, but I doubt something like that would be covered by the manufacturer's liability. I assume any collision involving the self-driving systems would have an investigation to determine fault, much the same way they do already


should be easy to test for loose lug nuts, I agree. bald tires (or inappropriate tires for the season, just as bad) I'm not so sure of though. afaik, limit of traction can only be determined by exceeding it.

> I assume any collision involving the self-driving systems would have an investigation to determine fault, much the same way they do already

agreed. specifically, I feel the manufacturer should design the system against some minimum level of maintenance and publish that. they would be liable for any mishaps that occur within design parameters. the driver/owner should be responsible for actually keeping the vehicle within that envelope. I don't feel that's an unreasonable burden.


No maintenance, and self driving is disabled. If sensors aren't working then self driving will be disabled; if it's currently in use then some kind of safe mode should be available to safely stop the car.


You can say that "drunk driving is solved" if you speak from the point of view of the driver, because of course one can avoid it. But that's not too relevant.

Getting killed by a drunk driver (which is actually the real problem) is far from solved. It's something that can happen to anyone.


"The "drunk driver returning home from the bar" problem has been solved."

No, it has not. Taxis have been around as long as cars, and people are still dying.

Drowsy driving hasn't been solved, either.


I'm skeptical they could even avoid liability in the first place. Products liability is hard to avoid.


> self-driving system they built

It's not 'self-driving', it's driven by Mercedes's developers.


“Safer than a drunk driver” is a crazy low bar.

At the .08 BAC legal limit odds of crashing a car are 200% higher than a sober driver. The risks obviously increase with higher alcohol levels. A .16 ‘double the legal limit’ drunk is 1500% more likely to crash. See https://www.washingtonpost.com/news/wonk/wp/2015/02/09/how-j...

So yeah, ‘safer than a drunk driver’ means basically ‘only about twice as likely to crash as an average sober human’.


200% (2x) higher is not as high as I believe many would guess. If you did a straw poll you'd likely get answers like 10x, 20x etc. Hell SOBER driving had 10x more crashes when you/your parents were learning to drive compared to you/your kids.[1]

[1]https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...


It's actually 3x, not 2x. Probably why writers shouldn't express things this way, but it's clearer reading the article: this is a 200% increase, not a rate that is 200% of the normal rate.

Also, the link to the NHTSA doesn't work, but I'd be interested to read the primary research. The concern is they're just taking accident rates for sober drivers and BAC 0.08 drivers, which isn't a fair comparison because they aren't time-matched. As in, drunk driving is a lot more likely to happen very late at night/early in the morning when bars close, when traffic is extremely low and accident rates are also very low. If your chances are 3x those of a person driving at any time at all, they're much more than 3x those of someone driving unimpaired at 2 in the morning.

Of course, even if they time match the rates, there's still the reality that a lot of sober people driving at 2 in the morning are still impaired. Tired driving is also quite dangerous.


I found literally the most conservative statistic I could for how dangerous drunk driving is - that someone whose impairment is literally on the threshold of legality is 3x as likely to have an accident as a sober driver.

I could have gone for a shock stat - 100,000 people a year are injured in accidents involving drunk drivers. 10,000 people are killed.

I could have taken the position that a typical drunk driver is probably actually 10x as likely to have an accident as a sober driver, based on those NHTSA numbers.

But I didn’t need to to make the point that a self driving car could be literally twice as likely as an average driver to crash and that would still be ‘better than a drunk driver’.


Exactly. And yet, many people choose to do exactly this on a regular basis.

Granted, it becomes an issue when people who wouldn’t drive drunk decide to use self-driving functionality simply because it is there. But there will be a point where it crosses over into being a net improvement, and that point may very well be a fair bit before meeting parity with humans. Obviously it’s not quite that simple (not all failure modes are equal), but it’s not that far off, either…


Reminds me of what I've heard about how bicycle helmets can cause riders to take more risks, so overall accident rates go up (I don't know how true that is, but it sounds plausible).

People who may not have chosen to drive (e.g., tired, drunk, impaired, or otherwise incapable of driving) might choose to do so since they can "lean on" the car doing it for them, and if the car encounters a situation it can't deal with the driver is completely incapable of handling it.


I hadn't heard that in the context of bicycle helmets, but it's definitely true for football helmets and protective gear.


I suspect that point will come sooner if manufacturers are held liable for accidents their self-driving tech causes.


Should we let drunk people get home in their fully autonomous Mercedes? It's an interesting problem!

I think the answer right now is no, we're still requiring the driver to be able to take over, and our thought experiment presupposes they're inebriated.

On the other hand, it's obviously better to have the computer take a shot (no pun intended) at getting home than letting the driver do it themselves, even today. At what point does that moral balance tip? It seems like we're not too far off from the answer being "obviously let the car drive, whatever the drunk person was going to do is probably riskier for both themselves and everyone else".


It's way cheaper and safer if they just ride a bus/taxi/the car of their "designated driver friend" instead, all with tech that's already here.

I'm way more interested in self-driving (and, more broadly, driving assistance) for emergencies like someone having a heart attack while at the wheel. For example, I feel a lot better now that, should I somehow die or lose consciousness while driving (but not due to being drunk!), my car with autonomous braking will mostly keep going in whatever direction I had it but with a much better chance of stopping before causing someone else harm.


> It's way cheaper and safer if they just ride a bus/taxi/the car of their "designated driver friend" instead, all with tech that's already here.

I think a lot of people are unfamiliar with rural America -- the nearest bar might be 10 miles away down a country road with nothing else around (which was the case in my hometown... well more of a "village" than a town), and your friend may live 10 miles on the other side of that bar, so he may not want to drive that far out of his way so you can get drunk and he can't. And taxis and buses aren't an option.

Relying on humans to make the correct and safe choice to not drive while drunk is a losing proposition, our DUI death rate shows that.

Even if a smart car can't be trusted to drive you, it should be able to notice that you're drunk and refuse to let you drive. My car notices when I'm losing focus and gives me an alert reminder, so it should be able to notice when I'm drunk.


It always strikes me as odd, when I'm driving on country highways and encounter a bar in the middle of nowhere.

How is the bar not sued out of existence? There is no plausible way to travel to and from the bar, other than a private automobile, and no taxi service within an hour's drive. There are no houses nearby with clientele that could walk to the bar. Why isn't that considered an attractive nuisance like a swimming pool?


How often does that happen to be a concern for you? Just asking if you have numbers or something. :)


I had a heart attack a few years ago and while I kind of know the fear is not very rational, I still wanted to be able to continue driving and not feel like I could hurt or even kill someone if I get another one that kills me while at the wheel.

So, hopefully not very often :) But it's one of those things where, since I could afford to change the car, I felt I had to go ahead and make that investment in safety.


There is a much cheaper and safer solution: a system that detects if the driver is drunk, ill, not paying attention, etc. But some fanboys prefer fake self-driving systems with faked stats because it looks sci-fi, rather than, say, forcing a drunk detector into all cars.


That problem cries for a solution... Maybe some service, paid of course, that can drive those people home? Maybe combined with an app, and self-employed people providing that service, shared economy and so on... Seems to be a great opportunity!


The problem is that people are rational and 1) the costs of using Uber for routine trips to the bar are very high; and 2) the likelihood of getting in a crash on any particular trip is pretty low. So millions of people every year do the math and decide it's worth the risk.

I think they're wrong to do this, because the worst potential outcome of drunk driving is so devastating and catastrophic that it's personally optimal to eliminate that risk to zero. But the fact is that this isn't really how humans weigh risk.

Obviously anti-drunk-driving campaigns have been somewhat effective. People take this problem more seriously than they did 40 years ago. But I wonder if we aren't at a point of diminishing returns. Some people are going to drive drunk, no matter how many times you tell them to take an Uber.


Is someone who won't pay for a taxi really going to spend money on a Tesla or Mercedes with self driving capabilities?


When I say "costs" I don't mean the $20 for the Uber (although that's not nothing). You have to think about the guy who wants to stop at the bar on his way home from work. To do that properly, he has to drive right past the bar, park at home, schedule an Uber, wait for it, then take it back to the bar.

Or he can just stop on his way home.

We both know what the right thing to do is, but one has significant costs the other doesn't.

Or imagine he does stop at the bar on his way home and tells himself he'll just have one beer, but he ends up having four and shouldn't drive. Now he has to get an Uber, take it home, get another Uber before work in the morning, and take it back to the bar to get his car. And then, finally, head to work.

Or he can just drive home.

Again, we both know what the right thing to do is, but one has significant costs the other doesn't.


It's probably a negligible portion of drunk driving incidents, but there definitely are people like that. In the past year there have been multiple drunk driving incidents with well-paid NFL players in nice cars, and these are just the ones who get caught. Whether those guys would turn on self driving mode is another question I guess, but every time it's baffling they didn't just request an Uber Black or something.


That tells you that the "cost" of taking an Uber isn't primarily monetary. It's a big hassle to leave your car at the bar, take an Uber, wake up, get another Uber, go get your car, then drive your car back home. People are weighing that hassle against the odds anything bad will happen on their trip home and deciding it's worth it.


People with expensive cars don't drunk drive? Seriously?

Even if that were true, I have a hard time imagining that the cost of self driving packages won't continue to decrease until it's included in literally every new car. Just a matter of time (assuming someone is able to make a fully working version).


In my town, Uber absorbed a bunch of the travel, public transport was cut in half, and now with gas and other expenses no one can afford to run Ubers up here. Maybe one car around, often a 20 minute wait. Car pool services cannot fill the gap.


Gas is too expensive to afford using Uber, so you have to use your own car instead?


I interpreted that as gas being too expensive to make it worth being an Uber driver.


I thought Uber had some fancy math that raised their rates to incentivize more drivers when their pool of drivers got too low? So why hasn't their math increased pay to offset the increase in gas prices if they don't have any drivers?

I don't live in an area that has Uber, so I'm not that familiar with it, only used it a few times when traveling :)


I don't think I know them any better than you do. I used one to get to the airport once and it cost $40. Economy parking would have been cheaper, so I use that now. That's the only time I've used one personally.

My impression of the way drivers are compensated is that the hours, fuel, and wear on your vehicle make it kind of slim margins.


An Uber driver needs to be able to cover gas out of their pay plus make enough extra to effectively receive a decent wage. If gas prices go up but the price of an Uber fare (and subsequent payout) does not, then it becomes uneconomical for people to drive for Uber and they stop doing it.

Other than when Uber deeply discounts to drive growth, it's always been cheaper to drive yourself than to pay Uber. The reason you pay Uber is for other reasons (e.g. convenience, cost of parking, being away from your vehicle, intoxication).


That works great in major cities and not at all on country roads in the rural US.


How many Uber drivers are high or a little drunk at work? I’m betting it’s more than 0%.


Maybe an alcohol test in the car. If you're too drunk it won't start and will call an Uber instead.


Why not lower all regulations based on the fact that some people do them drunk? Doctors? Bus drivers? Security forces?


> Why not lower all regulations

Now we're onto something!


That's not a low bar at all. A lot of activities people do while driving are way more dangerous than driving "drunk". In fact, "drunk" (0.08 BAC) driving is often the baseline multiple that such activities are reported in.

Self-driving cars which are merely as safe as driving with a 0.08BAC will still greatly reduce traffic accidents. Of course, no one will report it as such. Having something that's paying attention 100% of the time is going to be a huge boon for traffic safety. Spending 15 seconds picking a song on your phone is all it takes to rear end a car or blow through an intersection.


200% increased chance of crashing would be 3x as likely to crash.

A 100% increase would be 2x as likely.
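Spelled out, with r_0 as the sober crash rate and taking the upthread percentages at face value:

    \text{"200\% higher":} \quad r = r_0 (1 + 2.00) = 3\,r_0
    \text{"100\% higher":} \quad r = r_0 (1 + 1.00) = 2\,r_0
    \text{"1500\% higher" (.16 BAC):} \quad r = r_0 (1 + 15.0) = 16\,r_0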


Yes. That is how math works.

I gave the ‘better than a drunk driver’ claim the benefit of the doubt and allowed it to be merely 2x as likely to crash than a sober driver, since that is still ‘better’ than a driver who is exactly at the federal US legal limit (and who will crash 3x as often).


There are gadgets that prevent the car from starting when your BAC is too high. Clearly we need the opposite for activating the autopilot.


Alcohol based mouthwash becomes a requirement in the glove box. Whiskey manufacturers start doing sponsorships where the autopilot is named after them. "Had a long day? Jim Beam will get you home."


Why aren't alcohol manufacturers investing in self-driving technology? That seems like it would be a net positive for them now that you mention it.


> At the .08 BAC legal limit odds of crashing a car are 200% higher than a sober driver.

What are the odds of a sober driver crashing a car?


The all-cause rate is somewhere between once per 10.5 years according to Allstate [1], which at an average of 15,000 miles per year is ~157,000 miles, and once per 500,000 miles according to the Bureau of Transportation [2] numbers of police-reported accidents. The rate of injuries is 84 per 100 million miles (once per ~1.2 million miles or ~80 driver-years) and fatalities is 1.11 per 100 million miles (once per ~90 million miles or ~6,000 driver-years) by the same Bureau of Transportation statistics.

[1] https://www.allstate.com/resources/allstate/attachments/best... Page 4

[2] https://www.bts.gov/content/motor-vehicle-safety-data
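As a rough check of the unit conversions above (assuming the same 15,000 miles per driver-year throughout):

    10.5 \text{ yr} \times 15{,}000 \text{ mi/yr} \approx 1.6 \times 10^{5} \text{ mi per crash (Allstate)}
    \frac{10^{8} \text{ mi}}{84 \text{ injuries}} \approx 1.2 \times 10^{6} \text{ mi} \approx 79 \text{ driver-years per injury}
    \frac{10^{8} \text{ mi}}{1.11 \text{ fatalities}} \approx 9.0 \times 10^{7} \text{ mi} \approx 6{,}000 \text{ driver-years per fatality}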


Not sure, but 30% of crashes are alcohol-impaired. https://cdan.nhtsa.gov/tsftables/tsfar.htm# -> chapter 2 -> alcohol -> table 31


Even if it was the EU's doing, it has nothing to do with demanding perfect instead of good. It's about blaming who/what is doing the actual driving when an accident happens. I think it's perfectly sensible.

Put another way, I don't expect my taxi driver to be perfect; in fact I probably drive more safely than most of them. But if they get into an accident it's their fault, not mine.


But isn't the human always ultimately in control of the vehicle? Any time the car is set in motion, it is the result of a human request; they don't just randomly drive around for fun. Just because the human farms out some of the monotony of driving, does that absolve them of responsibility for setting the vehicle in motion in the first place?

For now I'm leaving aside things like "summon my car to me" or "tell the car to randomly drive around until it finds a parking spot", where no human would be present in the vehicle at all, because I think those kinds of scenarios, where the vehicles can be expected to move with no supervision at all, are still a very long, long way off.


But the "vehicle in motion" is not the problem. Accidents are a problem. Exactly the same way as in a taxi - if the taxi driver causes an accident, is it your fault because you told him where to go?


That's the OP's point: if you blame it simply because it's driving, you're incentivizing the systems to only work when they think it's 100% safe to drive. It's how you get "you can only engage on pre-defined major highways and only during daylight".


If the automaker advertises a level of automation where I am not supposed to do anything while the car drives itself (which seems to be the case), of course I blame it in case of an accident: what could I have done to prevent it, except for not using the feature at all?

If instead we talk about driving assistance and I am supposed to keep an eye on the car, I am fine with being responsible for any accident.


> I'm afraid that risk aversion will lead us to demand perfect instead of the good enough.

Then Mercedes' move is exactly the sort of pragmatic, no-bullshit yardstick you want. 'Putting your money where your mouth is' is a widely-understood and respected way of showing you are not bullshitting.

Current liability laws have not forced car manufacturers to make perfectly safe vehicles (which would, of course, kill the industry), and there's no reason to assume they will do so for self-driving cars.


Unfortunately Tesla has been completely irresponsible, so I'm entirely in favor of the moat.

And this addresses the "uncanny valley" of driverless cars, where the car does everything up until you're expected to leap into immediate action to prevent it from killing someone, even though for hours you may have been doing nothing.

The technology really does need to be all-or-nothing. The in-between solution that Tesla is trying to exploit isn't reasonable given how human minds operate.


I do not see how this can be an attempt "to create a legislative moat in an area where they're behind". If they are behind in self driving they would not want to raise the stakes by creating this higher liability hurdle, which another company could handle better. Any way you look at it, if this were an attempt to get a competitive advantage through the liability laws, it must mean that Mercedes is certain it is ahead in the field of self driving, not behind.

I very much welcome this development and it is, in my opinion, necessary for level 3 autonomous driving. By definition, level 3 autonomous driving implies that the car must take responsibility. The Level 3 definition says that the driver may take his attention off the road and does not need to supervise the technology (if the conditions are right).

Regarding drunk driving, if the machines start killing people, good luck explaining to the victims and the scared public that it may have been worse if they were drunk drivers. The fact is that new technology usually requires notably higher performance to replace the old, this will be the case with self driving, so we may as well embrace that and go forward.


I know we all hate subscription models, but for self driving it makes sense. If car manufacturers are expected to be liable for their self driving cars, then they should be charging money to their customers so they can insure them.

Manufacturers who produce safer self driving vehicles will have to spend less to insure their fleet, and be more competitively priced. This at least gets the incentives right


For an actually fully autonomous car without a steering wheel, maybe. But for a car that can engage autopilot when you choose, absolutely not. Even if the self driving software is a monthly fee, the car is something you should be able to own and control.


I don't want car manufacturers insuring vehicles. This would lead to inflated prices because manufacturers know that you cannot sell your vehicle at any time.


I don’t follow


Tesla gets a lot of flak about driver security, but this is incredibly irresponsible.

> Once you engage Drive Pilot, you are no longer legally liable for the car's operation until it disengages. You can look away, watch a movie, or zone out. If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.

If the car crashes, it's your problem primarily because you're the one in the car that could be injured. Maybe they'll pay for repairs or your hospital bills or even a loaner, but its primarily your problem. We have to remember the limitations of these systems and accept that they're level 2 and require full attention


Tesla gets a lot of flak for advertising Autopilot in a misleading way. Advertising with functionality and liability that could exist at some point but simply doesn’t.

What Mercedes does here is not just empty advertising, it’s offering actual lvl 3 autonomous driving complete with the car being liable in those situations.

Which in practice means Mercedes can actually deliver what Tesla so far has only been advertising with; Taking your attention actually away from the street to watch a movie or check your emails.

This is not a “problem”, it’s the literal definition of lvl 3 autonomous driving: the car takes liability in certain situations to such a degree that the driver is legally pretty much just a passenger.


> Taking your attention actually away from the street to watch a movie or check your emails.

If I were the kind of person that can afford and buy an EQS[1] I would rather hire a driver so I can do it all the time, not just at 40km/h in a slow moving highway jam.

[1] https://www.mbusa.com/en/future-vehicles/eqs-sedan


> Which in practice means Mercedes can actually deliver what Tesla so far has only been advertising with

Do you have experience with this tech or are you taking the word of a marketing campaign? Taking "liability" doesn't mean much if you get in a serious accident and have medical issues for the rest of your life. If you get in an unsafe cab, does it give you a lot of assurance that technically the driver is responsible?


How does one differentiate an unsafe cab from a safe cab?

For a more apples to apples comparison: Tesla vs Mercedes. Tesla broadcasts L5 confidence in a poorly implemented L2 system, and accepts zero liability for its faults. Mercedes broadcasts L3 confidence in their system (which I'm not familiar enough to actually characterize), and accepts full liability for its faults. I'd prefer to share the road with a self-driving Mercedes than a Tesla, because mistakes are inevitable and insurance payouts are not. If I'm disabled for life, I'd rather get paid for damages than not.


It means that failing to prevent an accident will cost MB significantly more than they can possibly make with a single car. So they need to have run the numbers to be confident that this will not bankrupt them. This is _far_ more than Tesla ever did.

Tesla, until maybe this year, was never cashflow positive and the remaining lifetime of the company was always a significant risk. MB is an institution here in Germany. For many many reasons they will be one of the car manufacturers that will disappear last. This is a company that cannot take such a risk without massive personal repercussions for the executives, especially not after the Diesel scandal.

Of course that doesn't remove the "but I'm in the hospital/disabled/..." part of being personally in an accident. But that can happen _anyways_, irrespective of you being responsible even if you drive yourself. Compared to any other offering, this is a significant change of the rules of the game.


Getting in a cab where the driver accepts liability (and has insurance) seems much safer than getting in a cab where the driver passes liability to you and your insurance. Neither is intended to be the only way you assess safety, but the former is a much better signal than the latter.


It is in fact the opposite: it sets the bar at that level in the understanding that people treat Tesla's Autopilot in the same way. Tesla just hides its relatively lower capability behind a legal out.

People do not care what “level” their driverless control is. After a certain point of “it seems to do everything”, they assume it does everything. If they think they can get away with letting it take control, they will.

Requiring automotive manufacturers to deal with this is manifestly responsible. It is demanding an engineering control over an administrative one in the hierarchy of risk controls, and that expectation is in line with how every non-software discipline of engineering approaches the world.


How is this different from riding as a passenger in a taxi, for example?

Also: Drive Pilot is a level 3 system, not a level 2 system. That's the entire point.


The primary difference is a few billion years of optical and neurological evolution.


The point is that whenever you ride in a conveyance (whether as passenger or driver) it's always "your problem" if there's an accident.

Short of MB putting an executive to sit alongside every passenger in their car (which you'll note that Uber, the airlines etc don't do either), what exactly are they supposed to do above and beyond taking liability for accidents while the system is in operation?


Tesla gets flak because they have repeatedly implied (or flat out said) that their level 2 system drives itself. This is a level 3 system, so it actually does drive itself.

Level 3 is the lowest level where “the car drives itself”

https://www.motorweek.org/images/SAE_Chart.png


As road safety is a win for the whole of society and not only the driver it would make sense to incentivize safety at the government level. On the other hand there are already incentives in the form of insurance, which is mandatory at least in most (all?) places and should take into account vehicle safety. If autonomous vehicles are safer insurance premiums should go down for them, as it happens for example for vehicles with smaller power trains.


Maybe. It seems that some of that would be offset by the higher cost of the vehicle for the comprehensive part of the insurance.

"as it happens for example for vehicles with smaller power trains."

Is this still true? I know they did some HP based stuff in the 70s-80s, but thought they did away with that since then.


I'm not really sure about the power train, if it's still relevant.

Vehicle value impacts the premium, since the policy may cover things like theft and vehicle damage. However the biggest risk is damage to persons. Truck insurance for example is very expensive not because of the value of the truck but because an accident is more likely to involve deaths and permanent injuries. Accidents costing several million € are not unusual. That said, there are a lot of considerations when selling policies. The insurer may offer a discount on a risky sports car if it thinks it will recover the cost by cross-selling you other products. Or due to commercial pressure; accidents are a problem for the future. But yes, you are right, it's not a certainty.


> While I don't agree with Tesla's strategy of playing it fast & loose, I'm afraid that risk aversion will lead us to demand perfect instead of the good enough.

I'm actually afraid of the opposite: that self-driving becomes so associated with failures and danger in the mind of the public that the technology never takes off.

And it doesn't take that much effort to make the public afraid of something; just look at how terrified of sharks people are.


Accepting liability doesn't mean they require the system to be perfect.

Accepting liability really just comes down to whose insurance company pays when it fails.

If the manufacturer can demonstrate to the insurance company that their self driving system fails less frequently (or has a lower overall cost of claims payouts) than human drivers, then the insurance company will happily charge a lower premium for cover. Even if it's not perfect. The closer the system can be demonstrated to be to perfect, the lower the insurance premium is going to fall.

There is certainly a regulatory moat being set up here as it will prevent new startups from entering the space unless they can find an insurance company willing to cover their risk profile, which in turn means demonstrating a suitably high level of safety to keep the premiums affordable. But I think in this matter I want a level of regulation that says if you're making something that's used on public roads, and risks injuring or killing people you'll accept liability when it goes wrong. It's not much different from the system of mandatory insurance for drivers in most countries. If you're going to drive on the roads, you must meet a minimum insurance level whereby you can accept liability for any failure that causes damage to other property or people.
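As a rough illustration of the premium math being described (all numbers below are invented for the example, not real figures from any manufacturer or insurer):

    # Hypothetical back-of-envelope premium calculation; every number is assumed.
    crashes_per_million_km = 0.2      # assumed claim-triggering failure rate of the system
    avg_payout_eur = 250_000          # assumed average claim cost (injury + property + legal)
    km_per_vehicle_per_year = 15_000  # assumed annual distance driven with the system engaged

    expected_claims_per_year = crashes_per_million_km / 1_000_000 * km_per_vehicle_per_year
    fair_premium_eur = expected_claims_per_year * avg_payout_eur

    print(f"expected claims per vehicle-year: {expected_claims_per_year:.4f}")       # 0.0030
    print(f"actuarially fair premium: {fair_premium_eur:.0f} EUR per vehicle-year")  # 750 EUR

Halve the demonstrated failure rate and the fair premium halves with it, which is exactly the incentive structure described above.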


You can measure the confidence of the manufacturers, which seem to opt for beta testing on the roads. But I agree that it is probably a move to attack competitors.

Personally I wouldn't want to be near automatically driving vehicles for now. Maybe because I worked in computer vision: even with the drastic advances in the field, the capabilities are still quite primitive, even with "AI" support. Sure, I could get hurt by a drunk driver. But drunk drivers are also not allowed on the roads, for that matter.

I think a solution is to restrict, at first, the zones in which an autopilot is allowed to be active. Ultimately the driver must be responsible. He made the decision to activate the autopilot. Manufacturers should be forced to be honest about the capabilities though.


>I wonder how much of this is part of a genuine attempt to bring a safe product to market, and how much is to pressure the German government (and soon thereafter, the EU) into making this sort of promise mandatory for all car manufacturers, including those that are ahead of Mercedes in the self-driving space. I.e. to create a legislative moat in an area where they're behind.

Mercedes is just going to place the assumed liability with a reinsurance company anyway. It seems like a very efficient solution to ensure accountability SOMEWHERE without vast amounts of litigation every time there is an incident. It's honestly hard to see how self-driving vehicles will ever meaningfully make it to the consumer market without an agreement like this.


Note that drunk driving is much less socially accepted in Europe than in the U.S. A post on this topic was actually at the top of /r/germany just a week ago:

https://old.reddit.com/r/germany/comments/tejhxv/all_my_amer...

"All my American friends drive after drinking and I don't know what to do"


Several European countries "beat" the US (in a bad way) when it comes to alcohol-related traffic incidents:

https://www.itf-oecd.org/alcohol-related-road-casualties-off...

A friendly reminder from a fellow European that the entire continent isn't part of Germany :)


The US handily beats all those countries when it comes to per-inhabitant alcohol related traffic fatalities.


> risk aversion will lead us

Not wanting to crash into random things when rolling at 120 km/h is not "risk aversion". It's just common sense.


I think you're making OP's point for them. Real drivers also crash into things. Self-driving cars don't have to be perfect all of the time. Inebriated and distracted drivers already cause disproportionate problems: self-driving, even one that makes a mistake once in a blue moon, is preferable to some dude on their phone who hasn't noticed the traffic in front of them has slowed down. (And 120 km/h sounds like highway cruising: precisely the sort of place where self-driving cars have a relatively easy time and "better than humans" is not very far off from where we are today.)


The dude on their phone is not comparable to an AV carrying a family of 5 unpredictably veering into a concrete divider. One is negligent; the other has done nothing and could have done nothing, they only trusted the system. How do you reconcile it as merely a question of numbers then? So you trade 100 human-error accidents for 5 "blue moon" AV accidents, and this is good because it's statistically much safer. But that's also 5 accidents that wouldn't have happened to safe, diligent drivers.


Nice clip. The quote was:

> risk aversion will lead us to demand perfect

And no, perfection is not a reasonable bar.


> And no, perfection is not a reasonable bar.

It's the only bar. Considering the regular person's overconfidence in their own ability, they will either want something that never crashes, or something that, if it does crash, brings other advantages such as speed.

E.g. the same crash rate as a human, but at an average speed of 100mph instead of 40mph.

Like putting your Mercedes in the hands of Lewis Hamilton. People won't have any problem doing that, because they understand that Hamilton is a much better driver than them. If it's close people won't relinquish control, at least when sober.


If perfection is the bar for being allowed to drive, then nobody is allowed to drive because there is no such thing as a perfect driver.


Robo-Perfection or Robot Lewis Hamilton is the bar for a sober person (especially guys) to let go of the wheel.

If it's close I'd rather bet on myself. Many people feel the same way.

People discount the psychological effects of letting go of the wheel; the advantage has to be crystal clear.


> Robo-Perfection or Robot Lewis Hamilton is the bar for a sober person (especially guys) to let go of the wheel.

It obviously isn't as there are many people who have jumped through many hoops to get access to self-driving software that is much worse than Lewis Hamilton (i.e. Tesla FSD beta). Clearly different people have different standards for what they consider safe enough to let it drive for them.

It seems like you are redefining "perfection" to be whatever your personal bar is for adopting a self driving car (which would make your argument a tautology if you weren't also overgeneralizing your personal bar to everyone.)

I would also point out that the bar for personal adoption is not necessarily the one that we should use to determine which self-driving solutions are allowed to operate at what level on public roads. While this is a complicated topic, I don't think "Lewis Hamilton" is the only reasonable place to set those bars.


I am not talking about these people using FSD on and off, like it happens now with Tesla. As it stands that would be a failure; it's just glorified cruise control.

I am talking about fully giving control to the car and preventing humans from interacting, after clear statistical evidence of the autopilot being better than the avg. driver. Given that by definition the majority of drivers are avg., society would be better off summoning Skynet and putting it at the wheel. It's clear-cut reasoning that works in theory.

Better than avg. doesn't mean that accidents won't happen though. Accidents will still happen, and as soon as they start to happen the whole thing will be reversed. That's because people don't consider themselves the "avg. driver" and would rather bet on themselves being better than the avg. driver than on the computer being better than that.

Even 95 yr old people consider themselves the local Lewis Hamilton.

You raised the example of FSD purchases; I cite you the counterexample of vaccines. Vaccines are much, much better than the immune system alone against COVID; if the margin were close, you'd have much less adoption, because skepticism is not linear.

When in doubt, people bet on themselves or the status quo. In this case the status quo is literally betting on themselves, so you have a double effect there too.


You are shifting your own goalposts here. You were specifically arguing that perfection is the only bar. You have now redefined "perfection" as meaning "above average", but even with that redefinition, your argument as stated still doesn't match reality.

People are already letting cars drive for them. They are doing it even when they are not allowed to take over for the car (i.e. Waymo in AZ). Clearly some people have lower bars so your claim that the above average safety is required for anyone to give up control of the vehicle they are riding in to a computer is false.

Reading between the lines of what you are saying, it seems that what you are actually trying to argue is more along the lines of "achieving (significantly?) above-average safety for self-driving systems is crucial to achieving mass adoption of self-driving technology." That's a much harder statement to disprove (and one I suspect is at least partly true, given varying levels of "cruciality").


> A car manufacturer might not be willing to accept 100% liability

It won't be the car manufacturer, but their insurance underwriter who has the final say (unless the car manufacturer self-insures, which, let's face it, is a non-starter).

I'm not sure if you've ever dealt with business insurance, let alone underwriters.

Underwriters tend to be, by definition, fairly boring risk-averse types.

If you catch them on a good day, they might be willing to make a few relatively small adjustments to their standard wording in order to relax the scope a bit.

However this assumes you are lucky enough to be dealing with one underwriter. If you're having to pool at Lloyds, then the chances of you being able to get significant wording relaxation drops dramatically because everyone in the pool has to agree, not just one underwriter.

You'll never find an underwriter willing to cover 100% liability for anything under the sun. That's just not how it works. Especially with new and little-understood risks such as self-driving.


It's about control. If the car can drive itself and the manufacturer is always held responsible, then there is no incentive to sell cars. The incentive to charge for transport is still there; Mercedes would become the next Uber, and the World Economic Forum would get the dystopian future it has always dreamed of.

You will own nothing and like it.


> I wonder how much of this is part of a genuine attempt to bring a safe product to market, and how much is to pressure the German government (and soon thereafter, the EU) into making this sort of promise mandatory for all car manufacturers

I'm fine with a company saying "We don't think we can do X safely, therefore we don't devote as much to research doing X, therefore we don't want to be penalized in the market because people who do X unsafely can obfuscate the risks and sell it to the market." And then attempt to create standards to do X safely. That seems reasonable to me.


A technical fix for drunk driving is on the horizon. Manufacturers will soon be required to include the technology in cars, somewhere around 2026 in North America and possibly sooner in Europe. There might be a combination of technologies used, such as an at-the-wheel breath test, sensors on the wheel that detect alcohol excretion through your skin, eye sensors that detect impairment and drowsiness, and sensors that detect impaired driving based on how well you stay in your lane. Unsure which tech will be most common, but change is coming.


> I don't know what the right solution is, probably some mixture of the two. Clearly you don't want manufacturers to be incentivized to ignore the risk associated with their systems, but you also don't want these systems not to be offered when they'd be a net gain in safety.

The right solution has always been public transportation. It's just, you won't hear that from car companies. The right solution to self driving cars is not to have any cars.


You’re looking at the problem wrong.

There are 1.35 million deaths per year attributed to car crashes. I don't know about you, but I would rather companies play fast and loose than take forever to figure out the perfect solution. If it takes five years longer, that's nearly 7 million additional deaths.

Even if self driving cars are responsible for 1000 deaths per year, which they won’t be, it would still be well worth playing fast and loose.


It took mankind a long time to learn that safety (EDIT: I originally wrote security; it applies to security as well, IMHO) and "fast and loose" don't go together. That's why traffic-related deaths are as low as they have ever been. These lessons are codified in all the regulation around that stuff.

That being said, the number you stated is worldwide? If so, it's not the one to use. And just ignoring the overall trend in deaths per km / mile driven is disingenuous.


Why not use worldwide stats? Self driving cars will take many years to propagate to all the world's cars (which will eventually happen), but the point still holds: if it takes 5 extra years, that is 7 million deaths.

Also, traffic related injuries and deaths have been rising in recent years due to distracted driving


One, deaths are measured per driven mile; they are a relative stat, not an absolute one (a rough normalization is sketched below).

Two, driving conditions have to be factored in to make the numbers comparable; including traffic deaths in Africa and India (i.e. going global) is kind of pointless.

Three, I'd like a source that per driven mile injuries and deaths are rising, all numbers I know of show the opposite trend (including pedestrians and cyclists).
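For completeness, this is the normalization being asked for (illustrative numbers only, roughly in the ballpark of published US figures but not exact):

    us_traffic_deaths = 38_000            # assumed annual US traffic deaths, order of magnitude
    us_vehicle_miles = 3_200_000_000_000  # assumed annual US vehicle miles travelled

    deaths_per_100m_miles = us_traffic_deaths / us_vehicle_miles * 100_000_000
    print(round(deaths_per_100m_miles, 2))   # ~1.19 deaths per 100 million miles

That per-mile rate is the kind of figure a self-driving fleet's record has to be compared against, not absolute yearly totals.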


Does this mean that the CEO or board members will be held liable for criminal manslaughter charges if their vehicle runs someone over?


That's a good question. What does it mean to take responsibility? I can't imagine that someone at Mercedes will be accused of manslaughter, but that is what taking responsibility would mean.

Edit: if something happens you have 10 seconds to react. I don't know whether their responsibility still applies during those 10 seconds.


> Realistically we're not going to be able to pick "well, don't drive drunk then!", it'll happen

I don't think legislators will ever accept the safer criminal activity argument.


You want millions of cars on the level of drunk drivers?


There are probably millions of sober drivers on the level of drunk drivers.


So Boeing should not be responsible for their software failures?


Is Mercedes behind or are fanboys overestimating Tesla/Musk?

German cars are designed for cities; American cars are designed for LA.


>Handing over driving responsibility completely requires extremely particular circumstances. Right now, Drive Pilot can only engage at speeds under 40 mph (60 km/h in Germany) on limited-access divided highways with no stoplights, roundabouts, or other traffic control systems, and no construction zones...The system will only operate during daytime, in reasonably clear weather, without overhead obstructions. Inclement weather, construction zones, tunnels, and emergency vehicles will all trigger a handover warning.

Those are some big caveats that mean that you won't be able to use this in most situations. It is basically only good for stop and go highway traffic which is a situation that other driver assist features handle pretty well.


Yes but it's (according to the article) the only system that takes on any legal responsibility and guarantees a fairly long takeover window within which Mercedes will still be at fault for an accident.

All other companies don't go that far and make absolutely no promises. Sometimes their marketing wink-wink-nudge-nudges you and implies that they do the same but in reality they don't.

If this is successful you will soon see customers and regulators requiring the same for all competitors.


> Yes but it's (according to the article) the only system that takes on any legal responsibility and guarantees a fairly long takeover window within which Mercedes will still be at fault for an accident.

I don't think it's that big of a deal but it's clearly well done from a PR/marketing standpoint.

Insurance/warranty is just an expense from a company's POV. You control it via a combination of

a) building a well working product

b) limiting use of that product

c) putting a high enough premium on it

__

a) is what all customers want. b) is what Mercedes is heavily leaning into right now, because looking at the restrictions it's pretty clear that they don't think they have accomplished a) to any satisfying degree.

I am sure they will be setting c) to where their insurance math says it has to be to be financially viable.


> I don't think it's that big of a deal but it's clearly well done from a PR/marketing standpoint.

This is a really big deal. If you are required to instantly take over, you need to constantly pay attention to the current road situation, at which point the autopilot is really just fancy cruise control. People still stop paying attention, of course, and that's actually a massive risk.

A longer takeover window actually allows you to do something useful, such as read or look at your phone, without taking this risk, since you will have time to adjust to the situation if necessary.


There is not ever going to be a takeover window long enough to allow the driver to read a book at their leisure. That would require tens of seconds for context switching during which the traffic is going to be changing behaviour.

How will the car detect a construction zone it can’t see yet with enough time to hand over to an inattentive driver?

I look forward to seeing this system in operation. I have significant doubts about the feasibility of its operational claims.


To ensure operation inside the limited legal responsibility, there are just two options:

a) The system would have to be allowed to disengage automatically when conditions change unfavourably, in which case you would still have to be alert all of the time for when that happens

b) It would not be allowed to do that automatically and you are liable from the moment autopilot drives into an area that is exempt from its legal responsibility as laid out by the insurance coverage limitations

For example, take a look at the exemption for "construction sites": either the car disengages and says "from here on out it's your job, not ours anymore", or it does not, and then in the case of an accident you are not covered by their limited legal responsibility. What the autopilot definitely cannot do is make the construction site disappear, or guarantee that the car will never hit one after having been engaged.


You missed:

c) The system needs to detect worsening conditions early enough to either prompt you to take over with time to spare, or fail gracefully.

That's the big thing that Mercedes guarantees here: you'll have enough time to take over even if you're doing something else; if the system fails to give you a warning in time and you crash, Mercedes takes the responsibility. In all other systems, once the autopilot prompts you to take over, you are responsible. With this system, once the autopilot prompts you to take over, Mercedes is still responsible for the next ten seconds, which should be more than enough time to take over in an emergency.
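A minimal sketch of that handover/liability logic as I understand it from the article (the states, names, and the behaviour after the window expires are simplified and invented for illustration; only the 10-second figure comes from the article):

    from enum import Enum, auto

    class Mode(Enum):
        DRIVER = auto()    # driver is driving and liable
        SYSTEM = auto()    # Drive Pilot engaged, manufacturer liable
        HANDOVER = auto()  # takeover requested, manufacturer still liable for a grace period

    TAKEOVER_WINDOW_S = 10.0  # grace period quoted in the article

    def who_is_liable(mode: Mode, seconds_since_handover_request: float = 0.0) -> str:
        """Illustrative only: who carries liability at this moment."""
        if mode == Mode.SYSTEM:
            return "manufacturer"
        if mode == Mode.HANDOVER and seconds_since_handover_request <= TAKEOVER_WINDOW_S:
            return "manufacturer"
        return "driver"

    print(who_is_liable(Mode.HANDOVER, seconds_since_handover_request=7.0))   # manufacturer
    print(who_is_liable(Mode.HANDOVER, seconds_since_handover_request=12.0))  # driver

The interesting contrast with level 2 systems is that the HANDOVER branch exists at all: with the others, liability snaps back to the driver the instant the prompt appears.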


Oh well, we seem to have different opinions about whether a 10-seconds-to-react window in a quickly moving car qualifies as having to be alert all the time. Fair enough.


Absolutely.

Regulators also love b) and now that there is precedent we should see the next step by some of the a) companies.

If their product is better they should be able to easily match or exceed Mercedes (and I think they will. Either voluntarily or by law).

To me this is a really exciting step along the way to autonomous vehicles.


Fair enough. I definitely agree that autonomous driving has to be insured by the company providing the service, and at least insofar as that goes it's a big step in the right direction.

However claiming they have beaten Tesla seems like a bit of a stretch given the circumstances.


I might have written that poorly.

Tesla probably has the lead on average but can't (or won't) guarantee that their system is safe in a specific set of best case conditions.

It's a different approach if you ask me.


If Mercedes applied the same safeguards as Tesla, Tesla would not look so good anymore.

I am still waiting for the promised fully autonomous cross-country trip from East to West. I think it was promised for 2019.

Meanwhile, Mercedes has done a fully autonomous trip including small-town traffic and several roundabouts.

Mercedes just applies higher standards to what they deploy to the average Joe.


Musk may have promised that in 2019, but he famously promised it in 2017.


> I think it was promised for 2019.

https://youtu.be/o7oZ-AQszEI / https://old.reddit.com/r/teslamotors/comments/s7vea9/fsd_bei...

Although his earlier claims in 2014 / 2015 were met, as autopilot works reasonably well on roads and highways. The claim for LA->NY was 2016[1]

1: https://www.businessinsider.com/elon-musk-autonomous-tesla-d...


How does Tesla have the lead? Their autonomous technology seems pretty far behind Waymo and Cruise.


You just don't know about the autonomous technology from Mercedes, because they don't talk about it. They don't produce blog posts about every small step they take, like Waymo and the others do.

The automotive industry is more like Apple. They don't talk and show off until it's done. Like: never over-promise and under-deliver.


You must be joking. This is the industry that has annual travelling car shows where every manufacturer displays "concepts" that are never thereafter produced.


In particular the automotive industry has been sued often enough to be careful what they say. I fully expect Tesla to be found at fault for some situation where they officially say the driver is in charge, but the courts decide marketing messages mislead the driver.


>Tesla probably has the lead on average but can't (or won't) guarantee that their system is safe in a specific set of best case conditions.

Do people not see how much of a nightmare it would be if liability were constantly switching back and forth depending on the circumstances the self-driving system is being used under? The only practical approach is to either have the driver always be responsible when the system is on, or the manufacturer always be responsible when the system is on. The choice to use the latter approach means the system has to be limited to very narrow circumstances. That doesn't mean the system is necessarily safer than the competitors in those circumstances; it simply means that the companies are approaching the same problem from different angles.


They very much have beaten Tesla, or does Tesla have a similar approval and infrastructure in place? Will Tesla take liability from you when the car crashes in Autopilot? They won’t, while Mercedes actually will and has that legally backed by the German government.


As the top comment said, it's only in extremely tight circumstances. Tesla Autopilot / FSD beta (the one that you can use if you have >98 safety score) will work anywhere your car can see lane lines and it'll try its best to work in rain/snow or at 1am with limited visibility.


The difference is as tiny a degree as that between software 95% ready and software 100% ready. Or a software development contract with or without the pesky word ‚guarantee‘ in it. At some point a few degrees make the difference between water and vapor.


Tesla will never take legal responsibility for their autopilot with current cars.

There's a reason for that and it's because of a lack of confidence in the technology.


They will try to avoid taking responsibility anyway. The courts will decide if they do or not.


Insurance doesn't cover brand image, which is very important for Mercedes.


> I don't think it's that big of a deal but it's clearly well done from a PR/marketing standpoint.

If this is true, why don't all of the companies do it?


If I can read a book, then how will I know if we come to a construction zone? They can't take such conditional responsibility; it will lead to bad things.


The car will probably detect construction zones on its own. Teslas already do that (and have since at least 2019).


The fundamental problem is that in order for the company to accept legal responsibility, the self driving system must follow all traffic laws, and there is no way consumers will accept a car that won't go faster than the speed limit when there is no traffic.


These two are interesting for comparing the Daimler approach, systems used, and engineering philosophy. From the first presentation, Slide 9 is interesting as a review of the different levels of automation.

From Slide 23: If you don't take over when requested, then after an automated stop the car will unlock the doors and place a call to your emergency response center.

"An Automated Driving System for the Highway - Daimler" [PDF]

https://group.mercedes-benz.com/documents/innovation/other/2...

"A Joint Approach to Automated Driving Systems - Daimler" [PDF]

https://group.mercedes-benz.com/documents/innovation/other/v...

Edit:

"The technical requirements"

"When the DRIVE PILOT is activated, the system continuously evaluates the route, traffic signs, and occurring traffic incidents. As a layperson it’s hard to imagine how sophisticated the hardware and software of the S-Class is in order to be ready for Level 3. Even the “normal” latest-generation Driving Assistance Package has the following:

• A stereo multi-purpose camera behind the windshield.

• A long-range radar in the radiator grille.

• Four multi-mode radars (one each on the right-hand and left-hand sides at the front and rear bumpers).

The optional parking package additionally includes:

• A 360°-Camera consisting of four cameras in the right-hand and left-hand exterior mirrors as well as in the radiator grille and at the trunk.

• Twelve ultrasonic sensors (six each at the front and rear bumpers).

"For the DRIVE PILOT, many additional components are needed besides the sensors of the Driving Assistance Package. The long-range radar in the radiator grille is combined with a LiDAR (light detection and ranging) system. Whereas radar uses radio waves, LiDAR employs pulses of infrared light in order to optically determine an object’s speed and distance and to create a highly precise map of the vehicle’s surroundings. This combines the strengths of both technologies: LiDAR sensors operate with higher precision, while radar is advantageous in bad weather, for example."

"The rear window is equipped with a rear multi-purpose camera that scans the area behind the vehicle. In combination with additional microphones, this device can, among other things, detect the flashing lights and special signals of emergency vehicles. The cameras in the driver’s display and MBUX Interior Assist are always directed at the driver so that they can determine if he or she falls asleep, turns around for too long, leaves the driver’s seat, or is unable to retake control of driving for other reasons."

https://group.mercedes-benz.com/magazine/technology-innovati...


> These two are interesting for comparing the Daimler approach

The company is now officially called Mercedes-Benz as of recently, BTW.

The Daimler brand is now with Daimler Trucks, following a split.


It's a marketing parlor trick. They take legal responsibility for something that cannot happen and immediately drop to human control when things turn south, like all so called "Level 3" systems.

I'm also sure they will not take any responsibility if someone rear-ends you when the car stops in confusion on the highway.


Did you read the article? They guarantee a 10-second manual takeover time and remain responsible during those 10 seconds. That's not the same as instantly dropping the ball the way Tesla, Volvo, GM etc. all do.


Their system (conforming to German law, and as mentioned in the article) gives you a 10 second takeover window within which they are still liable.

Yes they do. That's the point of their announcement and the reason why they only allow it under very specific and favourable conditions


This is remarkable, twice over.

First, a software vendor accepting responsibility for the software's actions? Wow.

Second, they're confident of being able to predict accidents ten seconds in advance? That's up to 160m away and I think that's great, even if they limit the circumstances sharply and allow many false positives.


I don't think they're predicting exactly; I think they've decided that's what is reasonable to ask of a consumer, and they're building their Ts & Cs to fit. They'll then make the technology fit as best it can, but if they can't, it's their fault.


This is not "a software vendor accepting responsibility for the software's actions"; this is a car placed among unpredictable human drivers operating 2-tonne machines. Far from Photoshop working on your PC or an embedded system for your fridge.


So it should be much easier to get a warrant of fitness for your fridge, if not the photo-editor, right?


No, they are confident that they can detect at least 10 seconds in advance if one of the many conditions required to operate the autopilot will be violated. Upcoming tunnel, construction, etc.


Right. And any accident falls into at least one of these three classes: Something they won't need to pay for (even via insurance premiums), something the software can avoid and lastly, most significantly, something for which the software can provide ten seconds' warning.

I don't find it remarkable that the software has many reasons to disengage. I find it remarkable that potential accidents >10s into the future are on the list of reasons, even in limited circumstances.

The first software I bought came with a warranty that covered nothing: It explicitly said that the software wasn't guaranteed to perform "any particular function". As I read that text, that vendor had the right to sell me three empty floppy disks. You've seen similar texts, right?

And here we have Mercedes guaranteeing considerable foresight in limited circumstances. No matter how limited the circumstances are, that's a giant leap.


I'd expand that to four classes of accidents. Four, something they /will/ need to pay for (both in money, as they're committing to, and in PR). Inevitably some accidents won't fall into your three "preferred" categories -- making a system like this successful is about managing the size of bucket four, not eliminating it entirely.


If that is the only use case Mercedes feel their safety assurance can justify, what does that say about other self driving cars whose manufacturers will not accept liability?


It says that they’re a responsible company that isn’t comfortable playing fast and loose with your safety. Unlike some US companies that don’t come from a background of safety but a background of “move fast and break things” and a business model of regulatory arbitrage.


But this is quite a bad product in that respect. If this is all it takes, Tesla would implement the same restrictions and get the same fanfare, but alas they're not constantly aiming for "PR stunt" levels of driving.


You're saying that Tesla of all companies isn't aiming for PR stunts with their automated driving? Tesla, the company that calls their level 2 automation "Full Self Driving" isn't out for PR stunts? I would be hard-pressed to name many companies that seem more prone to PR stunts than Tesla.


Assuming that Mercedes-Benz (and all other manufacturers at that) know their shit, which they do, it says more about the self-driving competition and self-driving tech in general than it does about Mercedes' offering.


I personally would rather have a system that I can actually use when I want even if that means I need to accept liability while using it.


> I personally would rather have a system that I can actually use when I want even if that means I need to accept liability while using it.

It's all a trade-off. My accident-rate thus far is one serious accident in over 700000km of driving. My fender-bender rate is three FBs in over 700000km of driving.

My understanding of the Tesla system[1] is that it requires roughly one intervention every ~5000km of driving in order to avoid an accident. For me this is an unacceptably high risk, because not intervening in 4999km will definitely (100% certainty) mean that I will be in a poor position to react when the intervention is necessary.

Now, you might claim that the driver has to be alert while not in control for 4999km to avoid the accident on the 5000km mark, but if drivers were that good at being alert while not engaged with the act of driving, then the self-driving system is redundant anyway.

[1] I read the stats a long time ago, so maybe they've changed.
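To put that intervention-rate argument into rough numbers (the ~5000km figure is the recollection above; the annual mileage and the Poisson assumption are purely illustrative):

    import math

    km_between_interventions = 5_000   # recalled figure, treated as a mean
    km_per_year = 20_000               # assumed annual driving with the system engaged

    expected_interventions = km_per_year / km_between_interventions   # 4.0 per year
    # If safety-critical interventions arrive roughly at random (Poisson), the chance
    # of needing at least one in a year of driving is:
    p_at_least_one = 1 - math.exp(-expected_interventions)

    print(expected_interventions)        # 4.0
    print(round(p_at_least_one, 3))      # 0.982

In other words, near-certainty that the driver has to catch at least one failure per year, which is why "stay alert while not driving" does so much heavy lifting.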


The statistical POV is a very good point, thanks for bringing this up!

It reminds me of this other excellent discussion about taking risks the other day: https://news.ycombinator.com/item?id=30264760


From my point of view, the actual act of driving is not that difficult (after the first 50k miles or so). The issue is the mental effort required to continually pay attention to what's going on to drive safely.

Tesla's system still requires me to pay attention to what's happening to exactly the same degree as normal because I might need to intervene (and in fact makes it harder to be ready to do so). All it does is take away the (to me) trivial aspects of pressing a pedal and turning a steering wheel.

Whereas this system introduces a set of circumstances in which I don't need to drive at all. No need to pay attention at all. And it's the most tedious form of driving there is - stop start traffic on a motorway.


> And it's the most tedious form of driving there is - stop start traffic on a motorway.

As someone who has used autopilot in the circumstances you describe, I'd be shocked if you need to pay more attention during autopilot than the Mercedes system. Autopilot generally requires intervention in danger situations, which are incredibly rare in stop/start traffic.


This will be treated the same as drunk driving. Just because you want to take responsibility doesn't/shouldn't mean anything to the legal framework.


I'm not sure that I understand your point. Which party takes responsibility for an accident says nothing about the frequency of accidents or the kind of threat this car would pose to other drivers.


Of course it does (indirectly). Tesla for example can't take responsibility, since they know that their "full self drive" or "autopilot" systems are never reliable, in any driving circumstance.

What you're saying is like "the engineers that built this bridge never drive over it, but that doesn't mean it's shoddy" - technically correct, but almost certainly wrong in practice.


You are assuming the motivations behind these decisions are purely based on safety rather than a philosophical difference in approaches.

From a practical standpoint, liability needs to be all or nothing. You can't have a driver worrying about whether they are going 40mph or 41mph. You can't immediately give the driver liability if it starts to drizzle or the sun sets.

Mercedes is taking the approach that they are always responsible. Other manufacturers are taking the approach that the driver is always responsible. The end result is that the Mercedes system is much more conservative in how it can be used. This says nothing about the quality of their technology in comparison to their competitors. It simply says they are focusing on the easiest problems first while their competitors are taking a more holistic approach, trying to design a system with broader usability.


As everyone knows by now, literal full self driving (as in get in your car, tell it to take you to the other end of the country and wake you up when it gets there) is entirely out of reach to current technology, and will stay out of reach until we design new sensors and possibly general AI*.

So, the current goals must be to achieve something similar in certain well defined limited conditions, and with reliable automatic checking that you are still within those conditions - hopefully conditions that one is actually likely to encounter. Until we have that, letting self driving cars on public roads is a menace.

Current self driving cars are at best at the level of a driver going through their first driving lessons, and one with very bad eyesight at that. Having a human act as the driving instructor, theoretically prepared to step in whenever the AI makes a silly mistake, is not enough to make these cars as safe as the average (non-drunk, non-sleep-deprived) human driver.

What Mercedes seems to be doing is responsibly pushing the state of the art further. Having a car that is safer than a human driver without depending on your constant vigilance is a huge step forward. Obviously, this only works in certain conditions, but the car itself detects when those conditions are no longer met, and gives you quite ample warning to take back control.

* Elon's shameless lies about having your Tesla act as a self driving taxi and generate a profit for you while you work in the coming years have well been put to rest.


The point is that if the designers are not even confident in saying "this works without a hitch under X and Y circumstances", allowing its use on public roads at all (and you choosing to use it) are bad ideas.


Yes, but you are missing the essential point of this move.

They are saying that they have enough confidence in the system that they will pay for anything that goes wrong. This means that they have run the calcs to work out how much its going to cost.

Tesla have basically gone: "fuck me it's bad, let's just legal-boilerplate ourselves out of the consequences. Oh, and charge people to QA our shit"


Here's the thing: every customer wants it this way, until something happens; then it's always the engineer's fault.


It doesn't say anything really. This only speaks to Mercedes and the performance of their driverless tech.


This is insurance. It is all probability.


More Actuarial Science but your point still stands.


I like this because these restrictions paint a very realistic picture of the current state of the art of autonomous vehicles. Any claims beyond these are likely just marketing fluff (at this point in time).


Highway traffic is the one thing I would trust a self driving car with right now, and that's how most of the time is spent on long trips. If I can legally watch a movie or browse the web while my car is simply following the lane (including traffic jams), then I'd be perfectly happy. Just alert me 5 minutes before the exit or intersection and let me handle that.


The handover warning is the key to limiting their liability.

Mercedes will accept legal responsibility only while DP is engaged, so the driver still has to pay attention and react (the so-called manual take-over).

DP's max speed is 60km/h, or about 17m/s. The system is likely to detect an issue less than 100m away, so the driver may have less than 6 seconds to take over and sort it out.
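Quick arithmetic on that (the 100m detection distance is an assumption, not a published spec):

    speed_kmh = 60
    detection_range_m = 100           # assumed distance at which an issue becomes visible

    speed_ms = speed_kmh / 3.6        # ~16.7 m/s
    time_to_issue_s = detection_range_m / speed_ms

    print(round(speed_ms, 1))         # 16.7
    print(round(time_to_issue_s, 1))  # 6.0

So a guaranteed 10-second window only works if the system can anticipate problems further ahead than its immediate line of sight, e.g. from map data about tunnels and construction zones.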


They give you a 10 second window from telling you to take over control that they will still be liable for any issues.


I saw the mention of a 10 second reaction window, but I don't think that's universal. As in my example, if there's an orange light blinking ahead and the car is driving at 60km/h, it's impossible for the car to give the driver 10 seconds to take over.


> Drive Pilot can only engage at speeds under 40 mph

Going below 60km/h on the Autobahn... This seems to only be useful for the narrow use case of very bad traffic.


Not unheard of on German roads.


> Those are some big caveats that mean that you won't be able to use this in most situations

It still seems like we need to adapt the roads to autonomous vehicles and not the other way around. In the UK, and there's no indication of it whatsoever at the moment, I hope we never end up with the US's crazy car-centric town streets where as a pedestrian you can only cross at crossings, and if you get hit by 2 tons of steel, it's your fault.


Huge caveats for the initial rollout. Over here, the speed limit for divided highways with no traffic control systems is 80km/h at a minimum. I guess you need to start somewhere though, even if that is 'traffic jams on highways'.


> Drive Pilot can only engage at speeds under 40 mph (60 km/h in Germany) on limited-access divided highways

Don't all highways in Germany have minimum speed of 70 km/h? So basically there's no road where Drive Pilot can be used?


> Don't all highways in Germany have minimum speed of 70 km/h?

No. The only legal limit is that the vehicle has to be type-licensed for going more than 60 km/h, which an S-class clearly is. Actual minimum speed requirements are rare.

Nevertheless this is for now clearly primarily useful as an assistant for being stuck in bad traffic. You can get ticketed for going slow enough to hinder traffic flow without a good reason, and whether you can argue a good reason depends on the circumstances. An upgrade to 80 so you can stick it behind a truck in the right lane would make it a lot more practical.


That’s the trade-off between level of responsibility and capability at our current level of technological progress. Level 2 systems operate in much broader conditions but have high enough rates of failure that a driver must be attentive at all times. The jump to level 3 is mostly about responsibility. That higher level of responsibility means that the car is not going to drive itself unless it’s absolutely sure it can do it by itself.


It's weird how, to this day, Tesla Autopilot can only really be semi-reliably trusted in exactly the same situations where this works

... but because Tesla has the reality distortion field in full effect you never hear these complaints about their solution.

Driving up sidewalks, killing people by slamming into stationary objects, people trying to sleep in your glorified ADAS-guided car: all the cost of being able to claim your system works in more situations.


>It is basically only good for stop and go highway traffic

I mean if you commute regularly in a major metropolitan area that could easily be 90% of your driving.


I wonder if there will be a long-term unintended consequence where, as self driving progresses into more difficult use cases, people will have less overall driving experience and will struggle when they have to take over in more difficult scenarios like heavy rain or snow.


It's already happening with existing functionality. I know several younger drivers that struggle with parking without all the cameras and auto-park features.


Unlike Tesla, you can rely on it in those circumstances.

In a Tesla you have to keep your hands on the wheel all the time.


You will be able to use it in one of the most annoying situations; Heavy highway traffic with stop&go.

Nobody likes driving in that; a free highway can actually be fun, but as soon as the traffic gets dense it becomes the opposite and quite stressful.


I'd bet anything the restrictions are for insurance reasons otherwise the premiums would be too high


It’s not about insurance, it’s about the definition of level 3 automation. It requires the vehicle to have very high confidence that it can drive without mistakes, because a human is not monitoring the driving.

Level 2 systems can often work in more conditions because they have a lower requirement for reliability. They are permitted to make mistakes without realizing it, because the human driver is required to monitor and override the system.


Mercedes is perhaps the only auto company that is both supremely innovative and supremely respectable at the same time.

Tesla is innovative, but they get ahead of themselves, and Musk is a glorified internet troll who bullies and arguably lies.

The Big 3 are respectable in that they don’t really overhype or misrepresent, but Mercedes will probably out-innovate them in self-driving like they’ve out-innovated them (and pretty much everyone else) in just about every other aspect that matters over the past century.

Disclaimer: I drive a Toyota Sienna and a Subaru Outback. But I’ve seen a few Mercedes up close (and sometimes driven them) over the years. The only really crappy one in my experience was the ~2006 G500.


I stopped respecting Mercedes engineering when they (and the others) were caught cheating (x2) on emissions. IMHO, they all had a better chance to compete with Tesla, but they took the path of cheating and global pollution.

https://jalopnik.com/mercedes-may-have-been-caught-using-def...

https://en.wikipedia.org/wiki/Diesel_emissions_scandal#Merce...


Kind of interesting then that Mercedes was an early Tesla investor. Seems at least someone in Mercedes saw the future.


Is there a statute of limitations on that, or are they tainted forever?


Statute of limitations did not apply. All crimes were scrubbed by Western govts and forgotten by Western media. Remember legacy auto pays for most ads during news broadcasts


Given the context provided by the rest of the sentence, they were clearly not talking about any legal statute of limitations.


The comparison with Tesla baffles me. One is a fairly common car with limited self driving abilities that are commonly used all over; the other is an exclusive model. Tesla sells an order of magnitude more cars each quarter than Mercedes sells S-Classes in a year. Most S-Classes are irrelevant here anyway - they are not sold in Germany.

When Mercedes makes this feature and promise available on every A-Class sold worldwide, we will be able to compare abilities. That day will come some time in the future, but the current move is not leading me to believe that Mercedes is a safer bet than Tesla.


I've never worked on Mercedes, but if it's anything like BMW, it's the equivalent of overly complex spaghetti code hidden behind a super nice interface.


I think GM is doing a pretty good job. Super Cruise is one of the better driver assist technologies on the market today. And Cruise (unrelated despite the name) is second only to Waymo in full self-driving technology. Mary Barra has stated that they want to sell cars with Cruise technology once it is sufficiently mature, not just offer a ride service.


FWIW, my friend at Cruise thinks that GM is messing up the company.


Mercedes' tech is largely a joint effort with Luminar, so I'm not sure you can consider them innovative in themselves, they probably just know when and where to invest to stay in the game.


> The only really crappy one in my experience was the ~2006 G500.

Try the CLA 250. Their mid and high ends are pretty nice, but they tried to target <40k without sacrificing too much of their margins.


This is an important step. For the first time there is full self driving where the driver is no longer required to pay attention to what the car is doing. However, this is not a generic solution. This system is limited to a very narrow set of conditions: specifically, to premapped highways with a divider between directions, no crossings, just on- and off-ramps. On top of that, only at very low speed, so in normal free-flowing traffic you unfortunately cannot use it either.

So how does this compare to Tesla's FSD (not Autopilot, which really is only an assistant)? The answer is: they barely compare at all. The Mercedes system has limited the conditions to a point where it basically just has to do collision avoidance with predictably behaving traffic. It is limited to selected roads. You don't need much "AI" here, and for this task LIDAR excels in the specified weather conditions.

The Tesla FSD system on the other side aims to be a generic driving AI. It has no limitations about on which roads it operates, it uses maps only for navigation to the destination. But the driving operation is done solely by the evaluation of camera footage. Which already works in many more environmental conditions than the Mercedes system.

These systems approach the goal from two entirely different directions. The Tesla system aims to be a complete solution, but certainly is not able to do that unobserved in its current state. It is doing reasonably well at that, but the literal billion-dollar question is: can it ever get so reliable that the driver can completely hand over to the car? On the other side, Mercedes has full autonomy, but they achieved that in a very limited and controlled environment - it remains to be seen how much they can extend it.


> The Tesla FSD system on the other side aims to be a generic driving AI. ... can it ever get so reliable, that the driver can completely hand over to car ever?

Using the current machine learning algorithms? Not likely. Imagine releasing a new airplane and telling the pilot - "this airplane flies well in 99% of scenarios, but 1% of the time it refuses to respond to pilot commands and kills everyone on board. We don't know why it happens but we'll keep training the model". That's the state we're currently at with "AI"; as someone put it, it's alchemy. If it works, we don't know why. If it doesn't, we don't know why - but we tune the parameters until it seems to work. We cannot prove it will work for every possible input. ... Not exactly a scientific approach by any means. Such self-driving cars might be accepted in USA but I imagine they'd quickly get banned in Europe after a few random fatalities.


In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.59 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.

https://www.tesla.com/VehicleSafetyReport
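For scale, those quoted figures convert to crash rates like this (numbers taken straight from the quote above; whether the populations are comparable is debated in the replies):

    miles_per_crash = {
        "Autopilot engaged": 4_310_000,
        "Tesla, no Autopilot": 1_590_000,
        "US average (NHTSA)": 484_000,
    }

    for name, miles in miles_per_crash.items():
        rate = 1_000_000 / miles  # crashes per million miles
        print(f"{name}: {rate:.2f} crashes per million miles")

    # Autopilot engaged: 0.23
    # Tesla, no Autopilot: 0.63
    # US average (NHTSA): 2.07

Note that Autopilot miles are mostly highway miles, so the three denominators cover quite different kinds of driving; that selection effect is part of why the comparison is disputed downthread.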


there are a few things to unpack here

1) Autosteer and active safety features are not "autopilot"; it's adaptive cruise control.

2) if they think something bad is about to happen, it beeps and hands over control, thus it's the driver's fault, as the system wasn't engaged at the time of the crash

3) it's not their self driving tech.


Exactly. Steering assist and active safety features are standard on every upper-class car you buy today.


1) Autosteer + active safety features is exactly what autopilot is. What do you think it is?

2) Tesla counts a crash as Autopilot's fault if it occurs within 5 seconds of a disengagement.

3) It's part of their self driving tech, but it's different to FSD - yes.


“FSD” and “Autopilot” are both SAE level 2 systems, which by definition, never take over driving responsibility from the driver.

They may have the world's best driver-assist features, but they have yet to produce a product that drives itself in any capacity.

https://www.motorweek.org/images/SAE_Chart.png


SAE is good for classification but not for evaluating performance. By definition autosteer isn't "driving", but it does effectively drive for you at the level you'd expect: it can take you 300 miles on a highway without disengagements, and the $12k package can even overtake cars slower than you.


You’re only looking at one metric of “performance”: the number of conditions it operates in.

The SAE is looking at a different performance hurdle between levels 2 and 3: reliability.


That's my point. Mercedes might be able to operate unsupervised on some predefined map of highways under a set of conditions and at a certain speed, but that doesn't mean it's a good product that solves real problems if it can't take over driving unsupervised on a regular road trip where you're driving 70+ mph.


> Autosteer + active safety features is exactly what autopilot is.

If you ask a normal person on the street what "autopilot" means in a car, you'll get a range of answers. But the consistent opinion is that autopilot means you don't need to concentrate. Most will say it's automatic driving, and will drive for you.

But autosteer requires concentration. It's just sparkling lane assist.

> it's different to FSD - yes.

Which is the point. It's marketing fluff. Autopilot isn't really that; it's adaptive cruise control with lane assist. That's the thing that rankles. All of this stupidity, injury, and noise comes from a marketing decision - one designed to cover up that the CEO over-promised and wildly under-delivered _yet again_.


Autopilot in a plane is cruise control and flight heading (lane assist). Is that not the same thing?


"Tesla's Favorite Autopilot Safety Stat Just Doesn't Hold Up": https://www.wired.com/story/tesla-autopilot-safety-statistic...


There isn't actually any substance to that article. Why don't the numbers hold up?


It's covered in the article.

"NHTSA's Flawed Autopilot Safety Study Unmasked (2019) - The safety regulator's claim that Autopilot reduces crashes by 40% was based on flawed data and analysis, which it attempted to keep secret.": https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-...

"Tesla Autopilot Safety Stats Said Imbued With Statistical Fallacies, Interpret Cautiously": https://www.forbes.com/sites/lanceeliot/2019/06/09/tesla-aut...

"In 2017, the feds said Tesla Autopilot cut crashes 40%—that was bogus": https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


These are refuting NHTSA's numbers, not Tesla's numbers.


According to the article it seems to be:

1. NHTSA has reiterated that its data came from Tesla, and has not been verified by an independent party (as it noted in a footnote in the report).

2. Second, it says its investigators did not consider whether the driver was using Autopilot at the time of each crash. (Reminder: Drivers are only supposed to use Autopilot in very specific contexts.)

3. And third, airbag deployments are an inexact proxy for crashes.

... which all sound like really flimsy reasons to conclude it "doesn't hold up." A more honest summary based on those would be something like "hasn't been verified by a 3rd party yet."

It's incredible how openly the media makes things up about Tesla, likely just to generate page impressions.


That article is pretty old. More recent analysis is much more specific as to how the data is flawed, see tweet and associated paper:

https://twitter.com/Tweetermeyer/status/1488673180403191808

The primary issue is that the data doesn't adjust for road classification in any way. Highway driving, where autopilot is used, has considerably fewer crashes per mile than city driving. Tesla compared Autopilot's rates against all driving rather than just highway driving which would be the relevant metric.

That adjustment alone almost completely eliminates any safety advantage of Autopilot before you get into any of the other adjustments like age.
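
The mix-shift point is easy to see with a toy calculation. Nothing below is real data - the rates and mileage splits are invented purely to illustrate how an unadjusted comparison can flatter a system that only runs on highways:

    # Hypothetical numbers only: shows how comparing Autopilot miles against
    # *all* US driving can mislead when Autopilot is mostly used on highways.
    HIGHWAY_RATE = 0.15   # assumed crashes per million highway miles
    CITY_RATE = 0.60      # assumed crashes per million city miles

    all_driving = {"highway": 0.35, "city": 0.65}   # assumed typical mileage mix
    autopilot   = {"highway": 0.95, "city": 0.05}   # Autopilot used mostly on highways

    def blended_rate(mix):
        """Crashes per million miles for a given highway/city mileage mix."""
        return mix["highway"] * HIGHWAY_RATE + mix["city"] * CITY_RATE

    print(f"All driving:     {blended_rate(all_driving):.3f} crashes / M miles")
    print(f"Autopilot miles: {blended_rate(autopilot):.3f} crashes / M miles")
    # Autopilot looks ~2.5x "safer" even though, by construction, it is exactly
    # as safe as a human on every road type - the gap is pure mix shift.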


So being safer on highways should be completely disregarded? Okay so this Mercedes news should also be completely disregarded


Tesla's data doesn't even show that they are safer on highways; I don't think you understand the original criticism.

It's like this:

Tesla: Look how great we are, our apple is redder than their orange.

Critic: It doesn't make sense to compare the colors of those things in that way. Here's why ...

You: So being a redder apple should now be disregarded?


Tesla’s data shows autopilot is safer (crashes less) than not on autopilot. The only question mark I’ve seen in this discussion is on NHTSAs data regarding Tesla.

Could you explain the flaw in Tesla data? How is crashes/mile not a good proxy for safety?


Tesla counts any crash above 12 mph. Airbag deployment isn't required.
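
For concreteness, here is a small sketch of the counting rule as described in this thread (the thresholds and field names are an illustrative paraphrase, not Tesla's published methodology):

    from dataclasses import dataclass

    @dataclass
    class CrashEvent:
        speed_mph: float
        airbag_deployed: bool
        seconds_since_ap_disengaged: float  # large value if Autopilot was never engaged

    def counts_against_autopilot(ev, grace_window_s=5.0, min_speed_mph=12.0):
        """A crash counts against Autopilot if the system was active or had
        disengaged within the 5-second window, and the impact was above
        ~12 mph or deployed an airbag/active restraint."""
        ap_recently_active = ev.seconds_since_ap_disengaged <= grace_window_s
        severe_enough = ev.speed_mph >= min_speed_mph or ev.airbag_deployed
        return ap_recently_active and severe_enough

    # Disengaged 3 s before a 20 mph impact -> still counted against Autopilot
    print(counts_against_autopilot(CrashEvent(20.0, False, 3.0)))  # True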


>In the 4th quarter, we recorded one crash for every 4.31 million miles driven in which drivers were using Autopilot

But this is not FSD; it's a combination of a human + some driver assistance. Come back when Elon discloses how many times the human was forced to intervene to prevent a crash.

I wonder why Tesla fans think these stats prove that so-called FSD is safe. Don't you understand stats? Did the PR trick you? This proves that some assistant, like one keeping a safe distance between cars, is safer. Make all the data public and let us compute the real stats, please.


That's an apples to oranges comparison, due to differences in car quality and situations driven.

If you only engage Autopilot in the easiest situations to drive in, it might look safer artificially because of that.

Comparing a Tesla to all vehicles means old cars with less safety features enter the picture, skewing the statistics again.


> So how does this compare to Teslas FSD?

Well, it exists. Courtesy of Wikipedia:

> In December 2015, Musk predicted "complete autonomy" by 2018. At the end of 2016, Tesla expected to demonstrate full autonomy by the end of 2017, and in April 2017, Musk predicted that in around two years, drivers would be able to sleep in their vehicle while it drives itself. In 2018 Tesla revised the date to demonstrate full autonomy to be by the end of 2019. In February 2019, Musk stated that Tesla's FSD capability would be "feature complete" by the end of 2019. In January 2020, Musk claimed the FSD software would be "feature complete" by the end of 2020, adding that feature complete "doesn't mean that features are working well". In August 2020, Musk stated that 200 software engineers, 100 hardware engineers and 500 "labelers" were working on Autopilot and FSD. In early 2021, Musk stated that Tesla would provide SAE Level 5 autonomy by the end of 2021 and that Tesla plans to release a monthly subscription package for FSD in 2021. An email conversation between Tesla and the California Department of Motor Vehicles retrieved via a Freedom of Information Act request by PlainSite contradicts Musk's forward-looking statement.

https://en.wikipedia.org/wiki/Tesla_Autopilot#Predictions_an...

(Also, incidentally, Musk's decision to use cameras instead of LIDAR – for depth perception etc – is insane and is widely regarded as such. He's doubled down on saying LIDAR is rubbish, since the entire industry has bet on it, and presumably Tesla is too far into the woods to reverse.)


> Musk's decision to use cameras instead of LIDAR – for depth perception etc – is insane and is widely regarded as such

Let me know when humans get lidar implants because they can't perceive depth via their eyes.


Catchy, but not a particularly good criticism. Few people believe that using cameras for this is impossible, only that it makes the problem so much more difficult that it's insane to think that that's the way to go if your goal is to be a leader in self driving.


Good analysis.

MB also uses cameras extensively. LIDAR and maps are used in addition.

I wonder whether these approaches will one day meet in the middle.


> Right now, Drive Pilot can only engage at speeds under 40 mph (60 km/h in Germany)

I just felt the urge to clarify that this is written with the intention to mislead readers, and that 40 mph is also 60 km/h in other parts of the world.


It's poorly written, but... mislead readers into thinking _what_? That Germany has its own special kilometers (or possibly hours?)

I think it's just bad editing.


Sorry, the joke got lost in the text medium! After some minutes I thought that I should have added a "/joke", an "/s", or something like that, but in the end I forgot to do it.


But 40 mph is 64.36 km/h, isn't it? So perhaps it turns on under 60 km/h in Germany and under 64.36 km/h everywhere else? For some weird legal reasons?


It's 60 km/h everywhere else than the US, where it's 64.36 km/h for weird reasons.


No need to attribute malice. Why would that be intentional? It's just ambiguous drafting.


Everyone responding to you earnestly without understanding the joke... lol


I may be too cynical about the car industry, but wouldn't this be a great strategy from Mercedes to hamstring competitors? If they can force stricter regulations that they know/suspect competitors cannot meet, they could prevent these competitors from selling cars with their 'FSD' tech? This would level the playing field, allowing the German car manufacturers to catch up, or at least hold off competition for a while.


100% this. They are so far behind and every move will be to use regulation against competition


“If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.”

It’s still my problem if I’m inside the car whilst it crashes.


“If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.”

As a long-time, grey-beard MB owner - and honestly, it's not just them - the sheer amount of noise and distraction in modern cockpits is absurd now.

By this I mean, reversing sensors, slow-forward speed sensors, massive LCD screen (which may or may not be touchscreen) positioned at windscreen level, the right-hand "-ometer" that's configurable, a dial, a trackpad, a steering wheel with at least four buttons, and two 'joystick' things. And 3 instrument clusters on stalks. Heating buttons, fan, windscreen control, radio button, all down the bottom of the centre console where I have to take my eyes off the road.

It's a shit show.

My point? I've never had a car as distracting to drive as my current MB, where I have to take my eyes off the road for a second just to hit the demister button. Versus a car from the 1990s - no centre LCD, so the heater/demister buttons were in your peripheral vision. No touchscreen - regular buttons with tactile feedback that even had braille-like raised paint so you could figure out where your hand was without taking your eyes off the road. Don't even get me started on trying to use the radio or equaliser.

I hope, I really do, that autonomous driving works and reduces crashes, but I'm astonished there hasn't been a noticeable uptick in crashes based on just what I've written.

From the article, this is yet another button on the steering wheel. Probably because they've run out of dashboard space for all the other buttons. Auto-stop, car alarm off, assistance, sport mode, are at least 4 buttons I never use.

Where's the Jony Ive for car cockpits?


> Where's the Jony Ive for car cockpits?

That's the single touchscreen world of the Model 3 and Y :)


And that's exactly what I imagine they are factoring into their risk analysis when working out their potential liabilities.


Not much at below 60 km/h max.


My mom is a doctor and once was called to the site of a frontal crash that happened at 100 km/h (two cars moving at 50). Only to examine the bodies, there was no one left to save.


This system doesn't support two-way roads though, so a frontal collision is unlikely.


Mythbusters found that two cars colliding at 50mph is not equivalent to one car hitting a brick wall at 100mph. It resembles one car hitting a brick wall at 50mph.


Was this work published and peer reviewed?


Think of it this way: the two cars (if they’re the same weight) create a virtual immovable object at the impact point.

“Brick wall” is doing a lot of work there - note that it’s not the same as hitting a stationary car at 50mph, that car would move. We rarely encounter actual immovable objects on or near the road, which is why a 50+50 head-on is more violent than a 50+0 collision into something softer.


It makes sense as well. If we assume each car has the same mass, and picture the collision from the side, it would appear as if each car was hitting an invisible brick wall that separated them. Or, picture hitting a stationary car while driving 100km/h (perhaps on some frictionless surface) - afterwards, both cars would be sliding at 50km/h.
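
A quick back-of-the-envelope calculation (assuming two identical ~1500 kg cars; the numbers are only for illustration) shows why: each car in a 50+50 head-on has to dissipate the energy of a single 50 km/h wall hit, while a 100 km/h wall hit carries four times that energy into one crumple zone:

    def kinetic_energy_kj(mass_kg, speed_kmh):
        """Kinetic energy in kilojoules."""
        v = speed_kmh / 3.6   # km/h -> m/s
        return 0.5 * mass_kg * v ** 2 / 1000.0

    m = 1500.0  # assumed mass of each car, kg

    head_on_total = 2 * kinetic_energy_kj(m, 50)   # two cars, both at 50 km/h
    per_car = head_on_total / 2                    # shared by two crumple zones

    print(f"per car, 50+50 head-on:   {per_car:.0f} kJ")                    # ~145 kJ
    print(f"one car into wall at 50:  {kinetic_energy_kj(m, 50):.0f} kJ")   # same ~145 kJ
    print(f"one car into wall at 100: {kinetic_energy_kj(m, 100):.0f} kJ")  # ~579 kJ, 4x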


Nowadays it depends on how new the car is, as newer cars are magnitudes safer than anything 20+ years old.

https://youtu.be/C_r5UJrxcck


Deadly crashes are not uncommon at half that speed, and risk does not grow linearly.


Crashes at 30 km/h will kill cyclists and pedestrians outside the car. The occupants of modern Mercedes-Benz cars will survive 100% of the time.


What are they doing on a controlled access highway?


Exiting their car after it caught on fire in a live lane.


They should walk in front of their car then where no other cars would be.


For sure not in a S-Class or EQS.


seems you're both thinking in absolutes


It was/is pretty clear that German car makers are going to supersede the cult of Tesla. Better engineering, history, support, network, trust, and an actual taste for design. Bonus: leadership teams that don't post gibberish about crypto.


You may want to revise your thinking. Tesla attracts the best engineers

https://electrek.co/2020/11/11/tesla-most-attractive-company...


Electrek and the writer of that article are heavily biased towards Tesla. Also, those are inexperienced students, not established engineers. And that article claims that Elon is "the world's most brilliant engineer", but Elon isn't even an engineer and has never engineered anything.


So do you just go around making up stuff so it fits your world view?

Elon is the chief designer and chief engineer at SpaceX. If you watch the video referenced in this article and are not convinced that Elon does engineering, then I don’t know what to tell you.

https://observer.com/2021/09/elon-musk-spacex-title-design-e...


He hasn't designed or engineered anything. He doesn't have an engineering degree. Giving yourself a title of engineer at your own company doesn't make you an engineer automatically.


You're just completely wrong, and judging by your comment history, you have a huge bias against Elon Musk and Tesla so trying to have a real discussion with you on this subject is impossible.


You missed the /s at the end


This doesn't seem unreasonable.

If you consider that it is the driver who is liable today for road traffic incidents (including failing to adapt to environmental conditions)... then why wouldn't the "driver" still be liable when the driver changes?

https://www.synopsys.com/automotive/autonomous-driving-level...

In this it's plain that Level 3 autonomy is the first level where it's considered that the car is the driver. This is what Mercedes are proposing, to accept that the sum of software and hardware that they provide at Level 3, assuming it's used in the situations and conditions described, is the driver and that they are liable.

Those caveats seem reflected in what they're proposing (if the article is accurate), and it seems reasonable.


Legal liability seems like a much cleaner way to define autonomy for the general public than the SAE levels. If the person is responsible, anything the car is doing is assistance or aid. If the manufacturer owns liability, it's some level of autonomous. There needs to be some kind of law about how long the car has to alert the driver before handing over liability, but beyond that it's a much cleaner divide.

This would also solve issues like Tesla naming their level 2 systems 'Autopilot' and 'Full Self Driving'. I love my Model 3, but those names are deeply misleading at best and downright fraudulent at worst. The FTC really needs to step in and regulate what manufacturers can claim based on liability.


"If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours."

Only legally speaking. If the car drives me into an incoming truck, it still is very much my problem.

So currently it is only enabled for very clear situations: driving under 40 mph (60 km/h) in traffic jams on a motorway (where there usually is a physical separation from the other lane, except in roadwork situations, but I would guess the autopilot will be disabled then).

So it should be quite safe. Otherwise they would not accept the risk. German car makers are very conservative.

Edit: correct speedlimit


Well, that is the catch: you can engage the system only on certain roads which also happen to have a divider between the traffic directions. So it is basically impossible to be facing an incoming truck there.


Still, driving at 60 km/h into an obstacle can result in serious injuries, despite airbags.

I hope/think they considered pointy metal, like you sometimes see on the back of trailers in front of you, stabbing through the windshield, for example?


A proper Mercedes developed algorithm will only ever use the left lane, so no risk to have a trailer in front /s


Right, the detection of obstacles needs to work perfectly. This is what LIDAR is good at: it gives pretty precise 3D information about your immediate surroundings. Especially when combined with high-resolution maps, obstacle detection should be very good.


Minor note: it's 40 miles /h, which is about 60km/h.


Whatever Mercedes says may not be what the law says. It may not be possible for them to take the liability from you. It may be criminal negligence or something else to not operate a car on a road in a state that says you have to.

Maybe they can agree to pay any civil liability you'd have, but taking a nap after you turn a car on may open you to criminal liability that Mercedes can't intervene in.


It is what the law says in this case.


Which law in which state in which circumstances?

If a knife salesman tells me his knives can't hurt people, and signs a contract saying he'll assume all liability, I am nevertheless liable when I stab someone with it.

The judge will say you damn well should've known you couldn't go to sleep in a car you're operating, and are in fact culpable for your negligence.

"But Mercedes said!"

"If Mercedes told you to jump (or drive) off a bridge..."


The law in the one country this currently is offered in, I'm not sure why other places would matter.


Putting your money where your mouth is, is probably the only way to obtain acceptance for semi-autonomous driving systems.

I imagine airplane manufacturers have long been on the hook if their autopilot engages Kill All Humans mode, despite the best efforts of Boeing's lawyers to blame some of the humans their software killed.


Autopilots in airplanes have nothing to do with autonomous driving.

Airplane autopilots are so simple that they can be implemented with entirely analog circuits. They are simple feedback control loops that operate off a single scalar variable in each axis.

There is ample time for the pilot to correct any “mistakes” the autopilot makes.

There is no expectation that the autopilot operates independently of the pilot’s supervision and judgement.

In terms of decision-making and autonomy and liability - an airplane autopilot is really analogous to cruise control in a car, nothing more. It operates the flight controls to maintain simple parameters chosen by the pilot, and that’s all.
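
To make the "simple feedback loop" point concrete, here is a minimal sketch (purely illustrative, not any real avionics code) of a single-axis heading hold:

    def heading_hold_step(target_deg, current_deg, kp=0.8):
        """Proportional controller: turn one scalar error into one control command."""
        error = (target_deg - current_deg + 180) % 360 - 180  # wrap to [-180, 180)
        return kp * error                                      # control deflection, degrees

    # Toy simulation: the aircraft gradually turns toward the selected heading
    heading = 30.0
    for _ in range(20):
        command = heading_hold_step(target_deg=90.0, current_deg=heading)
        heading += 0.1 * command   # crude plant model: response proportional to command
    print(round(heading, 1))       # drifts from 30 toward 90

There is no perception, prediction, or planning anywhere in that loop - which is the gap between an aviation autopilot and what a self-driving car has to do.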


Maybe the analogy would be more accurate comparing to autoland systems? https://en.wikipedia.org/wiki/Autoland


AFAIK automatic landing systems just get the plane into the right 'landing parameters range', but won't take action if anything unforeseen happens (like a blocked runway).


Right, the Garmin Autonomi Autoland system is intended only for emergency use, to get the airplane on the ground if the (single) pilot is incapacitated. It's not intended for routine autonomous flight.


I think the earlier comments are referring to CAT III autoland systems using technologies like ILS, which are used routinely and are good for 0 ft visibility - that is, an approach in cloud all the way down to the ground.


Not only that, pilot training assumes the autopilot can disengage at any time.


On top of that, aviation autopilots operate in one of the most controlled and stable environments. Ground based individual traffic is the polar opposite of that.


As aviation people eloquently put it: https://en.wikipedia.org/wiki/Big_sky_theory I think there should be a correlative 'small road theory', if only to underscore the difference.


It seems the non-existence of a small-road theory is in itself a problem when it comes to self-driving cars...


Which is why (imo) Autopilot was a spot-on name for Tesla's initial driverless feature.


Except the general public seem to massively over-estimate how capable aircraft Autopilots are, and then expect Tesla's Autopilot to live up to their inflated expectations.


the general public doesn't fly planes, so to them every flight is getting on a plane, falling asleep, and magically teleporting from one city to another. They have no perspective on just how much an autopilot doesn't do.


I'm talking solely about liability when the system that you sell as being able to operate on behalf of a human gets it wrong.

Comparisons of the systems is irrelevant to the very simple point I made - putting your money where your mouth is garners trust.

As for your claim about autopilot being simple, and humans having ample time to correct its mistakes, well, two 737 MAXs and one Air France flight are very straightforward counterarguments.


Yeah, that's not autopilot. FLCS, fly-by-wire, drive-by-wire are all different things. Autopilot is really specific. If you want to discuss this productively, you should probably learn the difference.


I apologise that I got the precise terminology wrong when I used a term that most humans understand to mean "when the computer controls stuff your plane is doing on your behalf".

My biggest regret about getting it wrong is that it gives pedants something to fixate on and split hairs about, in lieu of engaging with the actual point.


Ease up a bit - not everyone has to be an aviation expert.


That's true, but some aircraft are now equipped with TCAS and GCAS which will automatically seize partial control from the pilot and maneuver to avoid some types of crashes. So far I think those systems have been 100% reliable but the manufacturers would be liable if they caused a crash.


> Airplane autopilots are so simple they they can be implemented with entirely analog circuits.

For reference: https://www.398th.org/Research/B17_AFCE.html


Lots of disclaimers. Plus, what happens if the system disengages and you crash 5 seconds after because you were not expecting to? Technically they are not responsible anymore, it seems.

From the article:

"...Handing over driving responsibility completely requires extremely particular circumstances. Right now, Drive Pilot can only engage at speeds under 40 mph (60 km/h in Germany) on limited-access divided highways with no stoplights, roundabouts, or other traffic control systems, and no construction zones. Eligible roads must be mapped by Mercedes for Drive Pilot use (similar to GM SuperCruise); the automaker has already mapped every such highway in Germany, and most of those in Nevada and California. The system will only operate during daytime, in reasonably clear weather, without overhead obstructions. Inclement weather, construction zones, tunnels, and emergency vehicles will all trigger a handover warning. And no, you can't close your eyes or go to sleep while it operates..."


> Technically they are not responsible anymore it seems.

No, technically whether they are liable or not reverts to normal principles of product liability, rather than their special acceptance of liability.

But manufacturers' liability for harms caused by their products, while varying in detail from jurisdiction to jurisdiction, tends to be pretty broad in most of the West (incl. the US, despite its otherwise weak consumer laws).


Are there even any limited-access divided highways in California where speeds under 40 mph are safe? I'm not sure what they would have mapped.

As the default speed limit for such a highway would presumably be 65 mph (and traffic usually much faster), anywhere with such lowered speeds would presumably have special circumstances that would probably make it unsuitable.

The guaranteed 10 second handoff is impressive, however.


The car gives you a 10 seconds heads up.

From the article:

"[...] Unlike all currently available driver-assist systems, Drive Pilot is designed to give drivers a 10-second warning before switching off; engineers had to make sure that, in every situation, the system would safely and faultlessly hand over control. [...]"


> Kill All Humans mode

I think they discovered that it was the pilots who were accidentally leaning on that switch when getting up for a coffee break.

Terrible decision to put it right next to an armrest...


> you are no longer legally liable for the car's operation until it disengages

Until it disengages. Not until you disengage it. So close, yet so far to go.

Unfortunately, since this isn't full self-driving, I don't know if they can do much better. If it hits the end of the road, it needs to hand control back.


Actually, that is exactly the right thing to do! If something goes wrong when Autopilot is on, I think the manufacturer has to prove that it was a human error and not otherwise.


We don't want to say "no mistakes allowed" though, that would be going too far.

It's good for manufacturers to err on the side of limited capabilities to keep things safer, but we don't want them to err too far in that direction. And if they're providing ongoing insurance with no revenue, that's an incentive to be overly restrictive.


In a civil lawsuit for wrongful death (or whatever) the manufacturer doesn't have to prove anything. They are treated the same as any other defendant.


Self-driving cars will be popularized by people not having to pay (increasingly expensive) accident insurance. Once human driving becomes prohibitively expensive, it'll be reserved for the 'elite' who get a kick out of driving a car themselves.


The way things are looking right now it's the autopilot owners who will have to pay an added insurance premium if the tech gains traction.


We can't repair our cars, we have no privacy in when and where we drive, and now we don't even drive them nor are we responsible for them. Essentially the car manufacturer now plays the role of a transportation service provider.

OK, I'm not insisting this is bad. But then what's the point of buying a car of your own instead of just calling one like a taxi when and where you need it?


Agreed but I'm totally cool with renting a self-driving car on demand. I don't want to own and maintain a car at all, if I can avoid it.

It sounds like a good self-driving car is going to be really expensive, too, using technologies companies like Tesla are afraid to implement (e.g. LIDAR). Renting by the minute probably makes a lot more sense financially, too.


I used to work on autonomous cars back in the early 2000s. Legal responsibility was considered one of the biggest limiting factors for autonomous cars.

Bold move.


I never understood why others are not required to do so. Even when you stand outside your Tesla and "summon" it to you and it drives into something, you're somehow legally the driver?


It may not really be all that different between Tesla and Mercedes. The conditions that Mercedes is releasing this under sound like they will give them plenty of room to say "not covered" and need lawyers involved. Similarly, if you're using Tesla's summon feature and it drives into something, you can opt to get lawyers involved. The main difference is what these companies are telling you upfront; Mercedes seems to be saying something to the effect of "don't worry," and Tesla is telling you that you must pay attention.


Considering you need to hold the summon button down for it to proceed, yes you are?


If you were controlling the car as you would an RC car, then sure. Considering that you are not in control of the car, I don't see why you should have any responsibility.


You are controlling it with the single button. You must look at the car the whole time, and release the button in case there is a danger.

So you are in control of the car. Not the trajectory, but you control its velocity. And you are responsible, according to Tesla.


> So you are in control of the car. Not the trajectory, but you control its velocity.

So you are not in full control of the car then. Suppose the car decides to go in reverse when it should go forward and hits a pedestrian who was tying his shoelaces behind it. Can you explain how that was the action of a person pressing a button with no control over the trajectory of the vehicle, and why we should consider them responsible for that action (of backing up) instead of the car manufacturer?

> And you are responsible according to tesla.

That is irrelevant for this discussion.


Easy: You shouldn't have continued pressing the button with people in danger of getting anywhere near the thing while it's in motion.

You became liable as soon as you weren't paying enough attention that someone could apparently sneak up next to the car to tie their shoelaces.


> Easy: You shouldn't have continued pressing the button with people in danger of getting anywhere near the thing while it's in motion.

What is the point of the feature if I have to walk to the car and check that there are no obstructions around it? Should I then walk back to the place from which I intended to use the feature, or should I maybe just get in the car, seeing that I'm expected to circle it before using the feature?


I don't know, maybe there isn't any point in it. I don't own a Tesla, and have never used or seen this feature.

The rest of society is under no obligation to scale its expectations of your liability in proportion to you getting a perceived return on investment out of your expensive toy.

I.e. maybe you should always walk up to and around the car before using the feature in a public parking lot, and only make use of it to e.g. have the car back out of your own garage, provided that you have a clear view of your driveway being unobstructed.


Driving it out from the back of a mostly empty Costco parking lot in the middle of the workday is pretty much the ideal use case, since you can easily spot the car and keep track of what is surrounding it as it comes to you. The app specifically asks you to make sure you have line of sight to the vehicle, and will not work if it thinks you're too far away.


So who is responsible?


I don't see how that makes me "in control" of the car; I can perhaps be lucky and manage to stop it if I see some obvious mistake coming, but you sort of trust it to "miss" things as well if you use that feature.


Hold to summon is a configurable option.


Neat, but the article seems a little rosy-eyed. eg:

...the software uses microphones and cameras to detect emergency lights and sirens far enough in advance to issue the full 10-second warning before manual takeover.

That doesn't help when the lights and sirens are switched on a few dozen cars away and reach you in 6 seconds.


Emergency vehicles don't and can't expect full compliance and open road 6 seconds after they turn on their lights and sirens, that would be irrational. It takes time to move out of the way, especially in the sort of traffic this is designed for (low-speed stop-and-go highways).


You are not supposed to be asleep at the wheel with any of these "autopilots", nor are you supposed to be reading books or looking at your phone. What are you doing that you can't take back control of your wheel in less than a few seconds?


Article clearly says that "In Germany, drivers can legally use their cellphones while Drive Pilot is engaged, a first for semi-autonomous systems". However, you can't go to sleep.


If your car is yelling at you to grab the wheel, I'm sure you can take two seconds to put your phone in your pocket. With one hand even, so the other can go to your wheel, and your feet are still nice and snug on the pedals, ready to brake should you need to.


I think the crucial thing here is, in this case if you don't do any of that, Mercedes will still take responsibility for a crash until the ten seconds is up.

Therefore they need to be very confident in that ten second rule, and naturally we look for cases where it might not work out well.


And your focus and situational awareness are out of the window.


Hmm, I know it's not the same, but I see a lot of people fall asleep on public transit only to be jolted awake, look around confused for a few seconds, and then bolt for the doors because they realize they're about to miss their stop. It's great to say "you can't go to sleep", but I have a feeling people will fall asleep anyway, especially commuting to and from work. Based on the disoriented, panicked, suddenly-awake people I see on public transit, I think the 10-second warning could scare somebody awake, have them panic, and cause an accident.


Maybe I'm a cynic, but I'm pretty sure someone at Mercedes just did the math, and concluded that the number they're expecting to pay in damages is a smaller number than the profit from the extra sales.

Nothing wrong with that as it is, but while it can be seen as a signal that "Mercedes is super sure their cars won't kill anyone" it should be seen more as a signal that "Mercedes is pretty sure they'll make a profit even when subtracting the price of killing people."
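
If you want to see what that math could look like, here is a sketch with entirely made-up numbers (none of these figures come from Mercedes):

    extra_cars_sold   = 20_000       # assumed additional sales attributed to Drive Pilot
    margin_per_car    = 15_000       # assumed EUR profit per extra car
    crash_probability = 1 / 10_000   # assumed at-fault crashes per car over the liability period
    avg_payout        = 2_000_000    # assumed EUR per at-fault crash

    expected_profit  = extra_cars_sold * margin_per_car                  # 300,000,000 EUR
    expected_payouts = extra_cars_sold * crash_probability * avg_payout  #   4,000,000 EUR

    # Accepting liability is rational whenever the first number comfortably exceeds
    # the second - which says nothing about whether the system is safe in an
    # absolute sense, only that the expected cost is priced in.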


I love this. I genuinely do. But I can't help but imagine some dystopian future where they regret making this decision and start fighting legal cases on technicalities like: oh, they crashed after self-driving disengaged, so it's the human's fault. Or even worse, they program the self-driving to disengage when a crash is imminent. When companies take on liabilities, they're very unlikely to actually pay for them, leaving the common folk to struggle.


I'm more concerned about "gray areas" where I'm driving but a proprietary computer system in my car has the real control, so at a certain point it might decide to brake or steer, or refuse to, for some programmed reason I don't know about.

Just imagine current emergency braking systems: you see an accident and you decide that the best option is to drive into another vehicle, or even a wall, for instance to avoid crashing into a school group on a trip. Current systems do not and cannot know that; they'll try to stop your emergency maneuver, and given the little time you have, even if they can formally be deactivated, you can't deactivate them in time. Who will be responsible, and how do you prove what really happened?

That's the issue with automation. I can't trust black-box logs from a proprietary black box designed, built, and installed by the same company that sells the car. Who can? And how many practical resources do even public authorities have to really investigate?


> Right now, Drive Pilot can only engage at speeds under 40 mph...

Ah, so maybe this would be great for getting around town.

> ... on limited-access divided highways with no stoplights, roundabouts, or other traffic control systems...

Oh, so not where the majority of driving happens.

> ... and no construction zones.

And none of Pennsylvania.


I see this as a major signal to the outside world that Mercedes feels confident in its technology.

Mercedes must really think it has a better system than its competitors.

Either that or these expenses are going to be considered part of its marketing spend?

Another thought: isn't it still usually the responsibility of the at-fault party's insurance company to pay out for crashes anyway? Or, in no-fault states, each insurance company covers their own driver and passengers, so it's not really important who causes a crash.

I wonder if this announcement is legally meaningless in most cases... but I'm no expert in this area.


I was imagining a play like this utilizing insurance to smooth out the wrinkles. Seems like a fairly good bet that the restrictions are designed to essentially eliminate fault in the event of an accident.


This really sets a new standard in expectations around how safe and reliable these products are. TSLA probably has a significant head start in terms of technology - but here's a new (call it marketing) feature by Mercedes that actually leapfrogs Tesla in terms of perception of the product.

I think in 3-5 years everyone else will have caught up to tesla and we'll have a more level playing field, mainly in terms of the technology and AI. And "marketing" features like this help accelerate the closing of the gap.


> If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.

And the person you hit. Unbelievable that we value these metal boxes on wheels more than human bodies.


Civil liability is one thing, but what about criminal liability?


Not a lawyer, but I think that if you don't intend for the crash to happen and don't act negligently, you're not criminally liable. If the automated process is designed not to require the attention of the operator, it should not be considered negligent not to pay attention to it.

Maybe a similar case is business jets, which used to always require two pilots, even tiny ones like the Learjet 23. Flying one of them alone might not only be against the regulations, but might even be criminally negligent if you crash and kill someone. But newer business jets have more automation and are designed for single-pilot operation, so operating one alone would not be negligent.


At least in Germany the law was amended to allow this specific scenario.


But how far does this go? Does this mean that, after a root-cause analysis, a particular engineer could be criminally charged?


It is the same as with any product on the market. The producing company is liable if the product causes harm when used and it is the product's fault that the harm was caused - for example, if, let's say, a TV blows up during normal usage. This is first of all liability for damages. A criminal charge only happens if it can be shown that there was active wrongdoing leading to the damage, like neglecting usual engineering practices or even knowingly selling a defective product.


I am no lawyer, but there is certainly precedent for companies delivering faulty products and causing damages and injuries.

At some point a judge will decide.


If they go through with this, awesome! I expect that they will walk some of this back before the release, though. (whether through extreme limitations on the situations they will take responsibility for, delaying release, extremely limited volume, etc.)

That said, every material step forward is a great thing. If we started out in the autonomous world that we're going to end up in, the idea of humans getting behind the wheel of a vehicle would seem incredibly crazy.


The next question is how Lloyd's of London will handle the insurance risk at scale. Clearly, Mercedes is going to need one heck of an insurance contract to justify the policy. What happens if the system is hacked and the number of payouts required reaches stratospheric proportions? What quantity of secondary insurance will be needed for the insurer?

Fascinating questions.


I doubt it's even connected to the internet. Odds of being hacked would seem no different from cruise control of a random Toyota.


The big difference is that traditional automakers are almost entirely supplier-innovation-driven and their competitive advantage basically lies in the integration of these supplied parts and solutions. If there are problems with, for example, the Drive Pilot, they will likely blame the supplier of the system and force them to take responsibility/cost.


The hardware components may be supplier-driven, but who wrote the software? Isn't that from Mercedes? It's the software that makes the decisions.


There are almost no software developers working at the traditional automakers; they are working with consulting firms and software suppliers.

Volkswagen is the first traditional automaker showing large investments in software with CARIAD [1]; however, according to employee reviews, it's very poorly managed (by people who have never worked in software).

[1] https://cariad.technology/


Not really. They use the Nvidia Drive platform[0], and most of the software is from Nvidia.

[0] https://developer.nvidia.com/drive


I'd be interested to know what legal structure they're using here.

Does the system legally act as an "insurance policy" that will indemnify the driver against any damages claims from a third party? Does it have a monetary cap? (Car insurance policies vary, but might only cover $250k, which actually won't cover many multi-car accidents).


This is the only acceptable result for society, and if they limit responsibility it will only be due to industry lobbying and will benefit no one else. This can't be an externality for everyone else; if it is too much of a burden, simply don't sell your murderous half-baked self-driving and we'll keep going on like before.


> Once you engage Drive Pilot, you are no longer legally liable for the car's operation until it disengages

This is a step in the right direction - but although they may take responsibility for any financial penalty or civil liability, they can't control what a criminal court decides.


Traffic jam assistant is like the hello world of autonomous driving. It's basically an example project from Github. They are not even approaching the hard problems which Tesla, Openpilot etc. are solving.


> If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.

If my car plows into a tree at 40mph and I get turned to mush that still sounds like a "me problem" to me.


True.

The difference is that Mercedes will have to pay your relatives while Tesla will say "after 2 hours on autopilot the driver wasn't able to take over in a split second"

Neither company really cares about your life and wellbeing, but one is also risking their own money and brand image. I usually trust companies to be good at judging financial consequences


Fair enough, I was just irked by this sentence in the article and had to quip. Overall I'm still ambivalent on automated driving, feeling conflicted by claims that it is safer "on average" but simultaneously painfully aware of the shortcomings of machine learning, and susceptibility to attacks.


Tesla counts crashes within 5 seconds of a disengagement as "Autopilot crashes".


>>If the car crashes while Drive Pilot is operating, that's Mercedes' problem, not yours.

Unfortunately, no matter how much legislation you put in front of it, that's not how physics works.


All these fancy self drive cars I can't afford on a UK Principle Dev's salary so it's interesting but I'm not sure how mass market these things are or will ever be.


Remote job to the US will pay for one :)


If you work harder and longer you can buy a car you'll never have time to enjoy. Cool.


Work smarter not longer.


I'm sure you'll be able to afford to rent one, which is the ideal scenario for most people.


What is a "principle dev?"


A misspelled principal developer.


This is the only way this can work legally.

If the car causes a crash, the car manufacturer has to be liable. End of story.

As long as the self driving system performs well, it should be a win for everyone!


I assume they mean civil liability. I believe most states still require a driver behind the wheel and will level the criminal/driving penalties against that driver.


I wonder this as well. Unless it's the CEO/CTO going to jail, it's not actual responsibility... it's just "we'll pay you some tiny amount if we kill someone" YOLO


That system is hardly "self-driving". From the article, it will only work under 40 mph on certain highways. It's way behind other competitors.


It's a bold move that signals confidence in their own system. If it works out, others will probably soon follow. In 10 years cars will probably drive fully autonomously.


A bold press release isn't much of a prognostication aid.


Interesting. Let's see how this plays out for them.


I doubt this is legal. If your car kills someone, you're going to jail. Mercedes, the company, cannot do jail time; it can at most pay fines - and there's no legal system in the world today that would accept it.

And if it does, we'll just get a few enterprising folks delivering "hitman as a service" in no time at all (driving hacked Mercedes vehicles into people claiming no responsibility).


If you're insured with Tesla insurance, and you have a crash... who takes responsibility for the repair?


> On certain highways, below 40 mph, a Drive Pilot-equipped S-Class or EQS will take control of the car's speed, steering, and brakes to move you along in traffic.

Where can one find these specific highways that offer guaranteed legal responsibility? I seem to have missed it in the article.


Compared to the conditions under which the competition takes legal responsibility they seem quite extensive?


Well, they are doing it in infinitely more broad areas than any of their competition so far...


Germany


Saved you a click:

> “On certain highways, below 40 mph”


Quite predictable that a company using lidar in combination with video would take the lead over a company using video only.


You have a very distorted view of "the lead".


I guess it is subjective.

Is it stock market price, followers on Twitter, or another criteria?

My claim is that if you narrow it down to actual autonomous driving, lidar + camera + a big chip will beat camera + a very big chip for the foreseeable future.


What's your definition of the lead? To me, this is huge. Tesla has never been willing to take responsibility for any crashes, instead blaming the driver.


Tesla instead just crashes less.

I don’t know that anyone is shopping for cars based on legal liability of a crash. You are still in the car. You still are injured or die.


Do you have evidence Tesla crashes less than Mercedes? I'd be interested to see a comparison.


Mercedes don’t publish any data. They also don’t collect any data. So it’s difficult to compare.



