
I'm not sure exactly what your point is. The Tesla does sometimes require intervention; that's why it's Level 2. But it's still attempting to drive in significantly more complicated situations than this Drive Pilot thing. Does Drive Pilot stop at stoplights or make turns? I don't think so.

Regarding deceptive editing, plenty of people post videos of their Teslas doing squirrely things and them intervening. So it's not like it's a secret that you sometimes have to intervene.




We know Tesla cannot match Mercedes. We don't know whether or not Mercedes can match Tesla. Mercedes isn't reckless enough to let untrained fanboys play with defective software in two-ton vehicles on public roads.


"We know Tesla cannot match Mercedes" - how? You know this?

"reckless" "untrained fanboys" "defective software" - what is this tone? Why is it reckless? Why do the fanboys need training? Why do you think the software is defective? These are significant unjustified claims!

To me, it seems each company has a different strategy for self-driving, which aren't directly comparable. Beta testing with relatively untrained people on public roads seems necessary for either strategy though.


>how? You know this?

California seems to think so.

>Beta testing with relatively untrained people on public roads seems necessary for either strategy though.

Then why is Tesla the only company doing it?

Mercedes has an L3 system shipping today and they didn't see any need to endanger my life to build it.


Mercedes' system does not do most of the things Tesla's does, right? Such as stopping at stoplights, making turns, or doing anything at all off-highway. It's a significantly different product. Since Mercedes didn't try to do many of the things Tesla is trying to do, it's pretty difficult to claim those things aren't necessary just because Mercedes didn't do them, when they haven't even attempted to deliver the same feature.


Attempting and failing is clearly, clearly worse for the general public and in my opinion Tesla should be strictly liable.


It's not necessarily worse, since there is a person driving the car who can prevent the car from behaving badly. What's the safety difference between this and a regular cruise control, which will happily plow you into a wall or car if you don't intervene?

And, empirically, there's no evidence that these cars are less safe when driving this way. Tesla claims people driving with FSD enabled have 4x fewer accidents than those driving without it, and nobody has presented any data that disputes that.


"Tesla Again Paints A Crash Data Story That Misleads Many Readers" - https://www.forbes.com/sites/bradtempleton/2023/04/26/tesla-...

"...Several attempts have been made to reach out to Tesla for comment over the years since these numbers first started coming out, however, Tesla closed its press relations office and no longer responds to press inquiries..."


This critique of their impact report (I was referring to a more recent statement) only goes as far as saying the FSD beta is about as safe as human driving, not worse, which seems perfectly acceptable?


Depends on the average human driver. Especially if the average includes motorbikes.

Saying FSD on a Tesla has the same statistics as the general driver population paints a grim picture, as it puts it at strictly worse performance than its peer vehicles (SUVs or saloons, depending on the model).


You do not get to assume safety in a safety-critical system, period. The burden of proof lies with the entity trying to release a dangerous product to prove that it is safe, not to demand that everybody else prove it is unsafe. The entire argument that there is no evidence it is unsafe and thus it is okay is wrong to its very foundation. It is such a bad argument that a licensed engineer would stand a good chance of literally losing their license for advocating such a safety-regressive position. Whoever you heard promulgating such inane logic is completely unqualified to talk about safety-critical systems engineering and should be completely ignored.


First, thanks for being an asshole. Second, the product has been released, I did not engineer it, and people online are arguing about it. I'm interested in whether the criticism I'm seeing is valid. Is your idea that anyone who claims something is unsafe, for any reason, should be immediately trusted? If not, how do we have such a discussion? How do we weed out poor arguments? I gave a first-principles argument for why the system might be about as safe as a commonly accepted feature, and also mentioned corroborating evidence of its safety. You are welcome to disagree with it, but so far, like most people, you've just come in hotheaded and not provided any substantive argument.


It is assumed unsafe. It must be proven safe. This can be done via an engineering, statistical, or other appropriate analysis performed by entities with full access to the design specifications, usage information, etc., who are competent to do the analysis and who have no conflict of interest. This has not been done for Tesla FSD, therefore it must be assumed to be unsafe. As Tesla deliberately misclassifies its Full Self Driving beta testing program to avoid government reporting requirements and does not release raw data to any unbiased non-government entity or research organization, it is literally impossible for any external analysis to prove that Tesla FSD is safe. The most a third party can do is affirmatively prove that it is unsafe, as you need millions of times less information to affirmatively prove it is dangerous relative to existing systems.

I am not joking when I say millions of times less information. The aggregate motor vehicle fatality rate in the US is ~1.3 per 100,000,000 miles. To even have a chance of proving safety you would need to analyze on the order of 1,000,000,000 miles exhaustively. In contrast, to demonstrate it is not safe you would only need a list of maybe 20 fatalities over those same billion miles to affirmatively prove it is unsafe. A list of 20 names versus the raw data for 1,000,000,000 miles is at least a factor of millions in information required, and qualitatively different to achieve. Proving safety is enormously difficult and basically requires direct access to the raw information. Proving unsafety is extremely easy and can be relatively easily demonstrated by third parties when dealing with safety-critical systems.
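As a rough sketch of that asymmetry (my own illustration, not anything Tesla or a regulator publishes, and it assumes fatalities follow a simple Poisson process measured against the ~1.3 per 100M-mile baseline): exact Poisson confidence bounds show how the exposure needed to demonstrate "at or below the baseline" grows with every fatality actually observed, while a short list of fatalities over the same mileage is already enough to reject the claim.

    # Rough sketch: exact Poisson confidence bounds on a fatality rate,
    # illustrating why proving "at least as safe as the human baseline"
    # needs vastly more exposure data than showing the opposite.
    from scipy.stats import chi2

    BASELINE = 1.3e-8  # ~1.3 fatalities per 100,000,000 miles (US aggregate)

    def upper_bound(fatalities: int, miles: float, conf: float = 0.95) -> float:
        """One-sided upper confidence bound on the per-mile fatality rate."""
        return chi2.ppf(conf, 2 * (fatalities + 1)) / (2 * miles)

    def lower_bound(fatalities: int, miles: float, conf: float = 0.95) -> float:
        """One-sided lower confidence bound on the per-mile fatality rate."""
        return chi2.ppf(1 - conf, 2 * fatalities) / (2 * miles)

    # Exposure needed before the upper bound dips below the baseline,
    # given how many fatalities are actually observed in that exposure.
    for k in (0, 5, 10):
        miles_needed = chi2.ppf(0.95, 2 * (k + 1)) / (2 * BASELINE)
        print(f"{k:2d} fatalities observed -> ~{miles_needed / 1e6:,.0f}M miles to claim <= baseline")

    # Falsifying the claim is far cheaper: ~20 fatalities over a billion
    # miles already pushes the lower bound above the baseline.
    lb = lower_bound(fatalities=20, miles=1e9)
    print(f"20 fatalities in 1e9 miles -> lower bound {lb * 1e8:.2f} per 100M miles (baseline 1.30)")

Under those assumptions, zero observed fatalities would require roughly 230 million miles of exposure, and ten observed fatalities would already push the requirement to about 1.3 billion miles, which is roughly where a billion-mile figure lands.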

Safety engineering is not some sort of new idea that we need to reinvent by the seat of our pants from first principles; it is a mature methodology regularly deployed in aerospace, medical, civil, and automotive engineering applications. Assumptions of safety are fundamentally incompatible with safety-critical engineering. This is not a fringe opinion; it is literally a core concept at the foundations of safety-critical engineering, and not understanding it demonstrates a fundamental misunderstanding of modern safety. It is like a geologist arguing the Earth is flat: anybody who suggests such an idea is so fundamentally divorced from the ideas of the field that their knowledge not only fails to reach the level of an expert, it fails to reach the level of even a basic practitioner.


I don’t think we are talking about the same thing and I’m not interested in discussing it further with you.


Look at what is happening at this point: yet another failure mode, not seen in the links I posted before.

https://youtu.be/WowhH_Xry9s?t=272

See how it confuses the speed limit for trucks with the one for cars and brakes suddenly. More precisely, at 5:12.

So if you have a car driving behind you, not expecting the sudden braking, this could cause an accident. They are playing with people's lives.

Ethics matter in Software Engineering.


FYI: this is not unique to Tesla; I get plenty of sudden slowdowns when riding in a Cruise.



