Hacker News

The null experiment is the human driver of that particular Tesla, not an average of all drivers



So if the crash didn't involve "the human driver of that particular Tesla" but instead was an identical incident with a different Tesla driver, we wouldn't be having this conversation? Because that seems highly improbable.

This particular driver isn't what makes the story special or newsworthy. The nature of the incident is. Our choice of statistical cohort should reflect this data selection effect, i.e. "all Autopilot miles driven," not "all miles driven by <person's name>."


I disagree, because the relevant comparison is against what this individual driver would have done without a feature advertised as Autopilot.

I'm a partner in an auto insurance company, and statistically speaking, most of our policyholders with no-accident records will keep them that way.

If this driver had a one-in-a-billion-miles chance of crashing, and Autopilot (with an inattentive driver, as they all tend to be) had a one-in-a-million-miles chance, then Autopilot is decreasing safety.

At a certain level of safety, Autopilot would make everyone safer, but it's definitely not there yet.
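A back-of-the-envelope sketch of the argument above, using the comment's hypothetical per-mile rates (one crash per billion miles for this driver unassisted, one per million miles on Autopilot; both numbers are illustrative assumptions, not measured statistics):

```python
# Hypothetical per-mile crash probabilities from the comment above;
# neither number is a measured statistic.
driver_rate = 1 / 1_000_000_000   # this driver, unassisted
autopilot_rate = 1 / 1_000_000    # inattentive driver on Autopilot

miles = 100_000  # roughly a car's lifetime mileage

expected_driver = driver_rate * miles        # 0.0001 expected crashes
expected_autopilot = autopilot_rate * miles  # 0.1 expected crashes

print(f"Expected crashes over {miles:,} miles:")
print(f"  unassisted driver: {expected_driver:.4f}")
print(f"  on Autopilot:      {expected_autopilot:.4f}")
# Under these assumed rates, Autopilot multiplies this particular
# driver's expected crash count by:
print(f"  ratio: {autopilot_rate / driver_rate:.0f}x")
```

Under these assumptions Autopilot makes this driver 1000x more likely to crash, even if it would still be safer than a below-average human driver.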


How do you know that this individual driver has a one in a billion miles chance of crashing? By statistics from other human drivers I assume?


I didn't say that I knew. I was only disputing the relevance of defending individual Tesla crashes by saying that Teslas are, on average, less likely to crash when on Autopilot. You can't compare Tesla drivers on Autopilot to all drivers -- you have to compare them to themselves, or you have an extraneous variable.

Fatal human accidents are on the order of 1-10 every billion miles of driving in the US, yes. Tesla Autopilot crashes seem to be much, much, much more common.



