
The problem is it's called "full self driving" and it runs red lights.


Just like the rest of the drivers out there, you mean. Just think logically for a second: if they ran red lights all the time, there would be nonstop press about just that and people returning the cars. There's not, though, which is enough evidence to conclude these are edge cases. Plenty of drivers are drunk and/or high too; maybe Autopilot prevents those drivers from killing others.


We evolved to intuit other humans' intentions and potential actions. Not so with robots, which makes public trust much more difficult despite the statistics. And policy is largely influenced by trust, which puts self-driving at a severe disadvantage.


You think you can out-intuit a drunk driver? That’s some serious hubris.


I happened upon a swerving drunk-driving police car once. A Tesla would have continued on, following the rules of the road, trying to pass the swerving drunk-driving police car, and likely getting into an accident with it. I was smarter: I stayed the fuck away, far far back from it, and changed my course to avoid it.


Yours is a bit of a strawman, considering drunk driving is illegal. So the appropriate comparison is to unregulated (illegal) buggy software operation. Do I feel more comfortable intuiting the intentions of a drunk driver compared to buggy software? Yes. Similarly, as the other poster said, if I see a weaving car I tend to stay away from it because I can infer additional erratic behavior.

That’s also why people tend to shy away from people with mental health issues. We don’t have a good theory of mind to intuit their behavior.


It's not a strawman. Having a conversation can also distract you. Being elderly can be a risk. Being tired can be a huge risk. Yet none of these things are illegal. I could have just as easily used one of those examples instead of drunk driving, and my point would still stand against your criticism.

Fact is, humans in general are imperfect operators. The AI driver only has to be an alright driver, not a perfect one, to route around the long tail of drivers that cause most of the fatalities.


If those are the stronger examples, then you should have gone with them. It’s more in line with the HN guidelines than taking the weaker interpretation.

I think you missed my point. Because software is more opaque, it has a much higher threshold before the public feels comfortable with it. My claim is that it will have to be an outstanding driver, not just an “alright” one, before autonomous driving is given the reins en masse. In addition, I don’t think we know much about the true distribution of risk, so claims about the long tail are undefined and somewhat meaningless. We don’t have a codified “edge” that defines what you call edge cases. Both software and people are imperfect. Given the opaqueness of software, I still maintain that people are more comfortable with human drivers due to the evolved theory of mind. Do you think more people would prefer their non-seatbelted toddler to be in an average autonomous vehicle by themselves or with a human driver, given the current state of the art?

But more to my point, humans are also irrational, so statistical arguments don’t translate well to policy. Just look at how many people (and professionals) trade unleveraged stocks when an index fund is a better statistical bet. Your point hinges on humans being modeled as rational actors, and my point is that this is a bad assumption. It’s a sociological problem as much as an engineering one.


> it runs red lights

Fixing that would require "full self stopping". Coming soon[1].

[1] ... for some value of "soon", that is.



