
Sort of a tangent from the thread: I get the point about "good enough" at the moment, but I wonder whether car AI really does need to perform much more safely than any human driver before truly autonomous vehicles should be allowed to see widespread adoption. I'm thinking about the difficult problems re: legal and moral responsibility for human-written/guided/trained programs like car AI, as well as the fact that, unlike in Go, real people's very lives are at stake in the program's successful performance. We already seem to have met the requirements for a research project (which is still unbelievable to me!), and I wonder how long the last leg will take.



AI cars could be safer now in most cases by simply not doing dumb illegal stuff.

The real problem is dealing with all the edge cases. Consider one: you pull up to a red light, and a guy with a gun starts running at your car in a way you perceive as threatening.

As a human, you're most likely going to step on the gas and get the hell out of there, saving yourself at some risk of causing a traffic accident.

The car will just sit there until the light turns green, while the windows get shot out and you get dragged out of the car.



