
The Uber did recognise Elaine Herzberg as a pedestrian in the last 2 seconds before hitting her, after failing to do so for the previous 4 seconds [1]. It could have applied its brakes, and she might have had a chance to survive (though probably not unscathed).

However, the car's auto-braking had been disabled because it was considered too conservative. So the only agent who could have reacted in time was the safety driver, who, as we know, was on her phone.

Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.

_____________

[1] https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg#Cause...




> Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.

The agency to react was removed because its reactions were crap. If the AI brakes on false positives all the time, and the only way to fix that is to disable its ability to react, then I would say that indeed, the car's AI is at fault, albeit indirectly.


As I said, _in this case_ the car's AI was not at fault.

I don't know how good or bad Uber's car AI is. If by "crap" you mean that image recognition in general is brittle when exposed to real-world conditions, as opposed to the controlled experimental conditions in published results, then I agree.





