The detection at 6 seconds was just of an object though, not an object moving into the car's path. You couldn't drive a car if you had to constantly brake because objects (such as people standing by the road) were being detected.
It's not clear at what point, between the detection 6 seconds before impact and the determination 1.3 seconds before impact that emergency braking was necessary, the car ascertained that a collision would occur.
Was there any other determination in between, and when? What I'd like to see is Uber's modelling of the woman's trajectory and the likelihood of collision across that 6-second window. That's completely left unsaid.
The average braking distance of a car is about 24 m at 40 mph, which is approximately the distance between the woman and the car 1.3 seconds out. So perhaps the 1.3 s figure wasn't the first moment the car determined braking was necessary, but rather the last moment the car could have braked to prevent a substantial collision. I want to know the first moment the car determined braking was necessary at all. It's likely not 6 seconds out, but it's also likely not 1.3 seconds. It seems this was entirely preventable, or at least the collision impact could have been severely mitigated, had there been a braking and/or warning system in place.
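As a rough sanity check of those numbers (my own back-of-the-envelope figures, assuming ~40 mph and a typical dry-road deceleration of about 6.5 m/s²; nothing here comes from the report):

```python
# Back-of-the-envelope check: braking distance at ~40 mph vs. the distance
# covered in the final 1.3 seconds. Assumed figures, not from the NTSB report.
v = 40 * 0.44704           # 40 mph in m/s (~17.9 m/s)
a = 6.5                    # assumed typical dry-road deceleration, m/s^2

braking_distance = v**2 / (2 * a)   # ~24.6 m
distance_in_final_1_3s = v * 1.3    # ~23.2 m covered if the car never brakes

print(f"braking distance  ~ {braking_distance:.1f} m")
print(f"distance in 1.3 s ~ {distance_in_final_1_3s:.1f} m")
```

The two figures come out nearly identical, which is what makes 1.3 seconds look like the last possible moment to brake rather than the first moment braking was warranted.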
Shutting off emergency braking on literally the only driving agent tasked with paying full attention is inexcusable. But that's what they did. To me that's murder. They used to have two operators in the car: one tagging circumstantial data, the other keeping eyes on the road at all times and overriding the car when necessary. Either keep that arrangement, shut off the car's emergency braking, and put a warning system in place for the 'driver', or don't shut off emergency braking at all. Instead they put a single person in the car, tasked with work that kept her eyes off the road half of the time, and shut off braking for the AI. That's insane.
> You couldn't drive a car if you had to constantly brake because objects (such as people standing by the road) were being detected.
Yes, you could -- that's how I drive. Do you not? If I detect a mobile object that might be moving into my path, I slow down to give myself time to react until I am reasonably certain of safety. When doing so I take into account my situation-specific knowledge -- have I made eye contact with the pedestrian and do they know I'm coming? Is the dog on a leash and is the owner being attentive? Does the cyclist seem aware of my presence?
I would expect no less from anyone licensed to drive a car, be they human or software.
I didn't say you can't drive a car without being cautious.
I said you can't drive it if you have to constantly brake the moment you detect an object, irrespective of what the object is doing (e.g. moving into or away from the driving path).
i.e., just because an object was detected 6 seconds before impact did not mean the car ought to have started braking at that moment. It could be that the object was 200 feet away and moving away from the car's driving path, 6 seconds before impact. It'd be absolutely ridiculous to brake in that situation.
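To make the distinction concrete, here's a purely hypothetical sketch (made-up names, a straight-line extrapolation, and arbitrary thresholds; this has nothing to do with Uber's actual stack) of the difference between "object detected, therefore brake" and "object's predicted path intersects mine soon, therefore brake":

```python
# Hypothetical sketch of a trajectory-based braking decision.
# None of this reflects Uber's actual software; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    lateral_offset_m: float    # distance from the car's driving path (+ = away)
    lateral_speed_mps: float   # rate of change of that offset (- = moving toward path)
    longitudinal_gap_m: float  # distance ahead of the car along the path

def should_brake(obj: TrackedObject, car_speed_mps: float,
                 horizon_s: float = 3.0, path_half_width_m: float = 1.5) -> bool:
    """Brake only if a straight-line extrapolation puts the object inside the
    driving corridor before the car reaches it within the planning horizon."""
    time_to_reach = obj.longitudinal_gap_m / max(car_speed_mps, 0.1)
    if time_to_reach > horizon_s:
        return False  # too far ahead to act on yet
    predicted_offset = obj.lateral_offset_m + obj.lateral_speed_mps * time_to_reach
    return abs(predicted_offset) < path_half_width_m

# An object 200 ft (~61 m) away and drifting away from the path: no brake.
print(should_brake(TrackedObject(2.0, +0.5, 61.0), car_speed_mps=17.9))  # False
# The same object closer in and drifting steadily toward the path: brake.
print(should_brake(TrackedObject(2.0, -1.5, 20.0), car_speed_mps=17.9))  # True
```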
We have no information about this context, e.g. the car's data or determinations within the 6-second window. We only know it detected an object 6 seconds before impact.
It appears the person I was replying to implied 'the braking distance was 180 feet, but the person was 380 feet away, thus Uber could have prevented killing this woman had it not shut off the brakes'. In reality, the 6-second figure isn't relevant on its own. What is relevant is the context that would have allowed a reasonable driver/AI to determine, at a particular point in time, that the car should have slowed or braked. And we don't have that information yet. That's what I'm interested in.
I don't think I'm misinterpreting, just disagreeing about the level of caution. One point is that humans are quite good at immediately recognizing objects and evaluating threat level (at least when attentive). So a human is rarely in a scenario of "there's something up ahead and I have no idea what it is or where it's going." But if they were, I don't think it's at all ridiculous to slow down until determining those things. If software is in that scenario, I absolutely expect it to slow down until it can determine with high confidence that no object ahead is a likely threat. (edit) For instance, an attitude like "in my training data, unidentified objects rarely wander into the road" is not good enough for me; I want to hold software (and humans) to a much safer standard.
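To put that expectation concretely (again a purely hypothetical sketch with invented names and numbers, not any real system's logic): speed should be capped by how confidently every object ahead has been ruled out as a threat, rather than by how rarely unidentified objects caused trouble in training data:

```python
# Hypothetical policy sketch: cap speed by the least-understood object ahead.
# Invented names and numbers; not any real system's logic.
def speed_cap_mps(object_safety_confidences: list[float],
                  normal_speed_mps: float = 17.9,
                  crawl_speed_mps: float = 4.0) -> float:
    """object_safety_confidences: for each detected object, the confidence
    (0..1) that it is NOT a threat to the driving path."""
    if not object_safety_confidences:
        return normal_speed_mps
    worst = min(object_safety_confidences)  # least certain object dominates
    # Scale linearly between a crawl and normal speed as certainty improves.
    return crawl_speed_mps + worst * (normal_speed_mps - crawl_speed_mps)

print(speed_cap_mps([0.99, 0.95]))  # ~17.2 m/s: everything well understood
print(speed_cap_mps([0.99, 0.40]))  # ~9.6 m/s: one poorly understood object, so slow down
```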
Humans are frequently in this scenario, especially at night. For example, a reflection from a rural roadside mailbox's prism looks similar to the eyes of a deer, and shredded truck tire treads look similar to chunks of automotive bodywork debris. This doesn't invalidate your point about slowing down.
We're asking a lot from this software (for good reasons), but humans commit similar leaps of faith of varying severity on the roads daily -- failing to yield, failing to maintain following distance, assuming the drivers immediately adjacent to you will keep driving safely and carefully -- and only a small subset of these situations results in accidents. We're expecting an algorithm coded by humans to perform better than a complicated bioelectric system we barely understand.
Waymo has committed to thoroughly understanding its environment, which is why its cars drive in a manner that bears no resemblance to how humans actually drive. We as a society will eventually have to reconcile the implications of that disconnect.
> reflection from a rural roadside mailbox's prism looks similar to the eyes of a deer
Deer hits are a major cause of fatalities out in the country. If you're driving at night in deer country and you aren't keeping your eyes wide open, you are going to have an unhappy experience at some point. Their instincts are essentially the exact opposite of what they should do when encountering a car: they will stay in the middle of the road, and they will jump in front of you if startled.