> For example, Tesla seems to save replays of when its vehicles would have behaved significantly differently compared to how the driver actually behaved, and this data is supposed to be quite useful.
I don't think such a system would catch a false negative like the above, where the human would slow down cautiously but the self-driving system would do nothing. That situation is indistinguishable from a human slowing down to read house numbers.
To recognize the problem, the system would need a full counterfactual model of what the car would be doing absent any human input, and would have to roll that model forward to find a later point of alarming divergence.
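To make the point concrete, here is a minimal toy sketch of such a counterfactual check in a 1-D world. Everything in it (the policy, the obstacle, the thresholds, the dynamics) is a made-up illustration of the idea, not anything Tesla's system actually does:

```python
# Toy counterfactual rollout: the human has braked for a hazard the
# system never perceived. Simulating the system's no-input trajectory
# forward reveals the danger that an instantaneous comparison misses.
# All values and functions here are hypothetical illustrations.

def rollout(position, speed, policy, steps=10, dt=1.0):
    """Simulate forward using only the system's policy (human input ignored)."""
    states = []
    for _ in range(steps):
        speed = max(0.0, speed + policy(position, speed) * dt)
        position += speed * dt
        states.append((position, speed))
    return states

def system_policy(position, speed):
    # The system sees nothing wrong, so it holds speed (zero acceleration).
    return 0.0

OBSTACLE_AT = 100.0    # metres ahead: the hazard only the human noticed
SAFETY_MARGIN = 5.0    # metres

def alarming(states):
    """Does the counterfactual trajectory get dangerously close while moving?"""
    return any(OBSTACLE_AT - pos < SAFETY_MARGIN and spd > 0
               for pos, spd in states)

# Human has slowed to 15 m/s at position 40; the system would just keep going.
counterfactual = rollout(position=40.0, speed=15.0, policy=system_policy)
print(alarming(counterfactual))  # → True: the no-input path reaches the hazard
```

At the moment the human brakes, nothing looks alarming; only the rolled-out future of the system's do-nothing trajectory exposes the divergence, which is why a simple "did the planned and actual behavior differ right now" replay trigger cannot separate this case from a driver slowing to read house numbers.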