
Shouldn't it be possible to train for these scenarios using imitation learning and expert demonstrations? For example, Tesla seems to save replays of when its vehicles would have behaved significantly differently compared to how the driver actually behaved, and this data is supposed to be quite useful.
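If that's roughly how it works, the trigger is just a trajectory comparison. Here's a minimal sketch of what such a disagreement trigger could look like; the function names, array shapes, and threshold below are assumptions for illustration, not anything Tesla has published:

    import numpy as np

    # Hypothetical "disagreement trigger": compare the trajectory the onboard
    # planner would have driven against what the human actually drove, and
    # flag the clip for upload when they diverge by more than a threshold.
    DIVERGENCE_THRESHOLD_M = 2.0  # assumed tolerance in meters

    def max_divergence(planned_xy: np.ndarray, driven_xy: np.ndarray) -> float:
        """Largest pointwise gap (meters) between two time-aligned (T, 2) trajectories."""
        return float(np.max(np.linalg.norm(planned_xy - driven_xy, axis=1)))

    def should_save_replay(planned_xy: np.ndarray, driven_xy: np.ndarray) -> bool:
        """Save a replay when the planner disagreed strongly with the driver."""
        return max_divergence(planned_xy, driven_xy) > DIVERGENCE_THRESHOLD_M

    # Toy usage: planner keeps speed and stays straight, driver slows and pulls right.
    t = np.linspace(0, 5, 50)
    planned = np.stack([15.0 * t, np.zeros_like(t)], axis=1)  # constant 15 m/s
    driven = np.stack([12.0 * t, 0.3 * t], axis=1)            # slower, drifting right
    print(should_save_replay(planned, driven))                # True -> save this clip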

Is this type of crowdsourced driving data a crucial part of achieving Level 4+ self-driving? If so, it seems that driving around the same six city blocks of SF or cruising down Central Expressway in Mountain View is going to produce diminishing returns in terms of measurable progress.




> For example, Tesla seems to save replays of when its vehicles would have behaved significantly differently compared to how the driver actually behaved, and this data is supposed to be quite useful.

I don't think such a system would catch a false negative like the above, where the human would slow down cautiously but the self-driving system would do nothing. That situation is indistinguishable from a human slowing down to read house numbers.

To recognize the problem, the system would need a full model of "what the car would be doing if not for the human input" in order to find a later point of alarming divergence.
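In other words, the trigger can't just compare instantaneous control inputs; it has to roll the policy forward without the human and look for divergence that only becomes alarming later. A rough sketch of that counterfactual check, where policy_step, simulate_step, and the alarm threshold are hypothetical stand-ins:

    import numpy as np

    ALARM_THRESHOLD_M = 5.0  # assumed: a gap this large suggests a real disagreement

    def counterfactual_rollout(policy_step, simulate_step, start_state, horizon: int):
        """Trajectory the car would have driven with no human input at all."""
        state, states = start_state, []
        for _ in range(horizon):
            action = policy_step(state)           # what the system would have done
            state = simulate_step(state, action)  # advance a world model, not the real car
            states.append(state)
        return np.array(states)

    def later_divergence(counterfactual_xy: np.ndarray, human_xy: np.ndarray) -> bool:
        """True if the two futures eventually drift apart past the alarm threshold."""
        gaps = np.linalg.norm(counterfactual_xy - human_xy, axis=1)
        return bool(np.max(gaps) > ALARM_THRESHOLD_M)

    # Toy usage: a "policy" that keeps constant velocity vs. a human who slowed down.
    policy = lambda s: np.array([15.0, 0.0])  # desired velocity (m/s)
    sim = lambda s, a: s + 0.1 * a            # 0.1 s Euler step on position
    cf = counterfactual_rollout(policy, sim, np.zeros(2), horizon=50)
    human = np.stack([8.0 * np.linspace(0.1, 5, 50), np.zeros(50)], axis=1)  # slowed to 8 m/s
    print(later_divergence(cf, human))  # True: the gap grows to an alarming size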



