> Self-driving cars can share their experience perfectly,

I'm not a machine learning expert, but I'm not convinced this assertion is well grounded. If you have two ML systems, I don't know of a simple, general, reliable way to transfer the "knowledge" stored in one to the other without disrupting the "knowledge" already within the other.
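The closest thing I'm aware of is distillation -- training the new model to imitate the old one's outputs -- and even that is lossy and task-specific rather than a perfect copy. A toy sketch (PyTorch, with trivial linear models standing in for real driving policies):

    import torch
    import torch.nn.functional as F

    # Toy "knowledge transfer" by distillation: the student is trained to
    # match the teacher's output distribution. This approximates, rather
    # than copies, whatever the teacher has learned.
    teacher = torch.nn.Linear(128, 10)   # stand-in for the experienced model
    student = torch.nn.Linear(128, 10)   # stand-in for the new model
    opt = torch.optim.SGD(student.parameters(), lr=0.01)

    x = torch.randn(32, 128)             # a batch of made-up sensor features
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = F.kl_div(
        F.log_softmax(student(x), dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    opt.zero_grad()
    loss.backward()
    opt.step()

And even this only transfers behavior on the inputs you happen to feed it, which is part of why "perfectly" seems like a stretch.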

If you mean just copying the video feed, sure, but that's only one input variable, and there's no guarantee of a linear relationship between hours of video feed and driving quality. Humans seem to hit an asymptotic floor in collision probability even as they accumulate experience; self-driving cars might not hit the same floor, but that isn't guaranteed a priori either.
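To make "asymptotic" concrete: if per-mile collision risk followed something like floor + k / hours^a (the numbers below are made up purely for illustration), each additional hour of feed would buy less and less:

    # Illustrative only: a saturating learning curve, risk = floor + k / hours**a.
    floor, k, a = 1e-6, 1e-3, 0.7
    for hours in (10, 100, 1_000, 10_000, 100_000):
        risk = floor + k / hours ** a
        print(f"{hours:>7} hours -> collision risk per mile ~ {risk:.2e}")

Whether self-driving stacks follow a curve like this, or keep improving past the human floor, is exactly the open empirical question.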

You're right, it's not perfect, and the improvement might be sub-linear, but a new model can practice on its predecessors' input in a way a human can't. (There are plenty of sci-fi stories where they found a way to do this for humans as well, so it's certainly conceivable, but it seems a long way off, whereas "feed all existing training data into the new model" is something we have now.)
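Concretely, "feed all existing training data into the new model" just means pooling every predecessor's logs into one training set. A rough sketch (the file layout, format, and names here are all made up):

    from pathlib import Path
    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    # Hypothetical: pool every previous vehicle's logged (observation, action)
    # pairs into a single training set for the next model generation.
    def load_fleet_logs(log_dir):
        datasets = []
        for log_file in sorted(Path(log_dir).glob("*.pt")):  # one file per vehicle (assumed layout)
            obs, actions = torch.load(log_file)               # tensors saved by the logger (assumed format)
            datasets.append(TensorDataset(obs, actions))
        return ConcatDataset(datasets)

    loader = DataLoader(load_fleet_logs("fleet_logs/"), batch_size=256, shuffle=True)
    # ...then train the new policy on `loader`, as if it had driven all those miles itself.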

You're pretty much guaranteed that self-driving technology "skills" will collectively improve over time, at some positive rate, whereas we don't expect "Gen Z" to be more skilled drivers than previous generations.

This is all entirely skill-focused, though. We're not guaranteed that self-driving cars will actually get safer unless the incentives favor safety over speed and cost. Similarly, despite the stagnation of human driving skill, I expect safety to improve for human drivers too, for non-skill reasons -- specifically driver-assistance technology and a shift toward just getting a Lyft when you're impaired.


One (probably not so simple) way could be to build up a shared database of input-decision-outcome records. When a car is faced with some unknown object on a particular street, or its own risk assessment is high (e.g. a heuristic like "I am planning to decelerate quickly"), it could search the database for similar input patterns (e.g. color, shape, gait) or similar decision plans, and bias its own decision accordingly to avoid an undesirable outcome. That decision would then get posted to the database as well, linked to its parent. Previous outcomes in the database could be assessed by humans within some kind of legal framework, to help the cars distinguish desirable from undesirable outcomes.
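A toy version of that lookup, assuming each record has already been reduced to a feature vector, a decision label, and a human-reviewed outcome score (every name and number here is made up):

    import numpy as np

    # Toy shared experience database: (input features, decision, outcome) records,
    # queried by cosine similarity to the current situation.
    class ExperienceDB:
        def __init__(self):
            self.features = []   # e.g. encoded color/shape/gait of the unknown object
            self.decisions = []  # e.g. "hard_brake", "slow_and_steer_left"
            self.outcomes = []   # e.g. +1 desirable, -1 undesirable (human/legal review)

        def add(self, feat, decision, outcome):
            self.features.append(np.asarray(feat, dtype=float))
            self.decisions.append(decision)
            self.outcomes.append(outcome)

        def similar(self, feat, k=5):
            feats = np.stack(self.features)
            q = np.asarray(feat, dtype=float)
            sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q) + 1e-9)
            top = np.argsort(-sims)[:k]
            return [(self.decisions[i], self.outcomes[i]) for i in top]

    db = ExperienceDB()
    db.add([0.9, 0.1, 0.3], "hard_brake", +1)
    db.add([0.8, 0.2, 0.4], "maintain_speed", -1)
    # A car facing an unknown object with high self-assessed risk:
    print(db.similar([0.85, 0.15, 0.35], k=2))
    # -> bias the planner toward decisions whose past outcomes were desirable,
    #    then post this new (input, decision, outcome) record back, linked to its parent.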
