
In a word, context.

Human brains are just context-gathering, pattern-matching machines. We can't even count without all kinds of context automatically bubbling up into our consciousness.

When we see other drivers, we pick up all kinds of subtle hints about their behavior. We can easily tell if they have an unsecured load (not just whether it's currently shaking). We can see the way someone is looking at the road and know whether they're about to go, whether they're hesitating, whether they're high. We can know where the road is even when it's snowed over, because we remember the ditch alongside it being roughly 15 feet out.

There are zillions of cases like this in the long tail. Self-driving cars leapt forward by being able to answer the vastly important contextual question of "what is this in my sensor?" -- but to do the rest, it's hard to overstate how much a computer would have to "be human": to apply past experience, psychology, and complicated inference to answer "why is this in my sensor, and what can I do about it?"

In the end, self driving cars will thrive, just in an environment that poses these kinds of questions as little as possible.

Or we'll actually get a breakthrough and be able to create NNs that let machines learn and apply a vast breadth of context to sensory input. Teach them vast amounts of unrelated things, just as every human learns in the 16 years before they drive (and then some), and we'd effectively be putting extra-sensory humans on the road.



