Hacker News

Self-driving tech muggle here.

Could someone explain why any average Joe with minimal intellect can drive a car using only a pair of eyes, but it's difficult to build a fully self-driving car even with 8+ cameras, GPS, LIDAR, and other sensors humans don't have?



In a word, context.

Human brains are just context-gathering, pattern-matching machines. We can't even count without all kinds of context being automatically bubbled up into our consciousness.

When we see other drivers, we can pick up all kinds of subtle hints about their behavior. We can easily tell if they have an unsecured load (not just whether it's currently shaking). We can see the way someone is looking at the road to know if they're going to go, if they're hesitating, if they're high. We can tell where the road is even when it's snowed over, because we remember the ditch running alongside it about 15 feet out.

There are zillions of cases like this in the long tail. Self-driving cars leapt forward by being able to answer the vastly important contextual question "what is this in my sensor?" -- but to do the rest, it's hard to overstate how much a computer would have to "be human": to apply past experience, psychology, and complicated inferences to "why is this in my sensor? what can I do about it?"

In the end, self-driving cars will thrive, but only in environments that pose these kinds of questions as rarely as possible.

Or we'll actually get a breakthrough and be able to create NNs that allow machines to learn and apply a vast breadth of learned context to sensory input. Teach them vast amounts of unrelated things just as every human learns in the 16 years before they drive (and then some), and then effectively put extra-sensory humans on the road.


We're using a lot of sensory data to compensate for the fact that the "AI" driving the car is really dumb. It lacks any meta-level understanding of the problem domain and is essentially a stimulus-response system similar to a very simple insect or even a single-celled organism.

We can drive a car with much more limited sensory input because we have a very high level cognitive model of what we are doing. Better model equals better decision making on far less information.

Slight tangent:

Having kids was an interesting chance to observe how far we are from "real AI." You can show a 3 year old one example of an object and after a few seconds they can subsequently identify that object in any lighting condition or from any angle. They can identify it with one eye closed. They can identify variations of it, pictures of it, and line drawings of it. One example, seconds. The human brain absolutely destroys any AI/ML we have.


not quite true. when you show a kid something new you are literally exposing the kid to the thing thousands/millions of times. different angles, rotation, heck even if you’re just looking without touching from a fixed position your eyes will sample it a lot (saccades of your eye). you also connect/map the object to your prior knowledge. with an AI/ML model you’re starting from scratch. what makes learning “easy” for humans is the incremental nature of it + millions of years of evolution

also: humans can drive a car better than a machine because the roads are designed for humans and the other drivers are humans. the machines would destroy us at driving if the roads were optimized for machines.


Because it's not just the sensors, there's a lot of purely human interaction going on on the road. Looking at the other driver at the intersection and trying to understand what to expect from them; same for someone crossing the road in front of you - is the guy drunk? Eye contact - nope, he looks fine, he will give way. Etc. etc

Come try to drive in Paris or Rome at peak hours. No way on Earth AI can handle this kind of traffic.

I always say autonomous cars will require infrastructural change, similar to the one that happened when we transitioned from horses to cars. How exactly, I don't know yet, but with the current infrastructure Level 5 seems impossible.


Haha. Very true. I was in Scilly last year and I said to my sister there is no way AI can drive on these roads.


Because we really don't know how to emulate "thinking." Some overconfident people think they can, but it's a much harder problem than making an ACID-compliant relational database.

Also, because we tend to forget that technological innovation that takes a few decades, or a century, is extremely fast in a historical context.

Our descendants aren't going to care whether we got self-driving cars in 2020, 2080, or 2120.


Because any average Joe with minimal intellect nevertheless has an intellect. It's impossible for a computer to actually understand or comprehend anything. Pattern matching, what computers are good at, has its limits.


Pointing a camera at something isn't vision. Making a machine recognize what it's looking at, and formulate a reasoned response to it, is wickedly hard.


Because driving a car is essentially analogous to full artificial intelligence.

We don't know how to do that. Like at all.


hmm. i have to disagree. driving a car is a walk in the park compared to AGI; they're not even in the same ballpark. for driving we at least have an idea of how to make it happen: better tech, more training data, and maybe altered infrastructure to solve some edge cases. AGI? we're guessing at this point (at least publicly)


>driving a car is a walk in the park

In the average situation, not the edge-case. The way you should think about it is, "If I had a black-box oracle that can drive a car exactly like a human, could I use that to simulate an artificial general intelligence?"

The answer is probably yes. For example, in order to "ask" the AGI a yes-or-no question X, you could contrive that the car find itself at a fork in the road with a roadsign that says "If the answer to X is 'yes', then the road to the right is closed. Otherwise, the road to the left is closed."


what does "drive a car exactly like a human" mean? i am going to assert that you don't need AGI for self-driving cars. in your example: that's not how driving works. the driver - even a human one - is not expected to answer random questions as they are driving.


>the driver - even a human one - is not expected to answer random questions as they are driving.

In the same way, the C++ compiler was not expected to be able to emulate arbitrary programs (i.e. to be Turing complete), but some ingenious people found a way to use it to do just that. The way they did it was by writing some extremely unusual and edge-casey code, but you can't just wave that aside and say "C++ isn't really Turing complete, because the proof that it's Turing complete involves C++ code that nobody would really write!"


Ahhh, a Turing car; if you can make a Turing car, you can generalize it to make an AGI. I like it!


A lot of those sensors and cameras are less accurate during the night or in weather or with sun glare.

I think one of the biggest issues is that people will not accept self-driving cars that are merely as safe as human drivers. They want perfect driving. If a human driver hits a pedestrian who jumps out from between two cars unexpectedly, it might get written off as an accident, and the insurance company may not even have to pay if it was clearly the pedestrian's fault. If a "robot" car hits someone, there is going to be a suit for $500M because "Google killed my son!!!!!!". This also leads to people trying to get non-lethally hit by robot cars just for the money. The "robot" car has a big trolley problem that isn't easily solvable.


> The "robot" car has a big trolley problem that isn't easily solvable.

I really think this is overblown. it's not hard to think of a solution to these issues because we already have it in traffic laws for humans. there are rules for accessing the right of way. if you follow the rules and make a good-faith effort to account for people who don't, it is pretty hard to be found at fault for an accident, or even "cause" one to begin with. rear-ending someone starts with too close a following distance (or too high a speed) for the conditions. T-bones and head-on collisions can only happen when one or more parties are moving without the right of way. pedestrian strikes on the street can only happen when the pedestrian or the driver is not observing the other's legal right of way.

sideswipes are kinda ambiguous. it's possible for two vehicles to check that there is space in the middle lane simultaneously, then merge into each other. I don't think it's legally required, but I never merge into the middle lane when there is a car to the far left/right.


> A lot of those sensors and cameras are less accurate during the night or in weather or with sun glare.

But for relatively minimal cost, those sensors can easily be 10x better than human senses, whether it's visible light or infrared, microphones outside the car, or network effects with other vehicles or highway cameras.


> there is going to be a suit for $500M because "Google killed my son!!!!!!"

I'm pretty sure that Google, Apple, Uber, Ford, GM, Chrysler, and all of their supply chains could lobby for the tort reform they need, especially with much of our current "representation".


We got brains, a learning system better than anything we could ever hope to create using current AI/ML technologies.


The average Joe is the descendant of millions/billions of years of evolution. He is designed specifically to navigate this planet.

The fact that humans will be teaching computers to do the same kinds of navigation, with just a few decades of work, speaks to how impressive the above-average Joe is.


Humans actually aren't that great at driving - tens of thousands die on the road every year. Autonomous cars do great in ideal conditions (the suburbs of Phoenix AZ for example) but in conditions where the model lacks training data (e.g. bad weather) they have the same struggles as human drivers.


A) We can't put a brain-equivalent computer in the car, and we can't program one to be as good as a brain at driving.

B) We have higher standards for autonomous cars. The people saying it can never happen are saying it will never be perfect in all conditions, not that it will never be better than humans.



