The easy part is relatively easy, but it's hard to conceive how the hard part will be solved.

Just driving around New York City for a while makes me think that generalized autonomous driving, as a problem, is essentially "solving" strong AI. Consider the case of approaching a complex intersection during rush hour. There's a traffic cop in the intersection waving his hands around. You reach the intersection, the light is green, you want to proceed straight, but cars are blocking the way because they're backed up into the intersection on the cross street. The cop points directly at you, making eye contact, blows a whistle, and shouts at you, pointing and yelling "right right right". You're uncertain whether he means you should try to weave around the blocking cars, but he blows the whistle again and it becomes clear he is telling you that you cannot go straight, and must divert and make a right turn at the intersection. He gestures again, indicating that he wants you to turn into the nearest lane on the cross street, then points to the car behind you and indicates that it should also turn, but into the center lane. You nod, and he looks away to another car.

This happened. So a self-driving car would presumably have to understand and interpret shouted commands, realize that it is the one being shouted at by someone with the right authority, recognize gestures, somehow engage in the equivalent of acknowledged eye contact, be able to make an OK gesture, and have some sort of theory of mind about the traffic cop as well as the drivers of the other cars.

Not easy.



Even worse: I've been on several mountain roads with stretches of one-way traffic, where either an officer has to signal a switch in lane direction every few minutes (which might not be clear otherwise, especially around a bend), or cars have to occasionally reverse to let opposing traffic through. Don't think I'll ever be letting an AI do that!


I was in a small bus on a switchback mountain road in Peru. The bus stopped at a low-lying turn, and we could see that a muddy stream was racing across the road at its low point, pouring away into the valley off the downhill edge it was eroding. The pavement under the stream was gone. The driver got out and found a couple of what looked like 2-by-8 pieces of lumber, and placed them across the stream, adding some rocks and rubble underneath like track ballast. He then slowly tiptoed the bus across these creaking muddy boards, leaning out the window to stay on track. We were not swept over the edge, as far as I recall.

Needless to say, not a situation for "auto-steer".

I'm sure this has been thought through, and I imagine the solution involves zones requiring different levels of autonomy and capability, some means of zone discovery or classification for unmarked areas, and self-driving cars refusing to continue automatically when overmatched.


I think you can argue that those are outliers, very rural sorts of places. It's harder to write off major US cities. (To be clear, interstates are still a compelling use case, but they're not universal self-driving.)


That kind of thing happens occasionally around Boston when snow piles turn two-lane streets into one lane, and it's common anywhere construction or an accident partly blocks a road.


I often think of Manhattan with respect to the autonomous-taxi believers. (Which, while extreme for the US, is precisely the sort of place they'd presumably have to handle.) I sorta go: "Have you ever been there and looked around?" Design an AI that can get cross-town in Manhattan at rush hour and I'll start erecting a shrine to our robotic overlords.



