> The key to self-driving cars is to realize that they don't have to be perfect - they just have to be better than us.
Again, that's skirting the issue. Do you have any idea how close self-driving cars are to being "better than us"? As someone who's done computer vision research: not close at all.
> I don't think the computers will ever match our judgement
That is exactly the problem.
> it's trivially easy for them to beat us on attention span and reaction time
Attention span and reaction time are not the hard parts of building an autonomous vehicle.
This kind of comment beautifully illustrates the problem with casual discussions about AI technology. Humans and computers have very different operating characteristics, yet these discussions almost always focus on the wrong things: they look at human weaknesses and emphasize where computers are obviously, trivially superior. What about the converse: the gap between where computers are weak and humans are vastly superior? More importantly, what is the actual state of that gap? That question is often completely ignored, or dismissed outright. Which is disappointing, especially among a technically literate audience such as HN.
I suspect that the current google car is already safer than the overall average driver.
Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes, etc. So, sure, compared to humans at peak performance, self-driving cars have a long way to go. However, accidents usually happen under worst-case conditions, and computers are very good at paying attention to boring things for long periods of time.
PS: If driverless cars on average killed 1 person each year per 20,000 cars then they would be significantly safer than human drivers.
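The arithmetic behind that PS can be sanity-checked. As a rough sketch (the US figures of ~33,000 road deaths per year and ~250 million registered vehicles are my own assumptions, not numbers from this thread):

```python
# Back-of-envelope check of the "1 death per 20,000 cars per year" claim,
# using rough, assumed US figures (not taken from this thread):
# ~33,000 road deaths per year and ~250 million registered vehicles.
human_deaths_per_year = 33_000
registered_vehicles = 250_000_000

# Human-driven fleet: fatalities per vehicle per year, and its inverse.
human_rate = human_deaths_per_year / registered_vehicles
vehicles_per_death = 1 / human_rate

# Hypothetical driverless fleet killing 1 person per 20,000 cars per year.
driverless_rate = 1 / 20_000

print(f"Human fleet: about 1 death per {vehicles_per_death:,.0f} vehicles per year")
print(f"A driverless fleet at 1 per 20,000 would be ~{human_rate / driverless_rate:.1f}x safer")
```

Under those assumed inputs, the human fleet comes out to roughly one death per ~7,600 vehicles per year, so the 1-per-20,000 figure would indeed be a substantial improvement.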
> Don't forget some people speed, flee from the cops, fall asleep at the wheel, get drunk, text, look at maps, have strokes etc.
Again you are falling into the same pit: a nonsensical comparison of human and computer operational/failure modes. Of course computers can't have strokes. And yes, they are good at "paying attention to boring things". That is trivially true. And that's not where the discussion should be focused.
I do hope self-driving cars will be generally available sooner rather than later. What's not to like about them? But what I'm really curious about is how that availability will be qualified. Weather, road, visibility conditions? Heavy construction? Detours? Will this work in rural areas or countries that don't have consistent markings (or even paved roads!)? Will a driver still have to be at the wheel, and to what extent will the driver have to be involved?
What is really annoying are breathless pronouncements about a technology without critically thinking about its actual state and implementation. We might as well be talking about Star Trek transporters.
A car that can get a sleepy or drunk person home 80% of the time would be a monumental advance and would likely save thousands of lives a year.
Basically, an MVP that flat-out refuses to operate on non-designated routes, in bad weather, or even at highway speeds could still be very useful.
PS: Classic stop-and-go traffic is another area where speeds are low and conditions are generally good. But because people can't pay attention to boring things, you regularly see traffic accidents that create massive gridlock and cost people days per year sitting in traffic.
> I suspect that the current google car is already safer than the overall average driver.
Based on what? Even Google admits that the car is essentially blind (~30 feet visibility) in light rain. They've done little to no road testing in poor weather conditions. The vast majority of their "total miles driven" are highway miles in good weather, with the tricky city-driving bits at either end taken over by humans.
Google often leaves the impression that, as a Google executive once wrote, the cars can “drive anywhere a car can legally drive.” However, that’s true only if intricate preparations have been made beforehand, with the car’s exact route, including driveways, extensively mapped. Data from multiple passes by a special sensor vehicle must later be pored over, meter by meter, by both computers and humans. It’s vastly more effort than what’s needed for Google Maps.
A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight. And that is today.
No, I'm not drinking the AI Kool-Aid here, thinking there will be some breakout solution to the fuzzy visibility problems these cars have. Correctly differentiating moving objects in context, knowing what is "safe" debris and what is not, etc., are all neural-net problems that will take years of development, and even then the solutions will just be heuristics.
But you don't really need all that. All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.
>All you need is something that, given a sunny day and a highway, beats the human at driving the truck, and suddenly you lose three million jobs when it hits the market.
I don't see how, given a sunny day and a highway, an autonomous vehicle can quantifiably beat a human driver to the degree of 'destroying the trucker industry overnight.'
Unless you're assuming that there are few human drivers capable of managing a trip down a highway under the best possible conditions, the results between the two would have to be more or less equal. Those human drivers can, meanwhile, still manage night, sleet, snow, fog, and arbitrary backroads and detours.
It's going to be a while before they can actually eliminate the human. We've had self-driving tankers and self-flying airplanes for a while now, and they still need human operators for various reasons. The operators just don't have to actually drive.
> A self-driving car that can only do highways and can't drive in bad weather still destroys the trucker industry overnight.
Destroying the trucker industry with a self-driving car which can't drive in bad weather?? I don't think so!
Truckers' clients usually care a lot about predictability, and they would NOT be happy to hear "sorry, it's raining and the robot car couldn't arrive."
You might be right that they're safer, but you're totally failing to contest the point. Yes, computers are much better than humans at lots of things, but they are worse at others. They can't reliably tell the difference between a cat and a bench, which may be important when the computer is going 80 mph with humans on board.
Why would a self driving car need to know the difference between a cat and a bench? All it really has to know is that it is an object of a certain size and not to hit it.
The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.
>The things that the car needs to know are largely standardized: the lines on the road, road signs, speed limits, etc.
These are things humans need to know as well, and yet autonomous car cheerleaders constantly argue that human drivers are death machines, despite the fact that most human drivers know perfectly well how to follow these norms, and even deviate from them, without incident, most of the time.
I suspect that in their enthusiasm to set the bar of human intelligence as low as possible in order to make the case for autonomous cars seem urgent and necessary, some vastly underestimate the actual complexity of the problem. An autonomous car that only knows to 'avoid the boxes' has worse AI than many modern video games.