Computers have vastly faster reaction times than humans. Computers have sensory inputs that humans lack (e.g., LIDAR). Computers don't get drowsy or agitated.
And "almost" is always a good idea when talking about a future that looks certain. It takes into account the unknown unknowns, and the known unknowns (cough hacking cough).
Fast reaction times, good sensors and unyielding focus are not enough to drive safely. An agent also needs situational awareness and an understanding of the entities in its environment and their relations.
Without the ability to understand its environment and react appropriately to it, all that fast reaction times will do for an AI agent is let it make the wrong decisions faster than a human would.
Just saying "computers" and waving our hands about won't magically solve the hard problems involved in full autonomy. Allegedly, the industry has some sort of plan to go from where we are now (sorta-kinda level-2 autonomy) to full, level-5 autonomy where "computers" will drive more safely than humans. It would be very kind of the industry to share that plan with the rest of us, because for the time being it sounds just like what I describe above: saying "computers" and hand-waving away everything else.
That's a sociopolitical question more than a technical one. I posit that:
1.) Road safety -- as far as the current operating concept of cars is concerned (e.g., high speeds in mixed environments) -- is not a problem that can be "solved". At best it can only ever be approximated, and the quality of that approximation will correspond to the number of fatalities. Algorithm improvements will yield diminishing returns: the operating domain is fundamentally unsafe, and will always produce numerous fatalities even when every vehicle is driven "perfectly".
2.) With regards to factors that contribute to driving safety, there are some things that computers are indisputably better at than humans (raw reaction time). There are other things that humans are still better at than computers (synthesising sensory data into a cohesive model of the world, and then reasoning about that world). Computers are continually improving, however. While we don't have all the theories worked out for how machines will eventually surpass human performance in these domains, we also don't have a strong reason to believe that they won't. The only question is when (and I don't have an answer to that question).
3.) So the question is not "when will autonomous driving be safe" (it won't be), but rather: "what is the minimum level of safety we will accept from autonomous driving?" I'm quite certain that the bar will be set much higher for autonomous driving than for human driving. This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is. Look at the disparities in sociopolitical responses to, say, plane crashes and Zika virus, versus car crashes and influenza. Autonomous vehicles will be treated more as the former than the latter, and therefore the scrutiny they receive will be vastly higher.
4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
5.) Personally, I think that the algorithms won't be able to pass this public-acceptability threshold on their own, because even the best-imaginable algorithm, if adopted on a global basis, would still kill hundreds of thousands of people every year. That's still probably too many. I expect that full automation will eventually become the norm, but only as enabled by new types of infrastructure / urban design that make it safer than the algorithms alone could be.
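To put a rough number on "hundreds of thousands": a back-of-the-envelope sketch, using the ~1.3 million annual road deaths figure cited later in this thread; the reduction factors are purely illustrative assumptions, not predictions.

```python
# Back-of-the-envelope: global road deaths if AVs replaced all human driving.
# Baseline is the widely cited ~1.3M/year figure; the reduction factors are
# illustrative assumptions only.
baseline_deaths_per_year = 1_300_000

for reduction in (0.50, 0.80, 0.90):
    remaining = baseline_deaths_per_year * (1 - reduction)
    print(f"{reduction:.0%} reduction -> ~{remaining:,.0f} deaths/year")
```

Even a 90% reduction leaves on the order of 130,000 deaths a year worldwide; at more modest reduction levels it really is "hundreds of thousands".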
> This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is.
This is a wonderfully concise way of describing a phenomenon that I have not been able to articulate well. Thank you.
OK, this is a very good answer- thanks for taking the time.
I'm too exhausted (health issues) to reply in as much detail as your comment deserves, but here's the best I can do.
>> 4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
Or at least it won't be morally justifiable for them to be a thing at all unless they're sufficiently safer than humans, whatever "sufficiently" is going to mean (which we can't really know; as you say, that has to do with public perception and the whims of a fickle press).
I initially took your assertion to mean that self-driving AI will inevitably get to a point where it can be "sufficiently" safer than humans. Your point (2.) above confirms this. I don't think you're wrong; there's no reason to doubt that computers will, one day, be as good as humans at the things that humans are good at.
On the other hand, I really don't see this happening any time soon: not in my lifetime, and most likely not in the next two or three human generations. It's certainly hard to see how we can go from the AI we have now to AI with human-level intelligence. Despite the successes of statistical machine learning and deep neural nets, their models are extremely specific and the tasks they can perform too restricted to resemble anything like general intelligence. Perhaps we could somehow combine multiple models into some kind of coherent agent with a broader range of aptitudes, but there is very little research in that direction. The hype is great, but the technology is still primitive.
But of course, that's still speculative: maybe something big will happen tomorrow and we'll all watch in awe as we enter a new era of AI research. Probably not, but who knows.
So the question is: where does this leave the efforts of the industry to, well, sell self-driving tech, right here and right now? When you said self-driving cars will almost certainly be safer than humans, you didn't put a date on that. Others in the industry are trying to sell their self-driving tech as safer than humans right now, or in "a few years", "by 2021" and so on. See Elon Musk's claims that Autopilot is already safer than human drivers.
So my concern is that assertions about the safety of self-driving cars by industry players are basically trying to create a climate of acceptance of the technology in the present or near future, before it is even as safe as humans, let alone safer (or "sufficiently" so). If the press and public opinion are irrational, their irrationality can just as well mean that self-driving technology is accepted when it's still far too dangerous. Rather than setting the bar too high and demanding an extreme standard of safety, things can go the other way and we can end up with a diminished standard instead.
Note I'm not saying that's what you were trying to do with your statement about almost-certainty etc. I'm just explaining where I'm coming from here.
Likewise, thanks for the good reply! Hope your health issues improve!
I share your skepticism that AIs capable of piloting fully driverless cars are coming in the next few years. In the longer term, I'm more optimistic. There are definitely some fundamental breakthroughs which are needed (with regards to causal reasoning etc.) before "full autonomy" can happen -- but a lot of money and creativity is being thrown at these problems, and although none of us will know how hard the Hard problem is until after it's been solved, my hunch is that it will yield within this generation.
But I think that framing this as an AI problem is not really correct in the first place.
Currently car accidents kill about 1.3 million people per year. Given current driving standards, a lot of these fatalities are "inevitable". For example: many real-world car-based trolley problems involve driving around a blind curve too fast to react to what's on the other side. You suddenly encounter an array of obstacles: which one do you choose to hit? Or do you (in some cases) minimise global harm by driving yourself off the road? Faced with these kind of choices, people say "oh, that's easy -- you can instruct autonomous cars to not drive around blind curves faster than they can react". But in that case, the autonomous car just goes from being the thing that does the hitting to the thing that gets hit (by a human). Either way, people gonna die -- not due to a specific fault in how individual vehicles are controlled, but due to collective flaws in the entire premise of automotive infrastructure.
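To make the blind-curve point concrete, here is a rough stopping-distance sketch using standard kinematics; the speeds, reaction times, and friction coefficient are illustrative assumptions.

```python
# Rough stopping distance = reaction distance + braking distance:
#   d = v * t_react + v^2 / (2 * mu * g)
# Reaction times and friction coefficient below are illustrative assumptions.
G = 9.81    # gravity, m/s^2
MU = 0.7    # tyre/road friction, dry asphalt (assumed)

def stopping_distance(speed_kmh, t_react):
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v * t_react + v ** 2 / (2 * MU * G)

for speed in (50, 80, 100):
    human = stopping_distance(speed, t_react=1.5)   # typical human reaction
    av = stopping_distance(speed, t_react=0.2)      # assumed machine reaction
    print(f"{speed} km/h: human ~{human:.0f} m, AV ~{av:.0f} m")
```

If the sight line around the curve is shorter than those distances, neither controller can stop in time; the only remedy is to slow down, which (per the above) just turns the AV into the thing that gets hit.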
So the problem is that no matter how good the AIs get, as long as they have to interact with humans in any way, they're still going to kill a fair number of people. I sympathise quite a lot with Musk's utilitarian point of view: if AIs are merely better humans, then it shouldn't matter that they still kill a lot of people; the fact that they kill meaningfully fewer people ought to be good enough to prefer them. If this is the basis for fostering a "climate of acceptance", as you say, then I don't think it would be a bad thing at all.
But I don't expect social or legal systems to adopt a pragmatic utilitarian ethos anytime soon!
One barrier is that, even apart from the sensational aspect of autonomous-vehicle accidents, it's possible to critique them endlessly. When a human driver encounters a real-world trolley problem, they generally freeze up, overcorrect, or do something else that doesn't involve much careful calculation. So shit happens, some poor SOB is liable for it, and there's no black box to audit.
In contrast, when an autonomous vehicle kills someone, there will be a cool, calculated, auditable trail of decision-making which led to that outcome. The impulse to second-guess the AV's reasoning -- by regulators, lawyers, politicians, and competitors -- will be irresistible. To the extent that this fosters actual safety improvements, it's certainly a good thing. But it can be really hard to make even honest critiques of these things, because any suggested change needs to be tested against a near-infinite number of scenarios -- and in any case, not all of the critiques will be honest. This will be a huge barrier to adoption.
Another barrier is that people's attitudes towards AVs can change how safe they are. Tesla has real data showing that Autopilot makes driving significantly safer. This data isn't wrong. The problem is that this was from a time when Autopilot was being used by people who were relatively uncomfortable with it. This meant that it was being used correctly -- as a second pair of eyes, augmenting those of the driver. That's fine: it's analogous to an aircraft Autopilot when used like that. But the more comfortable people become with Autopilot -- to the point where they start taking naps or climbing into the back seat -- the less safe it becomes. This is the bane of Level 2 and 3 automation: a feedback loop where increasing AV safety/reliability leads to decreasing human attentiveness, leading (perhaps) to a paradoxical overall decrease in safety and reliability.
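A toy model of that feedback loop, where the functional form and every number are made up purely to show the shape of the problem:

```python
# Toy model of the Level-2/3 feedback loop: a crash requires a situation the
# system misses AND a driver who isn't monitoring at that moment.
# All numbers are illustrative assumptions.
def relative_crash_rate(system_reliability, driver_attentiveness):
    return (1 - system_reliability) * (1 - driver_attentiveness)

wary = relative_crash_rate(0.90, 0.95)        # early adopters, hands on wheel
complacent = relative_crash_rate(0.99, 0.30)  # better system, trusting drivers

print(f"wary:       {wary:.4f}")        # 0.0050
print(f"complacent: {complacent:.4f}")  # 0.0070 -- worse, despite a better system
```

In this made-up coupling, a tenfold improvement in the system is more than cancelled out by the collapse in attentiveness that the improvement itself induces.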
Even Level 4 and 5 automation isn't immune from this kind of feedback loop. It's just externalised: drivers in Mountain View learned that they could drive more aggressively around the Google AVs, which would always give way to avoid a collision.
So my contention is that while the AIs may be "good enough" anytime between, say, now and 20 years from now -- the above sort of problems will be real barriers to adoption. These problems can be boiled down to a single word: humans. As long as AVs share a (high-speed) domain with humans, there will be a lot of fatalities, and the AVs will take the blame for this (since humans aren't black-boxed).
Nonetheless, I think we will see AVs become very prominent. Here's how:
1. Initially, small networks of low-speed (~12mph) Level-4 AVs operating in mixed environments, generally restricted to campus environments, pedestrianised town centres, etc. At that speed, it's possible to operate safely around humans even with reasonably stupid AIs. Think Easymile, 2getthere, and others.
2. These networks will become joined up by fully-segregated, higher-speed, AV-only rights-of-way, either on existing motorways or in new types of infrastructure (think the Boring Company).
3. As these AVs take a greater mode-share, cities will incrementally convert roads into either mixed low-speed or exclusive high-speed. Development patterns will adapt accordingly. It will be a slow process, but after (say) 40-50 years, the cities will be more or less fully autonomous (with most of the streets being low-speed and heavily shared with pedestrians and bicyclists).
Note that this scenario is largely insensitive to AI advances, because the real problem that needs to be solved is at the point of human interface.
Where is the almost-certainty coming from that the fatalities would be fewer compared to humans driving? And what does "almost" mean in this case?