I think it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information. That is, it's not acceptable for automation to fail dangerous; it must fail safe, even if all it can do is give up (disconnect, sound a warning tone, hand control over to the human driver).
I think it's reasonable, in the narrow case where primary control is asserted and competency is claimed to be as good as or better than a human's, to hold automation accountable to the same standard as a human. And in this case, if a human driver had acted the way autopilot did, based on the same available information, we would say the driver committed suicide or was somehow incompetent.
I see this as possibly a case of automation committing involuntary manslaughter (unintentional homicide from criminally negligent or reckless conduct).
> ... it's not acceptable for automation to fail dangerous, it must fail safe...
The scary thing is, by definition of what is being automated, there can be no fail-safe. The only "safe" is to stop the car. If the car had detected that it was going to crash, then it wouldn't have had the failure in the first place. By the time the failure might be detected, it is already too late for "safe".
You can't expect the human driver to be ready to take over from autopilot in any meaningful way. If the driver is alert enough to the conditions and physically ready at the appropriate controls to do that, then they are already driving. Still, for legal reasons rather than anything based on science, companies pretend that humans can "take over" when alerted. Maybe a lawsuit will change that idiocy.
First, a human who is paying attention makes better decisions than a human who is surprised by being given control during a crisis. Therefore, if automation sees a crisis, the standard it has to meet is whether it is worse than a surprised human, not how it compares to an alert driver.
Second, the analysis that concludes we're now in a specific situation where a human would do better is often beyond the powers of automation. For example, it can recognize that we are now in a class of cases where, on average, automation does better. But whether it will do better in any particular case may be beyond its reasoning power.
This is called the Halting Problem. We don't waste time trying to make machines that do that because it's theoretically undecidable.
Furthermore, you're then throwing another neural network at the problem (one that has to be provably trained across a dataset we have no proof is the full dataset required to fully enumerate the problem space) just to detect the problem areas, i.e. the edge cases, in the data going into the driving NN. Repeat ad absurdum, or until most of your self-driving car's energy is consumed trying to solve the silicon equivalent of Plato's Cave.
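For concreteness, here is a minimal sketch of what that second "edge-case detecting" NN stage amounts to. All the names, thresholds, and the centroid trick are invented for illustration, and, as argued above, the monitor inherits exactly the same training-coverage problem as the network it watches:

```python
# Illustrative only: an "edge-case monitor" in front of a driving NN that flags
# a frame when the perception model looks unsure, or when the input looks
# unlike anything seen in training. The thresholds and the centroid distance
# check are made up for this sketch.
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def looks_like_edge_case(perception_logits, frame_embedding, training_centroids,
                         max_entropy=1.5, max_novelty=4.0):
    """Return True if this frame should trigger a hand-back or slow-down."""
    p = softmax(perception_logits)
    entropy = -np.sum(p * np.log(p + 1e-12))   # high entropy = model is unsure what it sees
    novelty = np.min(np.linalg.norm(training_centroids - frame_embedding, axis=1))  # far from training data
    return entropy > max_entropy or novelty > max_novelty
```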
Or... you just teach people to drive the blessed car, which is the absolute best a theoretical NN implementation would converge to anyway.
The only benefit of a theoretically perfect NN that simulates the average driver would be that the car doesn't run on a processing system that is adversely affected by the recreational and voluntary (on the part of the processing hardware) application of ethanol to the processing matrix.
Well... thermally speaking, it could end up causing serious problems if someone were dumb enough to build that capability into the car for some daft reason, but then we're talking existential failure of the computing hardware rather than the gradual degradation of functionality you'd see with a brain. Still, I think everyone gets the point, and I can stop torturing this poor analogy.
What you say has just enough technical detail to seem entirely plausible to someone who doesn't understand technology.
It is also entirely wrong.
The problem that I am describing is not the Halting Problem. A theoretically perfect NN is also able to do a lot better than any person could. It can have binocular vision in multiple directions, and mix that with radar awareness of the local environment and electronic communication with other cars to be aware of obstacles that are out of sight. This technology can drive more quickly and safely than humans ever could.
This sort of thing is admittedly some time off. But it is doable, and it is almost certain to happen within our lifetimes.
It's not. You're asking for a neural network capable of making generic inferences about the output of another neural network, in real time.
That's the Halting Problem: will the data I'm feeding in cause the program I'm running to encounter an edge case (return / not return)?
You can't generalize a neural network beyond its training set. Cheekiness of the previous post aside, I stand by my statement.
Comments that hail the almighty neural network as anything more than an interesting exercise in feature extraction / input-output mapping / information synthesis woefully underestimate the fundamental limitations of the technology.
With current technology, short of everyone participating in your network of street cars (hint: many won't) and being a good agent (they won't be), you'll be at the mercy of the same forces that make "driving" such an interesting task today, just in different forms and with more processors involved. And when the NN rewrites happen to need beefier hardware, everybody's vehicle gets recalled.
Throwing in second- and third-order social effects (i.e. the implementation of multi-spectrum panoptic surveillance networks for exploitation, the possibility of remote exploitation of the driving software, the decay of the actual skills required to drive safely, and the sudden stranglehold position the self-driving vehicle manufacturer gets once its customer base is large enough) leads me to the conclusion that there are a lot more problems to be solved before self-driving anything becomes a no-brainer turnkey solution.
Consider that now, with thermal cameras being ubiquitous, there is debate over whether your thermal signature is "public information" that can be collected and analyzed by law enforcement sans warrant. Next we'll get LIDAR on cars, and a software application made to tap into the LIDAR feed which, with a bit of setup, would be capable of reading vibrations off glass. Does everything you say in your home become public information too, just because the cars we drive become mobile labs, equipment-wise?
Does that sound cool? Not to me. People need to think beyond first-order outcomes, as programmers, system developers, and users alike.
A neural network capable of making generic inferences about the output of another neural network, in real time, just needs to be able to run a simulation of that other network. While we do not currently have the resources to do it with a human brain, it is in principle quite doable.
The critical difference between this and the Halting Problem is that solving the Halting Problem by running a simulation would require simulating in finite time what another system does in infinite time. This, by contrast, only requires simulating in finite time what the other system also does in finite time. That requires a better system, but not an impossibly better one.
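To make the finite-time point concrete, here is a sketch of a related, cheaper check a supervisory system could run: rather than simulating the planning network itself, forward-simulate what its proposed controls would do under a crude vehicle model and veto plans that pass too close to a known obstacle. The model and numbers below are illustrative only, not anyone's actual implementation:

```python
# Illustrative only: a finite-time check on a planner's output. Roll the
# proposed controls forward through a crude kinematic model and veto the plan
# if the simulated path comes too close to a known obstacle.
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float        # metres
    y: float        # metres
    heading: float  # radians
    speed: float    # m/s

def rollout(start, steering_rates, accels, dt=0.1):
    """Forward-simulate one state per control step."""
    x, y, h, v = start.x, start.y, start.heading, start.speed
    path = []
    for steer, acc in zip(steering_rates, accels):
        v = max(0.0, v + acc * dt)
        h += steer * dt
        x += v * math.cos(h) * dt
        y += v * math.sin(h) * dt
        path.append(State(x, y, h, v))
    return path

def plan_is_safe(start, steering_rates, accels, obstacles, margin=1.5):
    """Veto the plan if the simulated path passes within `margin` metres of an obstacle."""
    for s in rollout(start, steering_rates, accels):
        if any((s.x - ox) ** 2 + (s.y - oy) ** 2 < margin ** 2 for ox, oy in obstacles):
            return False
    return True
```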
Moving on, you are overestimating the requirements of the system that I describe. Today, with humans, a system like Waze can provide warnings about the road ahead, which is a useful assist to human drivers. And that is with only a very small fraction of drivers using the program, and even fewer actively registering hazards like "object on road". Yet it is useful.
Automated driving systems can participate in a similar system, only they will be more likely to provide information, and their information can be more detailed, such as which lane the foreign object is in. It doesn't take a lot of data about what is out of view ahead to improve driving by a lot. But humans are built to pay attention to only one thing at those speeds; automated systems can integrate information from multiple sources, which means they can be better.
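As a sketch of what that sharing could look like (the message format and filtering below are hypothetical, not any existing protocol), each car's perception stack could publish small, lane-level hazard reports and consume the nearby, recent ones:

```python
# Hypothetical sketch of a "Waze-like" hazard feed for automated cars; the
# message format and filtering rules are invented here for illustration.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    lat: float
    lon: float
    lane: int           # which lane the hazard occupies
    kind: str           # e.g. "object_on_road", "stopped_vehicle"
    confidence: float   # 0..1, as estimated by the reporting car's perception stack
    reported_at: float  # unix timestamp

def publish(report: HazardReport) -> str:
    """Serialize for broadcast; a real system would also sign and rate-limit this."""
    return json.dumps(asdict(report))

def is_relevant(report: HazardReport, my_lat: float, my_lon: float,
                radius_deg: float = 0.01, max_age_s: float = 300.0) -> bool:
    """Crude filter: keep reports that are nearby and recent enough to act on."""
    fresh = (time.time() - report.reported_at) < max_age_s
    nearby = abs(report.lat - my_lat) < radius_deg and abs(report.lon - my_lon) < radius_deg
    return fresh and nearby
```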
Yes, what they are doing is simply feature extraction / input-output mapping / information synthesis, but in principle nothing more is needed to drive better than is possible for humans.
At the moment, humans are better. But it is not impossible that computers can become better. It is, in fact, inevitable.
Well, this brings to light the concept from the movie I, Robot. Is it a crime when an AI kills someone, unintentionally or not, or is it an industrial accident? If it's supposed to replace a human task due to a "level of intelligence", is it still taxed as equipment, or as an employee? Is the company (due to equipment failure) or the AI (choosing to take an action on its own) at fault?
To be fair, these questions need to be hard lined pretty soon.
> these questions need to be hard lined pretty soon
I’d argue that they’re a long way from being an issue.
Firstly, current AIs are far less ‘intelligent’ than almost any animal. Even a mosquito has more brain power, and we don’t think twice about killing mosquitoes.
Secondly, even if an AI had similar intelligence to a human, there is no reason to believe it would be a moral creature, capable of making moral judgments, and being judged as such. Our morality evolved over thousands, probably millions, of years (or if you prefer, it was granted by some divine power). Either way, intelligence and morality aren’t synonymous.
> it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information
I don't think anyone disagrees on that. The question is: is the software worse than a human driver? Do we have enough data for a statistically significant judgement on that? Is it even autonomous enough to say anything either way? If the driver is required to be paying attention anyway, can the software be blamed for anything in the first place? Those are the questions; I don't think there is much point in saying "software must be good!"
>> it's unacceptable for automation to produce a worse result than a human in the same situation, with the same information
> I don't think anyone disagrees on that.
I disagree on that. If there's an autonomous vehicle that is better than a human in most situations and worse in a few, such that the overall accident/death rate is lower, and there is no reasonable way to identify the rare dangerous situations in time to disable the autopilot, I would want to drive that car and would advise others to do so.
In fact, if there were an autonomous vehicle that was almost exactly as safe as a human but slightly more dangerous (say, a 10% higher death/accident rate), I would frequently use it because the large benefits outweigh the minor statistical costs. (Indeed, I use a car at all because of its benefits over walking, busing, or staying at home, despite the higher rate of death.) If other people understood the risks, I would also suggest that they do likewise.
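To put rough numbers on that trade-off (the baseline rate is an approximate US average, and the 12,000 miles per year and the 10% penalty are just the hypotheticals above):

```python
# Back-of-the-envelope numbers for the hypothetical "10% worse" autopilot.
# ~1.2 deaths per 100 million vehicle-miles is an approximate US average;
# 12,000 miles/year is a rough typical annual mileage.
baseline_per_mile = 1.2 / 100_000_000
autopilot_per_mile = baseline_per_mile * 1.10   # the hypothetical 10% penalty
miles_per_year = 12_000

extra_annual_risk = (autopilot_per_mile - baseline_per_mile) * miles_per_year
print(f"added annual fatality risk: {extra_annual_risk:.1e}")  # ~1.4e-05, about 1 in 70,000
```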
We don't expect all kinds of drivers and vehicles to fit the same safety bell curve. You've made an assertion, but what legal framework are you using to treat this particular human-machine interaction differently without introducing a whole new class of liability for humans and traditional manufacturers?
> but what legal framework are you using to treat this particular human-machine interaction differently without introducing a whole new class of liability for humans and traditional manufacturers?
I think it's reasonable, in the narrow case where primary control is asserted and competency is claimed to be as good as or better than a human's, to hold automation accountable to the same standard as a human. And in this case, if a human driver had acted the way autopilot did, based on the same available information, we would say the driver committed suicide or was somehow incompetent.
I see this as possibly a case of automation committing involuntary manslaughter (unintentional homicide from criminally negligent or reckless conduct).