I am not disputing your assessment, but please don't discount liability. Planes can pretty much fly themselves today - there are no significant technology issues with the idea of "taxi away, take off, fly to destination, land, taxi to gate". All of this happens in what is perhaps the most regulated traffic environment on the planet.
The issue is with creating the code that deals with "oh shit" scenarios. Whilst it is probably possible, and even feasible, to write code to cover every possible failure scenario, who is going to be left holding the can when this fails (all systems have a non-zero probability of failure)?
Who will be held responsible? The outsourced company that coded the avionics/flight control software? The airplane manufacturer? The airline company? The poor fucker who wrote the failing logic tree that was supposed to deal with that specific failure scenario, but had been forced to work overtime for the 47th day in a row when that particular code was cut?
It is a liability nightmare, and when you add up the cost of creating a software system that must never fail, the increased insurance premiums, the PR/marketing work to convince the unwashed masses that this is actually safer, and the whole rest of the circus required to make this a reality, you will find that pilot costs are not all that bad. Especially since pilots have significant downward pressure on real earnings these days anyway.
> The issue is with creating the code that deals with "oh shit" scenarios.
So they fly themselves except they don't?
That's kind of my point: what makes anyone think truly driverless cars are going to happen anytime soon when a human is required to deal with these "oh shit" scenarios? What's more, I think the "oh shit" scenarios for cars are FAR more complicated. With planes someone else deals with scheduling for take off and landing. While in flight, the plane simply needs to not fly into other objects and maintain speed, direction and altitude.
As for liability, I agree. It's a nightmare, particularly when the standard will probably be "did the software cause injury or death?" when it should be "what is the incidence of injury or death compared to a human driver?"
I mean, that'll be little comfort to the family of someone killed in an accident. We humans seem to have a weird tolerance for humans negligently killing other humans.
> We humans seem to have a weird tolerance for humans negligently killing other humans.
Really? If anything I'd have said it was the other way round. Humans get jailed for negligently killing other humans with vehicles, and they sometimes get jailed or banned from driving for negligently driving in a way that might have endangered another human. On the other hand, the prevailing opinion in this thread seems to be that whilst it's entirely appropriate to punish bad driving by humans, similarly egregious errors made by software should be tolerated provided their average accident rate is lower than the humans'.
You could argue that in the "oh shit" scenarios for a car, the proper action is to always stop. Most human drivers will instinctively stomp on the brakes if they see anything they're not expecting, and this is pretty much what today's autonomous software does.
Recovering from the "oh shit" scenario is the difficult part, but human pilots often can't recover either; after all, it makes little sense to try to fix an engine that's on fire while flying, so they opt to land instead.
It's not. But it's a reasonable first reaction, which is why we end up doing it. (That or swerving.)
But as soon as we realize the thing that made us twitch is a squirrel or a plastic bag, our forebrain takes the foot off the brake or straightens the wheel.
So why is it unreasonable to think that a computer can do this? That is, take a reasonable first reaction to a situation (namely, stop), then follow up with the proper action once more data is available.
You don't stop though. You start to put your foot on the brake and then you take it off. Presumably a computer, which doesn't really have different classes of reaction times in the same way, should never brake in the first place.
I don't think that presumption is true; it's a high bar, and clearing it doesn't really provide much benefit. If a computer decides to tap the brakes because it thinks an "oh shit" scenario is coming up, why is that suddenly a huge transgression?
The point is that computers don't really have the same type of reflexes that humans have. The theory is that everything is pretty fast. (OK, they can run a background analysis in the cloud but that's presumably too slow to be useful.) Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.
Computers could possibly be designed with these sorts of decision making patterns if there were a need to but I'm not aware of that being done today.
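Roughly what I have in mind, as a toy sketch (all names here are invented; this isn't any real autonomy stack): pre-brake the moment something unclassified shows up, then either release or commit once the classifier has settled.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical set of object classes we'd be happy to drive over/past.
    HARMLESS = {"plastic_bag", "squirrel", "leaf"}

    @dataclass
    class Detection:
        label: Optional[str]   # None until the classifier has settled
        confidence: float

    class Vehicle:
        def __init__(self):
            self.brake_level = 0.0

        def brake(self, level):
            self.brake_level = level

    def control_step(det, car):
        if det.label is None or det.confidence < 0.9:
            car.brake(0.3)   # "reflex": unknown object, pre-brake gently
        elif det.label in HARMLESS:
            car.brake(0.0)   # "forebrain": harmless, release the brakes
        else:
            car.brake(1.0)   # identified hazard, commit to the stop

    # Over a few frames the same object goes from "unknown" to "plastic bag",
    # so the car taps the brakes and then lets off again.
    car = Vehicle()
    for det in (Detection(None, 0.2),
                Detection("plastic_bag", 0.6),
                Detection("plastic_bag", 0.95)):
        control_step(det, car)
        print(det.label, det.confidence, car.brake_level)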
> Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.
Well, I disagree on this point, as that's essentially how regression works, and so, indirectly, how neural networks work. The data the car gets isn't available all at once; all the information it takes in over half a second is useful data that aids in classification and decision making.
Just as a quick example, take https://tenso.rs/demos/rock-paper-scissors/ and think of the classifier as "making a decision": it switches its decision based on the most recent information.
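To put the same idea in code, here's a toy illustration (nothing to do with that demo's actual implementation; everything here is made up): fold per-frame class scores into a running average, and let the "decision" be whatever the accumulated evidence currently favours, so it can flip as sharper frames arrive.

    import numpy as np

    CLASSES = ["rock", "paper", "scissors"]

    def decide_over_time(frame_scores, alpha=0.5):
        """Exponential moving average of per-frame scores; the decision
        after each frame is simply the argmax of the running total."""
        running = np.zeros(len(CLASSES))
        decisions = []
        for scores in frame_scores:
            running = alpha * np.asarray(scores) + (1 - alpha) * running
            decisions.append(CLASSES[int(np.argmax(running))])
        return decisions

    # Early, blurry frames weakly favour "rock"; later frames clearly favour
    # "paper", so the decision flips once the newer evidence wins out.
    frames = [[0.40, 0.30, 0.30],
              [0.30, 0.45, 0.25],
              [0.10, 0.80, 0.10]]
    print(decide_over_time(frames))   # ['rock', 'paper', 'paper']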
The point is that all of this presumably happens "instantaneously" from a human perspective. Hence the claims that autonomous vehicles have no lag in responding to events.
Right, so why does the negative opinion of self-driving cars seem to be that a computer isn't allowed to slow down to give itself more time to react, something it would just treat as business as usual?
Well, for one, you're passing your incompetence off to other drivers to deal with, which will inevitably lead to accidents behind the car that slows down for no actual reason. For another, driving is a lot more complex than flying when it comes to automation. You might expect the opposite, but pilots routinely describe their careers as 30 years of boredom punctuated by 30 seconds of sheer panic.