
What befuddles me is that in all these discussions about self-driving cars seemingly no one refers to the massive body of knowledge in this area that comes from the aviation world.

I've posted variants of this same comment several times and I'm starting to feel like a broken record.

Look at studies of efforts to make planes safer by removing the human element. While features like autopilot have made things safer, there comes a point where more automation can reduce safety: pilots are no longer alert, and/or don't trust the instruments, and/or can't fully override the automation manually.

Call it the uncanny valley of automation safety.

Bridging the last few percent to true automation (i.e. where vehicles aren't designed to have drivers or pilots at all) is going to be _incredibly_ difficult, to the point where I'm not convinced it won't require something resembling a general AI.

All of this is why I think driverless cars are going to take much longer than many expect.




There's a big difference: commercial pilots are highly trained, even-tempered, and take their job seriously. Most drivers are lazy, distracted, and apt to do something stupid in an emergency. It's very hard to make something safer than a commercial pilot. It's much easier to make something safer than a typical driver.


Yeah, that validates his point. If arguably the most highly trained vehicle operators around lose situational awareness and fail to recover after automation fails, what do you think will happen with untrained drivers?


There's actually a confirmation bias being missed here: aircraft systems fail and commercial pilots take over and land without mishap all the time. Components inside airplanes, at airports, and even inside air traffic control can and do fail.

One example is the attack on the Chicago air traffic control system. Dozens, perhaps even over a hundred, aircraft were suddenly flying around with no oversight. Every single pilot took local control, negotiated with the other pilots, and collectively they were able to either land or divert without incident.


> most drivers are lazy, distracted and accident prone.

If this had any element of truth, the roads would be empty.

This kind of sweeping and bigoted dismissal of other people is a bit too self-serving in the context of self-driving cars, and it is made too casually and too often on HN now to allow balanced discussion.


I agree. I think it will be very hard to make something that drives better than a really good human driver who is focused on driving. Getting better than the 50+% of people I see with their phones clapped to their faces while they drive (despite that being illegal in Austin) is a much easier (though still difficult) task.

The other thing is that something going wrong with a plane in the air is a pretty big deal. You can't just pull over and wait. If you assume a "first, do no harm" principle of robotics for driverless cars, the failure mode should in most cases be "pull over and wait." This can still cause problems in many situations, but people do it now.


> Most drivers are lazy, distracted, and apt to do something stupid in an emergency.

Er, citation needed. I think the vast majority of drivers are good drivers — otherwise, vehicular transport would be a disaster.


There are around 1.25M vehicle fatalities every year worldwide [0]. It is a disaster. Driving has killed more people than the world wars.

"Good drivers" -- we have no benchmark to measure against. Maybe it's amazing that 10x more people aren't killed, or maybe it's dismal that anyone is killed at all. When we have autonomous vehicles, we'll have a reference to compare against. I predict that "bad" will be the only word to describe the current situation.

[0] https://en.wikipedia.org/wiki/List_of_countries_by_traffic-r...


Yeah, that's true at least where I live. Most times I drive I'll see someone do something stupid or inattentive (or even do something stupid myself!), but I see thousands of people driving normally, i.e. driving well.

Countless times I've seen pedestrians or cyclists throw themselves in front of traffic without warning, and every time the drivers have stopped without incident. A collision is by far the more exceptional case.


> Call it the uncanny valley of automation safety.

I am not disputing your assessment, but please don't discount liability. Planes can pretty much fly themselves today - there are no significant technology issues with the idea of "taxi away, take off, fly to destination, land, taxi to gate". All of this happens in what is perhaps the most regulated traffic environment on the planet.

The issue is with creating the code that deals with "oh shit" scenarios. Whilst it is probably possible, and even feasible, to write code to cover every possible failure scenario, who is going to be left holding the can when this fails (all systems have a non-zero probability of failure)?

Who will be held responsible? The outsourced company that coded the avionics/flight control software? The airplane manufacturer? The airline company? The poor fucker who wrote the failing logic tree that was supposed to deal with that specific failure scenario, but was forced to work overtime for the 47th day in a row when that particular code was cut?

It is a liability nightmare, and when you add up the cost of creating a software system that must never fail, the increased insurance premiums, the PR/marketing work to convince the unwashed masses that this is actually safer, and the whole rest of the circus required to make this a reality, you will find that pilot costs are not all that bad. Especially since pilots have significant downward pressure on real earnings these days anyway.


> Planes can pretty much fly themselves today

but

> The issue is with creating the code that deals with "oh shit" scenarios.

So they fly themselves except they don't?

That's kind of my point: what makes anyone think truly driverless cars are going to happen anytime soon when a human is required to deal with these "oh shit" scenarios? What's more, I think the "oh shit" scenarios for cars are FAR more complicated. With planes, someone else deals with scheduling for takeoff and landing. While in flight, the plane simply needs to not fly into other objects and maintain speed, direction and altitude.

As for liability, I agree. It's a nightmare, particularly when the standard will probably be "did the software cause injury or death?" when the standard should be "what is the incidence of injury or death compared to a human driver?"

I mean, that'll be little comfort to the family of someone killed in an accident. We humans seem to have a weird tolerance for humans negligently killing other humans.


> We humans seem to have a weird tolerance for humans negligently killing other humans.

Really? If anything I'd have said it was the other way around. Humans get jailed for negligently killing other humans with vehicles, and they sometimes get jailed or banned from driving for negligently driving in a way that might have endangered another human. On the other hand, the prevailing opinion in this thread seems to be that whilst it's entirely appropriate to punish bad driving by humans, similarly egregious errors made by software should be tolerated provided their average accident rate is lower than the humans'.


You could argue that in the "oh shit" scenarios for a car, the proper action is to always stop. Most human drivers will instinctively stomp on the brakes if they see anything they're not expecting, and this is pretty much what today's autonomous software does.

Recovering from the "oh shit" scenario is the difficult part, but human pilots often can't recover either; after all, it makes little sense to try to fix an engine fire while flying, so they opt to land instead.


> the proper action is to always stop

It's not. But it's a reasonable first reaction which is why we end up doing it. (That or swerving.)

But as soon as we realize the thing that made us twitch is a squirrel or a plastic bag, our forebrain takes the foot off the brake or straightens the wheel.


So why is it unreasonable to think that a computer can do this? That is, take a reasonable first reaction to a situation, namely stopping, then follow up with the proper action once more data is available.
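
As a rough sketch of what I mean (hypothetical labels and thresholds, not any vendor's actual planner), something like this two-stage policy: brake immediately on anything low-confidence, then release or commit once a few more frames of data agree.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "plastic_bag", "pedestrian", "unknown"
        confidence: float   # classifier confidence in [0, 1]

    # Hypothetical whitelist of objects that don't justify staying stopped.
    HARMLESS = {"plastic_bag", "leaf"}

    def choose_action(history):
        """Reflex first, second thought later: brake on anything unexpected,
        then relax or commit once a few consistent frames have arrived."""
        latest = history[-1]
        # Stage 1: the "reflex" - unknown or low-confidence objects trigger braking.
        if latest.label == "unknown" or latest.confidence < 0.6:
            return "begin_braking"
        # Stage 2: the "second thought" - with a few confident frames agreeing,
        # either release the brake or commit to a full stop.
        recent = history[-3:]
        if all(d.label in HARMLESS and d.confidence >= 0.6 for d in recent):
            return "release_brake"
        return "continue_braking_to_stop"

    # Frames arriving over half a second: ambiguous at first, clearly a bag later.
    frames = [Detection("unknown", 0.3), Detection("plastic_bag", 0.55),
              Detection("plastic_bag", 0.7), Detection("plastic_bag", 0.8),
              Detection("plastic_bag", 0.9)]
    for i in range(1, len(frames) + 1):
        print(choose_action(frames[:i]))
    # -> begin_braking, begin_braking, continue_braking_to_stop,
    #    continue_braking_to_stop, release_brake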


You don't stop though. You start to put your foot on the brake and then you take it off. Presumably a computer, which doesn't really have different classes of reaction times in the same way, should never brake in the first place.


I don't think that presumption is true; it's a high bar that doesn't really provide much benefit to achieve. If a computer decides to tap the brakes because it thinks an "oh shit" scenario is coming up, why is that suddenly a huge transgression?


The point is that computers don't really have the same type of reflexes that humans have. The theory is that everything is pretty fast. (OK, they can run a background analysis in the cloud, but that's presumably too slow to be useful.) Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.

Computers could possibly be designed with these sorts of decision-making patterns if there were a need to, but I'm not aware of that being done today.


> Computers are generally not going to respond with "reflexes" and then change their minds once they've had time to think about it for half a second.

Well, I disagree on this point, as that's essentially how regression works, and so, indirectly, how neural networks work. The data the car gets isn't all available immediately; all the information it takes in over that half second is useful data that aids classification and decision making.

Just as a quick example, take https://tenso.rs/demos/rock-paper-scissors/ and think of the classifier as "making a decision": it switches its decision based on the most recent information.
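
(To be clear, that's a browser demo; the snippet below is just my own toy sketch of the same idea, with made-up per-frame class probabilities rather than anything from the demo's actual implementation.)

    import numpy as np

    CLASSES = ["rock", "paper", "scissors"]

    def smoothed_decisions(frame_probs, alpha=0.5):
        """Blend each new frame's class probabilities into a running average;
        the 'decision' at each step is the current argmax, and it can flip
        as more recent information arrives."""
        running = np.zeros(len(CLASSES))
        decisions = []
        for p in frame_probs:
            running = alpha * np.asarray(p) + (1 - alpha) * running
            decisions.append(CLASSES[int(np.argmax(running))])
        return decisions

    # Early frames look like "rock"; later frames clearly show "paper",
    # so the decision switches once enough new evidence has accumulated.
    frames = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1], [0.3, 0.6, 0.1], [0.2, 0.7, 0.1]]
    print(smoothed_decisions(frames))  # ['rock', 'rock', 'paper', 'paper']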


The point is that this all presumably happens "instantaneously" from a human perspective. Hence the claims that autonomous vehicles have no lag in responding to events.


> after all it makes little sense to try and fix an engine on fire while flying

No, but you can ditch an engine that is on fire.

> instead opting to land.

That is supposed to be the outcome of any successful flight.

Autoland is possible with an engine out, even at low altitude (on final approach):

http://www.askcaptainlim.com/flying-on-the-boeing-777-flying...

So as far as the software goes that's business as usual and not even an 'oh shit' scenario.


Right, so why does the negative opinion of self-driving cars seem to hold that a computer isn't allowed to slow down to give itself more time to react, something it would just treat as business as usual?


Well, for one, you're passing your incompetence off to other drivers to deal with, which will inevitably lead to accidents behind a car that slows down without any actual reason. For another, driving is a lot more complex than flying when it comes to automation. You might expect the opposite, but pilots routinely describe their careers as 30 years of boredom punctuated by 30 seconds of sheer panic.


And why is that any worse than the current situation with humans?


This is a very important topic that I am surprised does not receive more coverage. Thank you for bringing it up.

It will be particularly interesting if accident blame is placed on the 'dumb cars', and insurance companies then do a 180 and charge MORE for 'dumb cars' operated by humans. Once they put this information in their pricing models, I assume it's 'stuck' in there until the next major NTSB report is published.

As complacency sets in over the following months and years, the accident rates will likely swing from "dumb" human-operated machines back to "Level 4 Highly Intelligent Teslas/UBERS/Argo AI", and that market might get a real shock when the pendulum comes back their way!


I agree, the devil is going to be in the almost infinite edge cases: the visual negotiation that goes on between drivers at box junctions, dealing with bad or aggressive drivers who ignore right of way or tailgate, ethical decisions in all the "Kobayashi Maru" no-win situations (extreme weather, black ice, highway pile-ups, mechanical failures).

An advanced AI may well be able to identify whether an object coming towards the windscreen is a bird, bat, leaf or a rock, but what will its intuition be about how much of a problem it is likely to be? Should it swerve to avoid a raccoon and risk whiplash for passengers? Should it aim to avoid large insects if the owner is vegan?

Also, people are very used to mild lawbreaking. We expect a cab driver to double park and let us out of the car if there are no available parking spaces, but would an AI be authorised to bend the law, or would it have to find the nearest parking space, which may be 5 blocks away and could be taken by the time it gets there?

I suspect we will get autonomous drone-like flying cars long before we get full autonomy in city centres or rural areas, because flying through a mostly obstacle-free space with an ability to avoid collisions on 3 axes seems much more reachable?


> Should it aim to avoid large insects if the owner is vegan?

Only if the other cars on the road around it are also owned by vegans.


> or don't trust the instruments

Thankfully this isn't as big of an issue with driving on the ground. Airplanes don't have sensors that give them the same precision as a car's wheel rotation or proximity to nearby objects.


On the contrary, this is a much larger issue on the ground. A commercial airplane spends most of its flight time in clear air, in an airway assigned to it by 24/7 air traffic control, safely separated from large objects with which it can collide, and largely safe from disruptions which necessitate sudden control inputs to avoid an immediate crash.

Cars don't operate in a comparable problem-space.


Proximity to other objects doesn't matter when your airspeed indicator isn't accurate and you can't get lift.

https://en.wikipedia.org/wiki/Air_France_Flight_447





