
This highlights why having automated pilots in spaceships is so critical. Software can be tested more thoroughly than humans, and cannot override safety protocols.

If the rules say to abort, the software aborts. Humans get to say "I think we'll be fine", and put lives on the line.
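
A minimal sketch of that asymmetry, with invented names and limits (AbortLimits, Telemetry, and the thresholds are all hypothetical): the rule either fires or it doesn't, and there is no "I think we'll be fine" branch.

    from dataclasses import dataclass

    @dataclass
    class AbortLimits:
        max_dynamic_pressure: float  # Pa, hypothetical limit
        max_attitude_error: float    # degrees, hypothetical limit

    @dataclass
    class Telemetry:
        dynamic_pressure: float
        attitude_error: float

    def should_abort(t: Telemetry, limits: AbortLimits) -> bool:
        # A pure rule check: no judgment call, no override path.
        return (t.dynamic_pressure > limits.max_dynamic_pressure
                or t.attitude_error > limits.max_attitude_error)

    limits = AbortLimits(max_dynamic_pressure=35000.0, max_attitude_error=10.0)
    print(should_abort(Telemetry(36000.0, 2.0), limits))  # True: it aborts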



Automation is critical, but so is the ability to override software when it fails. Armstrong overrode the failing LEM computer to land safely. Of the three 737 MAX software-induced stab trim runaway incidents, only one crew followed the stab trim override instructions and saved their plane.

Pilots are never perfect, and neither is the software. Need both watching each other.
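
One shape that "watching each other" can take, sketched with invented names and thresholds: neither side silently wins, and a disagreement is annunciated rather than resolved automatically.

    def cross_check(auto_cmd_deg: float, pilot_cmd_deg: float,
                    tolerance_deg: float = 5.0) -> str:
        """Compare autopilot and pilot pitch commands (hypothetical units)."""
        if abs(auto_cmd_deg - pilot_cmd_deg) <= tolerance_deg:
            return "agree"
        # Surface the conflict instead of trusting either side outright.
        return "DISAGREE: annunciate and require explicit resolution"

    print(cross_check(2.0, 3.5))   # agree
    print(cross_check(2.0, 15.0))  # DISAGREE: ...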


Reading https://www.rmg.co.uk/stories/topics/apollo-11-moon-landing-..., I don’t see the computer failing.

Also, if he had aborted on that 1201 alarm, it seems they would have been safe too, but they wouldn’t have landed on the Moon.


> If the rules say to abort, the software aborts. Humans get to say "I think we'll be fine", and put lives on the line.

But logic errors and bugs in "thoroughly-tested" (subjective, determined by humans/corporations) software-controlled systems kill too. Interesting thought experiment: how many humans will die in 2031, across all industries, because of bad software?

https://www.bugsnag.com/blog/bug-day-race-condition-therac-2...
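
The linked write-up describes a race condition. Here is a deliberately simplified Python sketch of that failure shape, not the actual Therac-25 code (all names are invented): two related pieces of state are updated non-atomically, and a check that reads them mid-update fires inside the hazardous window.

    import threading, time

    beam_mode = "xray"       # high beam current; needs the spreading target in the path
    target_in_path = True    # consistent with "xray"

    def operator_retypes_mode():
        global beam_mode, target_in_path
        target_in_path = False   # the turntable starts moving immediately...
        time.sleep(0.001)        # ...but the mode/current update lags behind
        beam_mode = "electron"

    def fire_beam():
        # Non-atomic read of two related variables: the hazard window is
        # "xray" current with the spreading target already withdrawn.
        if beam_mode == "xray" and not target_in_path:
            print("HAZARD: x-ray beam current without the spreading target")

    t = threading.Thread(target=operator_retypes_mode)
    t.start()
    time.sleep(0.0005)       # fire mid-edit, inside the race window
    fire_beam()
    t.join()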


In this case the human pilots were right. The red light was mostly a bureaucratic instrument; crossing it meant trouble with the authorities over an arbitrary line, not a real safety risk to the mission. The spaceship and crew landed safely, and even the article points out that they were never in any real danger.


In contrast, humans can also say: "It's saying to abort because the sensor reading is incorrect; we'll be fine." It's tricky. In the airplane world, I think it is understood that the current gold standard is an extremely well trained, competent pilot coupled with an advanced cockpit. Where each one's responsibilities lie, and how well they work together, is where the rubber meets the road. The record is beyond question that increased automation has drastically reduced airplane fatalities, while there have also been many, many fatal accidents that were quite clearly cases of automation gone awry. Effectively putting the two together is a fascinating problem, and one I think we are still in the infancy of fully understanding, particularly in emerging domains like self-driving cars.


> Software can be tested more thoroughly than humans, and cannot override safety protocols.

Humans have had millions of years of testing and can be robustly relied on to take action to avoid dying.

I’m not saying humans are perfect, because obviously they are not. But software isn’t either.

And software and software testing are done by humans too, so the issue is not fallible human vs machine. Pilot vs automation is just comparing two different modes of human fallibility.


A random cosmic ray burst can't bit-flip a human brain. Software works best supervised in a coddled environment.
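
For intuition about what a single flipped bit does to program state, a small Python illustration (the altitude variable is hypothetical; bit positions refer to the IEEE 754 double layout):

    import struct

    def flip_bit(x: float, bit: int) -> float:
        """Return x with one bit of its 64-bit representation flipped."""
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return y

    altitude_m = 1500.0
    print(flip_bit(altitude_m, 52))  # lowest exponent bit: 750.0, silently halved
    print(flip_bit(altitude_m, 62))  # highest exponent bit: ~8e-306, nonsense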


Lots of things can more than bit-flip a human brain, though. Things like adrenaline, greed, hubris, etc.



