Right, but a problem with humans is that they tend to seriously overestimate their own capability, so we need to be statistics-led on this.
Given the choice between "One time in a million a human flies the aeroplane into the ground and everybody dies" and "One time in ten million the computer flies the aeroplane into the ground and everybody dies", we ought to be hard-headed and take the ten times fewer deaths. But we prefer to say "Ah, but _my_ pilots are better than average".
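To make the arithmetic concrete, here's a minimal sketch; the two failure rates come from the hypothetical above, and the flight count is just an assumed round number for illustration:

```python
# Illustrative only: rates from the hypothetical above,
# flight volume is an assumed round number, not real data.
human_rate = 1e-6       # one fatal error per million flights
computer_rate = 1e-7    # one fatal error per ten million flights
flights = 10_000_000    # assumed number of flights

human_crashes = human_rate * flights        # 10 expected fatal accidents
computer_crashes = computer_rate * flights  # 1 expected fatal accident

print(f"human: {human_crashes:.0f}, computer: {computer_crashes:.0f}")
# Ten times fewer deaths with the computer, regardless of how good
# "my" pilots feel they are.
```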
Now, the reality gets very complicated, because we ask computers and humans to fly in different conditions. A pilot will refuse to land an aeroplane in thick fog, while a Cat III autoland system is happy to try, because low visibility does not hinder it. On the other hand, when external circumstances are trying, the pilots may choose not to attempt an autoland at all. Despite this complexity, we do need to accept that sometimes a smaller risk of automation error is preferable to a larger risk of human error, even if _neither_ is perfect.
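A rough sketch of why that complicates the comparison: the risk of each option varies by condition, and in some conditions only one option is even on the table. The condition names and per-landing rates below are invented for illustration, not real certification figures:

```python
# Hypothetical per-landing fatal-accident rates by condition.
# All numbers are invented for illustration; None means that
# option is not attempted in that condition.
RATES = {
    # condition:  (human_rate, autoland_rate)
    "clear":      (1e-7, 1e-7),   # both do fine in good weather
    "thick_fog":  (None, 5e-7),   # pilot refuses; autoland will try
    "gusty":      (2e-7, None),   # crew elects to hand-fly instead
}

def safer_option(condition: str) -> str:
    """Pick whichever option carries the lower risk, where both exist."""
    human, autoland = RATES[condition]
    if human is None:
        return "autoland"
    if autoland is None:
        return "human"
    return "human" if human <= autoland else "autoland"

for condition in RATES:
    print(condition, "->", safer_option(condition))
```

The point of the sketch is that "which is safer?" has no single global answer; it has to be asked per condition, which is exactly why the headline statistics are hard to compare.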
We are not in disagreement. My reply was to "why not automate everything?" and indeed the answer is "because sometimes humans are better," not "do not automate anything" :)