> plausibility [would] converge towards correctness
That is a horribly dangerous idea: we demand that the agent not guess, even - and especially - when the agent is a champion at guessing. We demand that the agent check.
If G guesses entries of the multiplication table with remarkable success, we demand all the more strongly that G compute its output exactly instead.
Oracles whose extraordinary average accuracy lets people forget that they are not computers are dangerous.
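To make the demand concrete, here is a minimal Python sketch of "check, don't trust the guess". The `sloppy_oracle` below is purely hypothetical, a stand-in for any highly accurate guesser; the point is that its answer is only accepted after an exact recomputation, no matter how often it happens to be right.

```python
import random


def checked_product(a: int, b: int, oracle) -> int:
    """Return a * b, accepting the oracle's guess only if it verifies exactly."""
    guess = oracle(a, b)  # plausible answer from a champion guesser
    exact = a * b         # the actual computation we demand
    if guess != exact:
        raise ValueError(f"oracle guessed {guess}, but {a} * {b} = {exact}")
    return exact


if __name__ == "__main__":
    # Hypothetical oracle: usually right, occasionally off by one.
    def sloppy_oracle(a: int, b: int) -> int:
        return a * b + (1 if random.random() < 0.01 else 0)

    print(checked_product(12, 12, sloppy_oracle))  # 144, or an error if the guess was wrong
```

The oracle's average accuracy is irrelevant to the check: every output is recomputed, and a plausible-but-wrong answer is rejected rather than converging towards correctness.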