> Human brains are unpredictable. Look around you.
As others have mentioned, we've had thousands of years to understand how humans can fail. LLMs are black boxes, and it never ceases to amaze me how they can fail in such unpredictable ways. Take the following as examples.
Humankind has developed all sorts of systems and processes to cope with the unpredictability of human beings: legal systems, organizational structures, separate branches of government, courts of law, police and military forces, organized markets, double-entry bookkeeping, auditing, security systems, anti-malware software, etc.
While individual human beings do trust some of the people they know, in the aggregate, society doesn't seem to trust human beings to behave reliably.
It's possible, though I don't know for sure, that we're going to need systems and processes to cope with the unpredictability of AI systems.
Human performance, broadly speaking, is the benchmark targeted by those training AI models. Humans are part of the conversation because human intelligence is the only kind these folks can conceive of.
But we haven't solved unpredictability for human beings either.