You’re moving the goalposts. LLMs are marketed as superb reference tools and as sources of expertise on all things, not as mere “typical humans.” If they were presented accurately as being about as fallible as a typical human, users wouldn’t be nearly as trusting of them or as excited about using them, and they wouldn’t seem nearly as futuristic.