Ironically, if you agree with pmarreck above, scarblac's comment can be seen as an example of a human hallucinating with confidence, precisely what they were arguing is less likely to occur on the organic side of the internet.
That “if” is doing a lot of heavy lifting, though. And nobody here is actually talking about the hallucination rate.
How many times have innocent people been wrongly convicted? The Innocence Project found 375 instances in a 31-year period.
How often do LLMs give false info? Hope they never get used to write software for avionics, criminal justice, agriculture, or any other setting that could impact huge numbers of people…
I think this is overall a good criticism of the current generation of LLMs: they can't seem to tell you how sure (or not) they are about something. A friend mentioned to me that when they gave ChatGPT a photo of an Indiana Jones-themed Lego set earlier today and asked it to identify the movie reference, it meandered for two paragraphs weighing different possibilities when it could have just said "I'm not sure."
I think this is a valid area of improvement, and I think we'll get there.
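For what it's worth, most APIs already expose per-token log-probabilities, and a rough "how sure am I" signal can be built on top of them. Here's a minimal sketch in Python; the helper name, the threshold, and the choice of the geometric mean of token probabilities as a confidence proxy are my own assumptions, not how any particular model actually does it:

```python
import math
from typing import List

def answer_or_abstain(answer: str, token_logprobs: List[float],
                      threshold: float = 0.8) -> str:
    """Turn per-token log-probabilities into a rough confidence score
    and abstain when it falls below a threshold.

    token_logprobs: natural-log probabilities of each generated token,
    as returned by APIs that expose logprobs (hypothetical input here).
    """
    if not token_logprobs:
        return "I'm not sure."
    # Geometric mean of token probabilities, i.e. exp(mean of logprobs).
    confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
    if confidence < threshold:
        return f"I'm not sure (confidence ~{confidence:.2f})."
    return f"{answer} (confidence ~{confidence:.2f})"

# A confident answer vs. one the model should hedge on.
print(answer_or_abstain("Raiders of the Lost Ark", [-0.05, -0.02, -0.10]))
print(answer_or_abstain("Raiders of the Lost Ark", [-1.2, -0.9, -2.3]))
```

The harder, open part is whether a score like that is actually calibrated, i.e. whether "90% confident" answers are really right 90% of the time.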