
Ironically, if you agree with pmarreck above, scarblac's comment can be seen as an example of a human hallucinating with confidence, precisely what they argued is less likely to occur on the organic side of the internet.



That “if” is doing a good bit of lifting though. Nobody is talking about the hallucination rate.

How many times have innocent people been wrongly convicted? The Innocence Project found 375 instances in a 31-year period.

How often do LLMs give false info? Hope they never get used to write software for avionics, criminology, agriculture, or any other setting that could impact huge numbers of people…


Yeah, I was defin... perhaps guilty of sounding ver... somewhat confident myself.

Luckily I only said humans add some doubt to what they say some of the time :-)


I think this is overall a good criticism of the current generation of LLMs: they can't seem to tell you how sure (or not) they are about something. A friend mentioned to me that when they gave ChatGPT a photo of an Indiana Jones-themed Lego set earlier today and asked it to identify the movie reference, it meandered on for two paragraphs weighing different possibilities when it could have just said "I'm not sure."

I think this is a valid area for improvement, and I think we'll get there.
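
For what it's worth, you can already pull a crude uncertainty signal out of the API today. Here's a minimal sketch, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment; the model name is a placeholder, and the average token probability it reports is an uncalibrated heuristic, not real confidence:

    # Ask a question and report a rough confidence score alongside the
    # answer, derived from per-token log-probabilities.
    import math
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_with_confidence(question: str, model: str = "gpt-4o-mini"):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            logprobs=True,  # request per-token log-probabilities
        )
        choice = resp.choices[0]
        tokens = choice.logprobs.content or []
        if not tokens:
            return choice.message.content, float("nan")
        # Geometric mean of token probabilities over the whole reply.
        avg_logprob = sum(t.logprob for t in tokens) / len(tokens)
        return choice.message.content, math.exp(avg_logprob)

    text, score = answer_with_confidence(
        "Which Indiana Jones film features the Ark of the Covenant?"
    )
    print(f"confidence~{score:.2f}: {text}")

Low scores tend to correlate with the meandering answers described above, but it's only a heuristic; getting a model to actually say "I'm not sure" is the harder calibration problem.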



