
Treating hallucinations as an error that can be corrected fights against the nature of the technology and is more hype than reality. LLMs are designed to be bullshit generators, and that’s what they are; it is a fundamental limitation. (“Bullshit” here used in the technical sense: not that it’s wrong, but that the truth value of the output is meaningless to the generator.) Thankfully the hype cycle seems to be on the downslope. Think about the term “generative AI” and what the models are meant to do: generate plausible-sounding, somewhat creative text. They do that! Mission accomplished. If you think you can apply them outside that limited scope, the burden of proof is on you; skepticism is warranted.



Reducing LLM hallucinations is not a theory; it's a reality right now. In fact, developers do it all the time.

> the burden of proof is on you; skepticism is warranted.

I can prove it. You can test it yourself: after the LLM answers, say 'please double-check if that answer is true'. (A minimal code sketch of this pattern is below.)

There, I've proved it, right?

(I'm not saying it's perfect; I'm saying it can be improved. That alone makes it an engineering problem, just like any other engineering problem.)
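A minimal sketch of the double-check pattern described above, assuming the OpenAI Python SDK; the model name and question are illustrative placeholders, not part of the original comment, and this is only a second pass by the same model, not a guarantee of correctness.

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def ask_with_self_check(question: str, model: str = "gpt-4o-mini") -> str:
      """Ask a question, then feed the answer back and ask the model to double-check it."""
      messages = [{"role": "user", "content": question}]
      first = client.chat.completions.create(model=model, messages=messages)
      draft = first.choices[0].message.content

      # Second pass: the verification prompt suggested in the comment above.
      messages += [
          {"role": "assistant", "content": draft},
          {"role": "user", "content": "Please double-check if that answer is true."},
      ]
      second = client.chat.completions.create(model=model, messages=messages)
      return second.choices[0].message.content

  if __name__ == "__main__":
      print(ask_with_self_check("Who wrote the paper that introduced the transformer architecture?"))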



