
With respect to hallucinating, I've never read about training LLMs to say "I don't know" when they don't know. Is that even researched?



ChatGPT seems to be good about this. If you invent something and ask about it, like "What was the No More Clowning Act of 2025?", it will say it can't find any information on it.

The older or smaller models, like anything you can run locally, are probably far more likely to just invent some bullshit.

That said, I've certainly asked ChatGPT about things that definitely have a correct answer and had it give me incorrect information.

When talking about hallucinating, I do think we need to differentiate between "what you asked about exists and has a correct answer, but the AI got it wrong" and "what you're asking for does not exist or does not have an answer, but the AI just generated some bullshit".
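To make that distinction concrete, here's a toy eval sketch. ask_model and the abstain-marker list are made-up placeholders, not any particular model or API; the idea is just that the two failure modes need to be graded differently:

    # Case 1: real question with a known answer -> grade factual accuracy.
    # Case 2: invented premise -> the only "right" behaviour is abstaining.

    ABSTAIN_MARKERS = ("i don't know", "can't find any information", "no information")

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in a real API call or local model here.
        return "I can't find any information on that."

    def looks_like_abstention(answer: str) -> bool:
        return any(marker in answer.lower() for marker in ABSTAIN_MARKERS)

    # Case 1: a question that definitely has a correct answer.
    question, expected = "What year did Apollo 11 land on the Moon?", "1969"
    answer = ask_model(question)
    print("correct" if expected in answer else "factual error", "->", answer)

    # Case 2: an invented premise, like the made-up act above.
    answer = ask_model("What was the No More Clowning Act of 2025?")
    print("abstained" if looks_like_abstention(answer) else "fabricated", "->", answer)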


Not sure why you are downvoted. It's a difficult problem, but there are lots of angles on how to deal with it.

For example: https://arxiv.org/abs/2412.15176



