
This isn't really a problem in tool-assisted LLMs.

Use Google AI Studio with search grounding; it provides correct links and citations every time. Other companies have similar search modes, but you have to enable those settings if you want good results.
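For example, with Google's google-genai Python SDK (my assumption about tooling; in AI Studio itself this is just a toggle), turning on Google Search grounding looks roughly like the sketch below. The model name and prompt are placeholders, not anything from the comment above.

    # Rough sketch using the google-genai Python SDK (pip install google-genai).
    # Model name, API key, and prompt are placeholders.
    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Who won the 2024 Nobel Prize in Physics?",
        config=types.GenerateContentConfig(
            # This is the "search grounding" switch: the model can issue
            # Google Search queries and ties its claims to the results.
            tools=[types.Tool(google_search=types.GoogleSearch())],
        ),
    )

    print(response.text)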



Okay, but it's weird there is a "don't lie to me" button.


The "don't lie to me" button for a human is asking them, "where did you learn that fact?"

Grounding isn't very different from that.
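Concretely, a grounded response comes back with the sources the model leaned on, so you can check them the same way you'd check a human's answer. Reading them out might look like this, continuing from the `response` object in the sketch above; the field names follow my reading of the google-genai SDK's grounding metadata and should be treated as an assumption.

    # List the web sources the grounded answer was tied to.
    metadata = response.candidates[0].grounding_metadata
    if metadata and metadata.grounding_chunks:
        for chunk in metadata.grounding_chunks:
            print(chunk.web.title, chunk.web.uri)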


How would that ever work? All you can do is keep refining the quality of the datasets you train on. The hallucination rate on high-end models only trends downward as they improve in various ways.



