
> Tell that to Google...

Yeah, because Google's LLMs have a completely open question/answer space.

For a Kubernetes AI, for example, you can nowadays just feed in the whole Kubernetes docs plus a few reference Helm charts, tell it to stick close to the material, and you'll get next to no hallucinations. The same goes for simple data extraction tasks: in the past you couldn't use LLMs for those because they would hallucinate data into the output that wasn't in the input (e.g. completely mangling an ID), but nowadays that's essentially a non-issue.
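
A minimal sketch of the "stick close to the material" setup, assuming a generic chat-style client; call_llm is a hypothetical stand-in for whatever SDK you actually use, and the doc chunks would come from the Kubernetes docs / Helm charts you feed in:

    def call_llm(messages: list[dict], temperature: float = 0.0) -> str:
        # Hypothetical placeholder: replace with your provider's chat call.
        raise NotImplementedError

    def build_grounded_prompt(question: str, doc_chunks: list[str]) -> list[dict]:
        # Concatenate the reference material the model is allowed to draw on.
        context = "\n\n---\n\n".join(doc_chunks)
        system = (
            "You are a Kubernetes assistant. Answer ONLY from the reference "
            "material below. If the answer is not in the material, say "
            "'not covered in the provided docs' instead of guessing.\n\n"
            "REFERENCE MATERIAL:\n" + context
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ]

    def answer(question: str, doc_chunks: list[str]) -> str:
        messages = build_grounded_prompt(question, doc_chunks)
        # Temperature 0 keeps the model close to the supplied material.
        return call_llm(messages, temperature=0.0)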

As soon as you can restrict the space in which the LLM acts, you have plenty of options to tune it so that hallucinations are no longer a major issue.
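
For the extraction case above, one sketch of restricting the output space is to reject any extracted value that doesn't appear verbatim in the input, so a mangled ID never makes it downstream (field names here are made up for illustration):

    def validate_extraction(source_text: str, extracted: dict[str, str]) -> dict[str, str | None]:
        verified: dict[str, str | None] = {}
        for field, value in extracted.items():
            if value in source_text:
                verified[field] = value
            else:
                # Not copied verbatim from the input -> treat as a
                # hallucination and drop it (or flag for human review).
                verified[field] = None
        return verified

    doc = "Order ORD-12345 was shipped to customer CUST-9876 on 2024-05-01."
    print(validate_extraction(doc, {"order_id": "ORD-12345", "customer_id": "CUST-9867"}))
    # -> {'order_id': 'ORD-12345', 'customer_id': None}  (the mangled ID is caught)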


