
And what if you let a human expert fact-check the output of an LLM, provided you're transparent about the output (and the prompts that preceded it)?

Because I'd much rather ask an LLM about a topic I don't know much about and have a human expert verify its contents than waste that expert's time explaining the concept to me.

Once it's verified, I add it to my own documentation library so I can refer to it later.


