
Not really.

An LLM should have no problem replying "I don't know" if that's the statistically most likely answer to a given question, and if it hasn't been trained against giving that response.

What it fundamentally can't do is introspect and determine that it doesn't have enough information to answer the question. It always has an answer. (disclaimer: I don't know jack about the actual mechanics. It's possible something could be constructed which does have that ability and still be considered an "LLM". But the ones we have now can't do that.)
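To make that concrete, here's a toy sketch (Python, with made-up numbers; real models work over tokens and this isn't any actual model's API): generation is just sampling from a probability distribution over continuations, and "I don't know" is simply one more candidate whose probability depends on training. There's no separate step where the model checks whether it actually has the information.

    import math
    import random

    # Toy scores a model might assign to candidate continuations of a question.
    # The numbers are invented purely for illustration.
    logits = {
        "Paris": 4.2,
        "Lyon": 1.1,
        "I don't know": 0.3,  # just another candidate; training shifts its score up or down
    }

    def softmax(scores):
        exps = {k: math.exp(v) for k, v in scores.items()}
        total = sum(exps.values())
        return {k: v / total for k, v in exps.items()}

    probs = softmax(logits)
    answer = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs)
    print("model says:", answer)  # something always comes out; there's no "no answer" branch

So you always get some completion; whether it's "I don't know" depends on what training made likely, not on any self-assessment.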



