An LLM should have no problem replying "I don't know" if that's the statistically most likely answer to a given question, and if it's not trained against such a response.
What it fundamentally can't do is introspect and determine it doesn't have enough information to answer the question. It always has an answer. (disclaimer: I don't know jack about the actual mechanics. It's possible something could be constructed which does have that ability and still be considered an "LLM". But the ones we have now can't do that.)
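A toy sketch of why that is (made-up vocabulary and logits, not any real model's): decoding just picks from the next-token distribution, so some answer always comes out, and "I don't know" only appears when training made that string the likeliest continuation, not because the model checked whether it actually knows.

```python
import numpy as np

# Hypothetical candidate continuations and logits for a question
# the model has no real information about (numbers are invented).
candidates = ["Paris", "Atlantis City", "I don't know", "42"]
logits = np.array([1.2, 2.0, 1.5, 0.3])

# Softmax always produces a valid probability distribution, so sampling
# (or argmax) always returns *some* answer. There is no step here that
# asks "do I have enough information?" before answering.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

answer = candidates[int(np.argmax(probs))]
print(answer)  # "Atlantis City" wins simply because it scored highest
```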