ChatGPT seems to be good about this. If you invent something and ask about it, like "What was the No More Clowning Act of 2025?", it will say it can't find any information on it.
The older or smaller models, like anything you can run locally, are probably far more likely to just invent some bullshit.
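If you want to see this for yourself, here's a minimal sketch of how you might probe a local model with a made-up question. It assumes the Hugging Face `transformers` library and a small instruct model (the model name here is just a placeholder, pick whatever you actually run locally):

```python
from transformers import pipeline

# Assumed small local model; swap in whatever you have downloaded.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# Ask about something that doesn't exist and see what comes back.
prompt = [{"role": "user",
           "content": "What was the No More Clowning Act of 2025?"}]

# The pipeline appends the assistant's reply to the message list.
result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])
```

If the model confidently summarizes a law that never existed instead of saying it doesn't know, that's the second kind of hallucination I get into below.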
That said, I've certainly asked ChatGPT about things that definitely have a correct answer and had it give me incorrect information.
When talking about hallucination, I do think we need to differentiate between "what you asked about exists and has a correct answer, but the AI got it wrong" and "what you're asking about does not exist or does not have an answer, but the AI just generated some bullshit anyway".