"The ChatGPT model is huge, but it’s not huge enough to retain every exact fact it’s encountered in its training set."
That's because there is no way for the model to take the internet and separate fact from fiction, or truth from falsehood. So it should not even try to, unless it can somehow weigh competing claims (or perform its own experiments). And that doesn't mean counting occurrences; it means building a coherent worldview, using it as a prior to interpret new information, and still acknowledging that it could be wrong.