I don't think LLMs are good enough yet to be confused by logical inconsistencies in the training data.