As far as I understand LLMs, what is being asked here is unfortunately close to impossible with an LLM.
I also find it disingenuous that apologists say things close to "you're using it wrong," while LLM-based AI is advertised as something that should be trusted more and more (because it is supposedly more accurate, by some arbitrary metrics) and might save some time (on some undescribed task).
Of course, in that use case most would say to use your own judgement to verify whatever is generated. But for the generation that uses LLM-based AI as a source of knowledge (the way some people use Wikipedia or Stack Overflow as a source of truth), verification will be difficult when LLM-generated content is all they have ever known as a source of knowledge.