Treating hallucinations as errors that can be corrected fights against the nature of the technology; the claim that they can be fixed is more hype than reality. LLMs are designed to be bullshit generators, and that’s what they are; this is a fundamental limitation. (“Bullshit” is used here in the technical sense: not that the output is wrong, but that its truth value is meaningless to the generator.) Thankfully the hype cycle seems to be on the downslope. Think about the term “generative AI” and what the models are meant to do: generate plausible-sounding, somewhat creative text. They do that! Mission accomplished. If you think you can apply them outside that limited scope, the burden of proof is on you; skepticism is warranted.