
The issue of hallucinations won't be solved with the RAG approach. It requires a fundamentally different architecture. These aren't my words but Yann LeCun's. You can easily see this yourself if you spend some time playing around. The autoregressive nature won't allow LLMs to build an internally consistent model before answering the question. We have approaches like Chain of Thought and others, but they are merely band-aids that only superficially address the issue.



If you build a complex Chain of Thought style agent and then train/finetune further by reinforcement learning with this architecture, then it is not a band-aid anymore; it is an integral part of the model, and the weights will optimize to make use of this CoT ability.


It's been 3.5 years since GPT-3 was released, and just over a year since ChatGPT was released to the public.

If it was possible to solve LLM hallucinations with simple Chain-of-Thought style agents, someone would have done that and released a product by now.

The fact that nobody has released such a product is pretty strong evidence that you can't fix hallucinations via Chain-of-Thought or Retrieval-Augmented Generation, or any other band-aid approaches.


I agree, but I just wanted to say that there are specific subdomains where you can mitigate some of these issues.

For example, generating json.

You can explicitly follow a defined grammar to get output that will always be valid JSON.
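To make that concrete, here's a minimal sketch of grammar-constrained decoding for a tiny flat-object JSON grammar. The "model" is just random.choice over whatever tokens the grammar allows at each step; in a real system you'd mask the LLM's logits the same way. The names and the toy vocabulary are illustrative, not any real library's API.

    import json
    import random

    KEYS = ['"name"', '"age"', '"score"']
    VALUES = ['"Ada"', '42', '3.14']

    def next_allowed(state):
        # tokens the grammar permits in the current state
        if state == 'start':        return ['{']
        if state == 'key_or_end':   return KEYS + ['}']
        if state == 'key':          return KEYS
        if state == 'colon':        return [':']
        if state == 'value':        return VALUES
        if state == 'comma_or_end': return [',', '}']
        return []

    def advance(state, tok):
        # grammar state machine: where we go after emitting tok
        if state == 'start':        return 'key_or_end'
        if state == 'key_or_end':   return 'done' if tok == '}' else 'colon'
        if state == 'key':          return 'colon'
        if state == 'colon':        return 'value'
        if state == 'value':        return 'comma_or_end'
        if state == 'comma_or_end': return 'done' if tok == '}' else 'key'
        return 'done'

    def generate():
        state, out = 'start', []
        while state != 'done':
            tok = random.choice(next_allowed(state))  # stand-in for masked LLM sampling
            out.append(tok)
            state = advance(state, tok)
        return ''.join(out)

    print(json.loads(generate()))  # always parses, no matter what the "model" picks

The point is that the structural guarantee comes from the mask, not from the model being right.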

Similarly, structured output such as code can be passed to other tools such as compilers, type checkers and test suites to ensure that, at a minimum, the output you selected clears some threshold of “isn’t total rubbish”.
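As a rough sketch of that gating step (the candidate snippet and the one-assertion "test suite" below are stand-ins for real model output and real tooling):

    import ast

    candidate = """
    def add(a, b):
        return a + b
    """

    def passes_checks(src: str) -> bool:
        try:
            ast.parse(src)                      # syntax check: the compiler front-end part
        except SyntaxError:
            return False
        namespace = {}
        exec(src, namespace)                    # only do this in a sandbox with real model output
        try:
            assert namespace["add"](2, 3) == 5  # a one-assertion "test suite"
            return True
        except (KeyError, AssertionError):
            return False

    print(passes_checks(candidate))             # True -> accept; False -> resample or reject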

For unstructured output this is a much harder problem, and bluntly, it doesn’t seem like there’s any kind of meaningful solution to it.

…but the current generation of LLMs is driven by probabilistic sampling functions.

Over the probability curve you’ll always get some rubbish, but if you sample many times and keep only structured, verifiable output you can, to a reasonable degree, mitigate the impact that hallucinations have.

Currently it’s computationally expensive to drive the chance of error down to a useful level, but compute scales.
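A minimal sketch of that sample-and-validate loop, with a placeholder sample() standing in for a real (and expensive) model call and validate() for whatever deterministic check you actually have:

    import random

    def sample() -> str:
        # placeholder "model": correct 30% of the time
        return "42" if random.random() < 0.3 else "not even wrong"

    def validate(answer: str) -> bool:
        return answer == "42"          # whatever deterministic check you actually have

    def best_of_n(n: int = 10):
        # if one sample passes with probability p, all n fail with probability (1 - p) ** n,
        # so the residual error rate drops geometrically with n, at the cost of n model calls
        for _ in range(n):
            candidate = sample()
            if validate(candidate):
                return candidate
        return None                    # escalate: more samples, a human, or an explicit refusal

    print(best_of_n())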

We may see some quite reasonable outputs from similar architectures wrapped in validation frameworks in the future, I guess.

…for a very specific subset of output types.


I agree that the "forcing valid json output" is super cool.

But it's unrelated to the problem of LLM hallucinations. A hallucination that's been validated as correct json is still a hallucination.

And if your problem space is simple enough that you can validate the output of an LLM well enough to prove it's free of hallucinations, then your problem space doesn't need an LLM to solve it.


> your problem space doesn’t need an LLM to solve it

Hmmm… that’s kind of an opinion, right?

I’m saying: in specific situations, you can validate the output and aggregate solutions based on deterministic criteria to mitigate hallucinations.

You can use statistical methods (eg. there’s a project out there that generates tests and uses “on average tests pass” as a validation criterion) to reduce the chance of a hallucinated output to a probability threshold that you’re prepared to accept… for certain types of problems.
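Roughly, that "on average tests pass" idea looks like this: score each candidate solution by the fraction of (also generated) tests it passes and keep only candidates above a threshold. The hard-coded candidates, tests and threshold below are stand-ins for model output, not that project's actual code.

    candidates = {
        "good":  lambda x: x * 2,
        "buggy": lambda x: x * x,
    }

    tests = [                      # imagine these were also model-generated
        lambda f: f(1) == 2,
        lambda f: f(3) == 6,
        lambda f: f(0) == 0,
    ]

    def pass_rate(fn) -> float:
        results = []
        for t in tests:
            try:
                results.append(bool(t(fn)))
            except Exception:
                results.append(False)
        return sum(results) / len(tests)

    scores = {name: pass_rate(fn) for name, fn in candidates.items()}
    print(scores)                                     # {'good': 1.0, 'buggy': 0.33...}
    print([n for n, s in scores.items() if s >= 0.8]) # keep only candidates above the threshold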

Whether the problem space is trivial or not… that’s your opinion, right?

It has no bearing on the correctness of what I said.

There’s no specific reason to expect that, just as you can validate output against a grammar to require structural correctness, you can’t also validate output against some logical criteria (eg. unit tests) to require output that is logically correct against those criteria.

It’s not particularly controversial.

Maybe the output isn’t perfectly correct if you don’t have good verification steps for your task, and maybe the effort required to build those validators is high. I’m just saying: it is possible.

I expect we’ll see more of this; for example, the approach in this article about decision trees (https://www.understandingai.org/p/how-to-think-about-the-ope...) requires no specific change in the architecture.

It’s just using validators to search the solution space.



