
This article is accurate. That's why I'm investigating a Bayesian symbolic Lisp reasoner. It's incapable of hallucinating, it produces auditable traces that are actual programs, and it kicks the crap out of LLMs at things like ARC-AGI, symbolic reasoning, logic programs, and game playing. I'm working on a paper showing that the same model can break 80 on ARC-AGI, beat the house by counting cards at blackjack, and solve complex mathematical word problems.
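To make "auditable traces that are actual programs" concrete, here is a minimal sketch of Bayesian program induction over a toy s-expression-style DSL. This is not the commenter's system: it's in Python rather than Lisp, and the op set, description-length prior, and noise model are made up purely for illustration. The point is that the output is a program you can read and re-run, not a sampled token stream.

    import itertools, math

    # Toy DSL: a "program" is a tuple of primitive op names applied left to right.
    OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

    def run(program, x):
        for op in program:
            x = OPS[op](x)
        return x

    def log_prior(program):
        # Description-length prior: each extra primitive costs log(len(OPS)),
        # so shorter programs are preferred a priori.
        return -len(program) * math.log(len(OPS))

    def log_likelihood(program, examples, noise=1e-6):
        # Near-deterministic likelihood: mismatches with the observed
        # input/output pairs are heavily penalized.
        ll = 0.0
        for x, y in examples:
            ll += math.log(1 - noise) if run(program, x) == y else math.log(noise)
        return ll

    def infer(examples, max_depth=3):
        # Enumerate candidate programs and return the maximum-posterior one.
        candidates = (p for d in range(1, max_depth + 1)
                      for p in itertools.product(OPS, repeat=d))
        return max(candidates, key=lambda p: log_prior(p) + log_likelihood(p, examples))

    if __name__ == "__main__":
        # Observations consistent with "double, then increment".
        best = infer([(1, 3), (2, 5), (10, 21)])
        print(best)          # ('dbl', 'inc') -- the trace is the program itself
        print(run(best, 7))  # 15

The returned candidate is simultaneously the answer and the audit trail: anyone can inspect the program, re-execute it on the examples, and check the prior/likelihood arithmetic by hand.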
LLMs are also incapable of "hallucinating", so maybe that isn't the buzzword you should be using.


