
I think you need to re-read the paper.

The LLMs don't "reason" by any definition of the term. If they did, the Tower of Hanoi and the river-crossing problem would have been trivial for them to handle at any size, because the solutions are simple recursive procedures.
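For context, the whole Tower of Hanoi solution is a textbook recursion. A minimal Python sketch (peg labels are just illustrative) that handles any number of disks:

    # Classic recursive Tower of Hanoi: move n disks from source to target.
    def hanoi(n, source, target, spare):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)            # clear n-1 disks out of the way
        print(f"move disk {n}: {source} -> {target}")  # move the largest disk
        hanoi(n - 1, spare, target, source)            # stack the n-1 disks back on top

    hanoi(3, "A", "C", "B")  # prints the 7 moves for 3 disks

A system that could actually reason would apply this same tiny procedure regardless of how many disks the puzzle has.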

What the LLMs actually do is pattern-match against solved problems in their training set and copy those solutions. This leads to overthinking on very simple problems (they copy more of a training-set solution than is needed), works reasonably well on moderately complex ones like a basic Tower of Hanoi, and fails completely on problems that would require actual reasoning, because...they're just copying solutions.

The point of the paper is that what LLMs do is not reasoning, however much the AI industry may want to redefine the word to suit its commercial interests.


