
LLMs aren't logically reasoning through an axiomatic system. Any patterns of logic they demonstrate are just recreated from patterns in their input data. Effectively, they can't think new thoughts.


Do you think they sometimes hallucinate?

Do you think a collection of them can spot one another's hallucinations?

Do you think that, on occasion, some hallucinations will at least point in the direction of underexplored good ideas?
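To make the cross-checking question above concrete, here's a minimal, toy sketch: several independent "models" answer the same question, and any answer the majority rejects gets flagged as a potential hallucination. The model functions are hypothetical stubs standing in for real LLM calls, not any particular API.

    # Toy sketch: majority-vote cross-checking among several "models".
    # model_a/model_b/model_c are hypothetical stand-ins for real LLM calls.
    from collections import Counter

    def model_a(question: str) -> str:
        return "Paris"   # answers correctly

    def model_b(question: str) -> str:
        return "Paris"   # answers correctly

    def model_c(question: str) -> str:
        return "Lyon"    # hallucinates

    def cross_check(question: str, models) -> dict:
        """Collect each model's answer and flag those the majority rejects."""
        answers = {m.__name__: m(question) for m in models}
        consensus, _ = Counter(answers.values()).most_common(1)[0]
        flagged = {name: ans for name, ans in answers.items() if ans != consensus}
        return {"consensus": consensus, "flagged": flagged}

    if __name__ == "__main__":
        result = cross_check("What is the capital of France?",
                             [model_a, model_b, model_c])
        print(result)  # {'consensus': 'Paris', 'flagged': {'model_c': 'Lyon'}}

Majority voting is only one way to do this; the same loop works with a judge model reviewing each answer instead of a simple tally.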


> Effectively, they (LLMs) can't think new thoughts.

That's true only if you assume that combining existing thought patterns doesn't count as new thinking. If a pattern can't be learned from the training data, then indeed they would be stuck. However, the training data keeps growing and updating, allowing each updated version to learn more patterns.


The massive LLMs trained on web-scale data aren't reasoning through an axiomatic system. But some models are, in fact:

https://arxiv.org/abs/2407.07612



