
OK, that is memory. I am talking about hallucination vs. human or even animal intent in an embodied, meaningful experience.




All you’re doing is calling the same thing hallucination when an LLM does it and memory when a human does it. You have provided no basis for claiming that the two are actually different.

Humans are better at noticing when their recollections are incorrect. But LLMs are quickly improving.


So when I tell you I like vanilla ice cream, am I just hallucinating and calling it a memory? And when ChatGPT says it likes vanilla ice cream, is it doing the same thing as me? Do I need to prove to you that these are different? Is it really baseless of me to insist otherwise? I have a body, millions of different receptors, a mouth with taste buds; I have a consciousness, a mind, a brain that interacts with the world directly. Is all of that just words on a screen to you, interchangeable with a word pattern matcher?

I’m not calling what you’re doing a hallucination. I’m saying that what an LLM does is in fact memory.

But it’s a memory based on what it’s trained on. Of course it doesn’t have a favorite ice cream. It’s not trained to have one. But that doesn’t mean it has no memory.

My argument is that humans have fallible memories too. Sometimes you say something wrong or that you don’t really mean. Then you might or might not notice you made a mistake.

The part LLMs don’t do great at is noticing the mistake. They have no filter and say whatever they’re thinking. They don’t run through thoughts in their head first and see if they make any sense.

Of course, that’s part of what companies are trying to fix with reasoning models. To give them the ability to think before they speak.


Can you just train one to have a favorite ice cream? Do you think training on a bunch of text that says "I like vanilla ice cream" is somehow equivalent to remembering the times you ate ice cream and concluding that your favorite is vanilla? Just because an LLM can recall things from its training data when prompted doesn’t make that the same as human memory, any more than a database is memory the way humans do it.



