
Ceci n'est pas une pipe. ("This is not a pipe.")

We don't know enough about minds to ask the right questions — there are 40 definitions of the word "consciousness".

So while we're definitely looking at a mimic (an actor pretending, a Clever Hans reacting to subtle clues we didn't realise we were giving off, something not as smart as it seems), we also have no idea whether LLMs are mere Cargo Cult golems pretending to be people, nor what to even look for to find out.



I don't think we need to know exactly what consciousness is or how to recognize it in order to make a strong case that LLMs don't have it. If someone wants to tell me that LLMs do something we should call reasoning or possess something we should call consciousness or experience themselves as subjects, then I'll be very interested in learning why they're singling out LLMs -- why the same isn't true of every program. LLMs aren't obviously a special, unique case. They run on the same hardware and use the same instruction sets as other programs. If we're going to debate whether they're conscious or capable of reasoning, we need to have the same debate about WinZip.


> If someone wants to tell me that LLMs do something we should call reasoning or possess something we should call consciousness or experience themselves as subjects, then I'll be very interested in learning why they're singling out LLMs -- why the same isn't true of every program.

First, I would say that "reasoning" and "consciousness" can be different — certainly there are those of us who experience the world without showing much outward sign of reasoning about it. (Though who knows, perhaps they're all P-zombies and we never realised it).

Conversely, a single neuron (or a spreadsheet) can implement "Bayesian reasoning". I want to say I don't seriously expect them to be conscious, but without knowing what you mean by "consciousness"… well, you say "experience themselves as subjects", but what does that even mean? If there's a feedback loop from output to input, which we see in LLMs with the behaviour of the context window, does that count? Or do we need to solve the problem of "what are qualia?" to even decide what a system needs in order to be able to experience itself as a subject?
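
To make that concrete, here's a toy sketch (my own illustration in Python; the function name and numbers are made up, not taken from anywhere) of the single-step Bayesian update a spreadsheet cell, or a crude model of one neuron, could carry out:

    # One Bayesian update: posterior from a prior and two likelihoods.
    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        numerator = p_evidence_given_h * prior
        denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
        return numerator / denominator

    # e.g. a 1% prior, with the evidence 9x likelier if the hypothesis is true:
    posterior = bayes_update(0.01, 0.9, 0.1)  # ~0.083

One multiply, one add, one divide: that's the whole "reasoning" step.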

Second, the mirror of what you say here is: if we accept that some specific chemistry is capable of reasoning etc., why isn't this true of every chemical reaction?

My brain is a combination of many chemical reactions: some of those reactions keep the cells alive; given my relatives, some other reactions are probably building up unwanted plaques that will, if left unchecked, interfere with my ability to think in about 30-40 years' time; and a few are allowing signals to pass between neurons.

What makes neurons special? Life is based on the same atoms with the same interactions as the atoms found in non-living rocks. Do we need to have the same debate about rocks such as hornblende and lepidolite?


Technically, any sufficiently self-reflective system could be conscious; its internal subjective experience might be a lot slower and quite different if that reflectivity operates on a slower time scale.


WinZip isn't about to drop a paragraph explaining to you what it might think it is, though. Following your logic, anything with an electrical circuit is potentially conscious.


adding cargo cult golem to my lexicon...


hahaha. That's rich. Goyim.



