Hacker News

> If someone wants to tell me that LLMs do something we should call reasoning or possess something we should call consciousness or experience themselves as subjects, then I'll be very interested in learning why they're singling out LLMs -- why the same isn't true of every program.

First, I would say that "reasoning" and "consciousness" can be different — certainly there are those of us who experience the world without showing much outward sign of reasoning about it. (Though who knows, perhaps they're all P-zombies and we never realised it.)

Conversely, a single neuron (or a spreadsheet) can implement "Bayesian reasoning". I want to say I don't seriously expect them to be conscious, but without knowing what you mean by "consciousness"… well, you say "experience themselves as subjects", but what does that even mean? If there's a feedback loop from output to input, which we see in LLMs in the behaviour of the context window, does that count? Or do we need to solve the problem of "what are qualia?" even to decide what a system needs in order to experience itself as a subject?
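To make the spreadsheet point concrete: a single Bayesian update is just one multiplication and one division, which a lone cell or unit can perform. A minimal sketch (the hypothesis, the probabilities, and the function name below are all illustrative, not from any real model):

```python
# A single "unit" performing a Bayesian update: posterior = likelihood * prior / marginal.
# All numbers here are made up for illustration.

def bayes_update(prior: float, likelihood: float, marginal: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / marginal

# Hypothesis H with prior 0.5; evidence E with P(E|H) = 0.8 and P(E|not-H) = 0.4.
prior = 0.5
likelihood = 0.8
marginal = likelihood * prior + 0.4 * (1 - prior)  # P(E) = 0.6 by total probability
posterior = bayes_update(prior, likelihood, marginal)
print(round(posterior, 4))  # 0.4 / 0.6 ≈ 0.6667
```

That this fits in three lines of arithmetic is exactly the point: "does Bayesian reasoning" is far too weak a criterion to single anything out as conscious.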

Second, the mirror of what you say here is: if we accept that some specific chemistry is capable of reasoning etc., why isn't this true of every chemical reaction?

My brain is a combination of many chemical reactions: some of those reactions keep the cells alive; given my relatives, some other reactions are probably building up unwanted plaques that will, if left unchecked, interfere with my ability to think in about 30-40 years' time; and a few are allowing signals to pass between neurons.

What makes neurons special? Life is based on the same atoms with the same interactions as the atoms found in non-living rocks. Do we need to have the same debate about rocks such as hornblende and lepidolite?


