Let's suppose we build the Cosine room. The room is full of 10-year-olds who haven't yet taken trig. Each performs the function of a transistor and has no idea what they're doing beyond blindly executing the instructions they were given. None of the participants has the slightest clue what a cosine is. Yet the output of the room will still be cos(x). Thus I think it's fair to say that the room as a whole implements cos(x).
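A minimal sketch of the idea, in Python: cos(x) computed via its Taylor series, where each "kid" only ever adds or multiplies two numbers handed to them. The function names and structure here are my own illustration, not part of the original thought experiment.

```python
# Hypothetical sketch: cos(x) as a chain of "kids", each doing one blind
# arithmetic step from an instruction card, with no notion of what a cosine is.

def multiply_kid(a, b):
    # This kid only multiplies the two numbers they're handed.
    return a * b

def add_kid(a, b):
    # This kid only adds the two numbers they're handed.
    return a + b

def cosine_room(x, terms=10):
    # The "room": instruction cards route partial results between kids.
    # cos(x) = sum over n >= 0 of (-1)^n * x^(2n) / (2n)!
    total, term = 0.0, 1.0
    for n in range(terms):
        total = add_kid(total, term)
        # Next term: multiply by -x^2 / ((2n+1)(2n+2)), one blind step at a time.
        term = multiply_kid(term, -multiply_kid(x, x) / ((2 * n + 1) * (2 * n + 2)))
    return total

print(cosine_room(1.0))  # ~0.5403, matching math.cos(1.0)
```

No single step in there "knows" anything about cosines; the function only exists at the level of the whole room.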
In the same way, I don't see why we wouldn't conclude that the room is speaking Chinese. It doesn't matter how it manages to do so or what it's made of. If it quacks like a Chinese duck, then it's a Chinese duck.
I think Searle would agree with you that the room does in fact speak Chinese. His point is that neither the person, the machine, nor "the room" understands Chinese, at least in the usual sense of that word when it comes to understanding as implemented by humans.
>forget cells or atoms, none of your neurons or synapses understand chinese.
And yet some people seem to understand Chinese just fine. How can you explain that gap? It's only a stupid argument if you first assume that functionalism is true.
How do I explain the gap? Emergence, obviously. It's everywhere in nature: a single ant does not display the complexity/intelligence of its colony.
The argument is often used to disqualify machines from consciousness, but it's stupid because biological neurons don't understand the bigger picture any more than an artificial neuron sampled from a model does.
The entire Chinese Room argument, as applied to LLMs, falls apart really easily.
If you show a two-year-old kid an apple and you say "This is an apple", the kid now knows the thing, the apple, is an apple in his world model. It automatically inherits lots of properties; you can ask the kid and see in real time how he immediately begins to associate the physical object, now named in his internal language model, with other similar stuff he already knows: "Is this a plant?", "It falls from a tree like an orange", "It has skin like an orange", "Can you cook apples like you cook bananas?", and so on.
But this requires a physical representation of the apple. A kid's intelligence has an edge here: it can do the same thing with words alone. You can teach them the word "apple" and tell them "it's a fruit", and if they already have another fruit "tagged", like a banana, they will ask you almost immediately, "Is it tasty like bananas?" ("tasty" is code for sweet in children's language models around the planet).
Hence, the LLM could have an emergent property of actually knowing what every word it "says" means. If, as many have been inferring lately, LLMs also have a world model, it would be relatively easy to just "plug", say, 10 million words into their exact meanings, and even into their relative meanings depending on the context in which they're used.
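One way to picture the "relative meaning" idea is with word embeddings: vectors whose geometry puts "apple" closer to "banana" than to "car". The numbers below are made up purely for illustration; this is not how any particular model actually stores meaning.

```python
# Illustrative only: toy 3-d "embeddings" showing how relative meaning can
# live in vector geometry rather than in any single unit that "understands".
import math

toy_embeddings = {
    "apple":  [0.9, 0.8, 0.1],   # fruit-ish, sweet-ish
    "banana": [0.8, 0.9, 0.1],   # fruit-ish, sweet-ish
    "car":    [0.1, 0.0, 0.9],   # not fruit-ish at all
}

def cosine_similarity(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(toy_embeddings["apple"], toy_embeddings["banana"]))  # high
print(cosine_similarity(toy_embeddings["apple"], toy_embeddings["car"]))     # low
```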
And that may be precisely what we're seeing right now when we prompt ChatGPT: all the mathematical stuff, like "predicting the next word", is just some really, really low-level process inside the LLM, not much different from the electrical activity, observable by EEG, happening between neurons in the brain.
So if you look at the "EEG" of an LLM, the prediction process happening inside it, it probably won't tell you much, just like a casual glance at an EEG won't tell you much about what the person was thinking at the time it was captured.
Along these lines, it seems the growing consensus is less that AI is more conscious than previously thought, and more that human minds are less conscious than previously thought.