> This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"
The answer to that question is “consciousness is a property of the interaction between the parts, not of the individual parts.” Or, alternatively, “consciousness is not a well-defined objective property, just a vague incoherent concept that has lots of emotional attachment, but which you can't analytically say is or is not present in any entity or aggregate.”
The Chinese Room is useless as anything other than an overly elaborate illustration that there isn't a useful, clear understanding of what “consciousness” means.
My take on the Chinese Room: it is a failed thought experiment. The CR differs from humans in embodiment: humans are agents in an external world, subject to limitations such as the need for food and shelter and the avoidance of pain and injury, which a room doesn't have. Thus the CR can't learn the same value system as a human. The CR has nothing on the line; humans have to protect their lives.
By removing the world itself from the CR, Searle limits its growth: the world is what allows for exploration and the testing of hypotheses.
The CR can't self-reproduce; humans can, and reproduction brings a whole list of new constraints for humans that guide evolution. Genetic evolution is also a meta-learning algorithm that the CR lacks. Humans are born with a set of instinctive values which guide the development of the brain, like a program. The CR has no such initial values (reward channels), and more generally, the problem of how the CR learns is glossed over.
Searle should have compared humans with a frail robot that has to earn its electricity and the raw materials for its spare parts through its own efforts, and that can learn from other robots and teach them what it knows. Such a robot might have a perspective on the world closer to a human's, being embodied and subject to limitations that force it to learn intelligent action.
The problem is not the differences between the Chinese Room and a human. The problem is the different perceptions of them: one is intuitively perceived as conscious, the other not so much. If you can't perceive something as conscious because you can see all the moving parts, it surely isn't, right?
I see this as "what we can program is not a mind" taken to the extreme.