Um, "agents that perceive the environment" sounds a lot like homunculi, which effectively just punts the Hard Problem downstream, but doesn't address it.

This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"

We don't have an answer, but it must be possible, because the brain manages it: individual neurons are unconscious electrochemical devices, yet they still add up to experiencing the redness of red.




> This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"

The answer to that question is “consciousness is a property of the interaction between the parts, not of the individual parts.” Or, alternatively, “consciousness is not a well-defined objective property, just a vague incoherent concept that has lots of emotional attachment, but which you can't analytically say is or is not present in any entity or aggregate.”

The Chinese Room is useless as anything other than an overly elaborate illustration that there isn't a useful, clear understanding of what “consciousness” means.


My take on the Chinese Room: it is a failed thought experiment. The CR differs from humans in embodiment: humans are agents in an external world, subject to constraints a room doesn't have, such as the need for food and shelter and the avoidance of pain and injury. Thus the CR can't learn the same value system as a human. The CR has nothing on the line; humans have to protect their lives.

By removing the world itself from the CR, Searle limits its growth: the world is what allows exploration and the testing of hypotheses.

The CR can't self-reproduce; humans can, and reproduction brings a whole set of new constraints that guide evolution. Genetic evolution is also a meta-learning algorithm that the CR lacks. Humans are born with a set of instinctive values which guide the evolution of the brain, like a program. The CR has no such initial values (reward channels), and more generally the problem of how the CR learns is glossed over.

Searle should have compared humans with a frail robot that has to earn its electricity and the raw materials for its spare parts through its own efforts, and that can learn from and teach its knowledge to other robots. Such a robot might have a perspective on the world closer to a human's, being embodied and subject to limitations that force it to learn intelligent action.


The problem is not the differences between the Chinese Room and a human. The problem is how differently we perceive them. One is intuitively perceived as conscious, the other not so much. If you can't perceive something as conscious because you can see all the moving parts, it surely isn't, right?

I see this as "what we can program is not a mind" taken to the extreme.


> "agents that perceive the environment" sounds a lot like homunculi, which effectively just punts the Hard Problem downstream, but doesn't address it.

Not really. I was referring to using regular multi-layer neural nets for perception, as they are commonly used today. Neural nets can "perceive" by detecting and locating objects in a scene (image goes in, object map goes out). The object map is then used in reinforcement learning to decide on actions.
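
To make that concrete, here is a rough sketch of the kind of pipeline I mean, written in PyTorch with made-up layer sizes and class/action counts (it isn't any particular published system): a perception net maps an image to an object map, and a policy net maps that object map to an action.

    import torch
    import torch.nn as nn

    class PerceptionNet(nn.Module):
        # Image in, coarse "object map" out (per-location class scores).
        def __init__(self, n_classes=5):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, n_classes, 1),
            )
        def forward(self, image):
            return self.conv(image)  # shape (B, n_classes, H, W)

    class PolicyNet(nn.Module):
        # Object map in, action scores out -- the "decide on actions" step.
        def __init__(self, n_classes=5, n_actions=4):
            super().__init__()
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(n_classes, n_actions),
            )
        def forward(self, object_map):
            return self.head(object_map)

    perceive, act = PerceptionNet(), PolicyNet()
    image = torch.randn(1, 3, 64, 64)        # dummy scene
    object_map = perceive(image)             # "perception"
    action = act(object_map).argmax(dim=-1)  # action chosen from the map

Nothing in that sketch claims to settle whether the mapping amounts to perception in the philosophical sense; it just shows what the two stages look like in practice.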


That sounds like a different definition of "perception". Philosophy, psychology, and neuroscience don't use the word that way: outputting an object map wouldn't be described as "perceiving" the object.



