Integrated Information Theory of Consciousness (utm.edu)
61 points by lainon on July 24, 2017 | 17 comments



I'm a fan of Scott Aaronson's take on IIT: http://www.scottaaronson.com/blog/?p=1799


Relatedly, Max Tegmark tried to estimate the Phi values of some neural networks in his paper "Consciousness as a State of Matter": https://arxiv.org/pdf/1401.1219v2.pdf

And while he seems to take IIT pretty seriously, his conclusion sure seems like a refutation of the idea that IIT's definition of Phi means anything:

> Information stored in Hopfield neural networks is naturally error-corrected, but 10^11 neurons support only about 37 bits of integrated information. This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?
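
For intuition, here is a toy version of the kind of "integration" measure being argued about: mutual information across the weakest bipartition of a small binary system. This is not the full IIT algorithm (which works on cause-effect repertoires), and the two-node distribution at the end is made up purely for illustration.

    # Toy "integration" sketch: minimum mutual information over bipartitions.
    # Not the full IIT Phi; just an illustration of the quantity in question.
    import itertools
    import numpy as np

    def mutual_information(joint, part_a, part_b):
        # joint: dict mapping full state tuples -> probability
        def marginal(indices):
            m = {}
            for state, p in joint.items():
                key = tuple(state[i] for i in indices)
                m[key] = m.get(key, 0.0) + p
            return m
        pa, pb = marginal(part_a), marginal(part_b)
        mi = 0.0
        for state, p in joint.items():
            if p > 0:
                a = tuple(state[i] for i in part_a)
                b = tuple(state[i] for i in part_b)
                mi += p * np.log2(p / (pa[a] * pb[b]))
        return mi

    def toy_phi(joint, n):
        # minimum mutual information over all bipartitions of n nodes
        nodes = range(n)
        best = float("inf")
        for k in range(1, n // 2 + 1):
            for part_a in itertools.combinations(nodes, k):
                part_b = tuple(i for i in nodes if i not in part_a)
                best = min(best, mutual_information(joint, part_a, part_b))
        return best

    # two perfectly correlated binary nodes share exactly one bit across
    # the only bipartition, so the toy Phi is 1.0 bit
    print(toy_phi({(0, 0): 0.5, (1, 1): 0.5}, n=2))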


Amateur here, though I have read Tononi's book and some of his work: my suspicion is that the answer has to do with how those bits are situated. Environmental complexity (or how it co-varies with one's percepts) seems to lend a lot to the richness of conscious experience.


Ten years ago I was a big fan of IIT and Giulio Tononi. But today, I prefer the reinforcement learning paradigm. It's much more powerful. Instead of consciousness, we have agents that perceive the environment and act, in order to maximize rewards. Agents are also endowed with the power to simulate / imagine possible futures so they can plan and reason. An agent is something concrete; consciousness doesn't even have a definition. That is why I appreciate the RL paradigm: it brings concreteness to an almost metaphysical research topic.
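
To make that concrete, here is a minimal sketch of the loop I have in mind: tabular Q-learning against a generic environment with a Gym-style reset()/step() interface. The environment, its action list, and the hyperparameters are placeholders, not any particular system.

    import random
    from collections import defaultdict

    # Minimal tabular Q-learning sketch: the agent perceives a state, acts,
    # and updates its value estimates so as to maximize discounted reward.
    # `env` is assumed (hypothetically) to expose reset(), step(action) ->
    # (next_state, reward, done), and a list of discrete actions.
    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)                    # Q[(state, action)] -> value estimate
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:     # occasionally explore
                    action = random.choice(env.actions)
                else:                             # otherwise act greedily on current estimates
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                best_next = max(Q[(next_state, a)] for a in env.actions)
                # temporal-difference update toward reward + discounted future value
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q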


What you describe reminds me of Marvin Minsky's 1986 Society of Mind theory: https://en.wikipedia.org/wiki/Society_of_Mind

> A core tenet of Minsky's philosophy is that "minds are what brains do". The society of mind theory views the human mind and any other naturally evolved cognitive systems as a vast society of individually simple processes known as agents. These processes are the fundamental thinking entities from which minds are built, and together produce the many abilities we attribute to minds. The great power in viewing a mind as a society of agents, as opposed to the consequence of some basic principle or some simple formal system, is that different agents can be based on different types of processes with different purposes, ways of representing knowledge, and methods for producing results.

> This idea is perhaps best summarized by the following quote: "What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle." —Marvin Minsky, The Society of Mind, p. 308


I'm excited about the latest developments in AI too, but RL and consciousness are pretty extreme apples and oranges. I agree with Searle that it's a bit insane to wave away the role or even existence of consciousness ('metaphysical topic', etc.), especially in the context of developing a stronger AI. Just because we don't yet have a mapping of consciousness to types of structures, virtual or otherwise, that could be implemented in an AI system, doesn't mean we should pretend like we've moved beyond it.


Um, "agents that perceive the environment" sounds a lot like homunculi, which effectively just punts the Hard Problem downstream, but doesn't address it.

This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"

We don't have an answer, but it must be possible, because the brain does exactly that: individual neurons are unconscious electrochemical devices, yet they still add up to experiencing the redness of red.


> This is no different in essence than Searle's Chinese Room problem, which at its core asks "If the parts aren't conscious, how can the gestalt be?"

The answer to that question is “consciousness is a property of the interaction between the parts, not of the individual parts.” Or, alternatively, “consciousness is not a well-defined objective property, just a vague incoherent concept that has lots of emotional attachment, but which you can't analytically say is or is not present in any entity or aggregate.”

The Chinese Room is useless as anything other than an overly elaborate illustration that there isn't a useful, clear understanding of what “consciousness” means.


My take on the Chinese Room: it is a failed thought experiment. The CR differs from humans in embodiment: humans are agents in an external world, with certain limitations, such as the need for food and shelter and the avoidance of pain and injury, which a room doesn't have. Thus the CR can't learn the same value system as a human. The CR has nothing on the line; humans have to protect their lives.

By removing the world itself, the CR is limited in its growth. The world allows for exploration and the testing of hypotheses.

The CR can't self-reproduce; humans can, and reproduction brings a whole list of new constraints for humans that guide evolution. Genetic evolution is also a meta-learning algorithm that the CR lacks. Humans are born with a set of instinctive values which guide the evolution of the brain, like a program. The CR has no such initial values (reward channels), and more generally, the problem of learning in the CR is glossed over.

Searle should have compared humans with a frail robot that has to earn its electricity and raw materials to produce spare parts by its own endeavor, and be able to learn from and teach its knowledge to other robots. Such a robot might have a closer to human perspective on the world, being embodied and subject to limitations that force it to learn intelligent action.


The problem is not the differences between the Chinese Room and a human. The problem is how differently we perceive them. One is intuitively perceived as conscious, the other not so much. If you can't perceive something as conscious because you see all the moving parts, it surely isn't, right?

I see this as "what we can program is not a mind" taken to the extreme.


> "agents that perceive the environment" sounds a lot like homunculi, which effectively just punts the Hard Problem downstream, but doesn't address it.

Not really. I was referring to using regular multi-layer neural nets for perception, as they are commonly used today. Neural nets can "perceive" by detecting and locating objects in a scene (an image goes in, an object map comes out). The object map is then used in reinforcement learning to decide on actions.
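
Roughly this kind of pipeline, as a sketch (the layer sizes and the four-action output are arbitrary placeholders, not any specific architecture):

    import torch
    import torch.nn as nn

    # Sketch of the pipeline described above: a small convolutional net turns an
    # image into a coarse "object map", and a policy head turns that map into
    # action scores. Sizes are arbitrary; this is illustration, not a real system.
    class PerceiveThenAct(nn.Module):
        def __init__(self, n_actions=4):
            super().__init__()
            self.perception = nn.Sequential(            # image goes in...
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
            )                                           # ...object map comes out
            self.policy = nn.Sequential(                # object map in, action scores out
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(8 * 4 * 4, n_actions),
            )

        def forward(self, image):
            object_map = self.perception(image)
            return self.policy(object_map)

    # e.g. action = PerceiveThenAct()(torch.randn(1, 3, 64, 64)).argmax()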


That sounds like a different definition of "perception". The philosophical, psychological, and neuroscientific fields don't use the word that way. Outputting an object map wouldn't be described as the same thing as "perceiving" the object.


> Instead of consciousness, we have agents that perceive the environment and act, in order to maximize rewards

I don't believe "agents acting to maximize rewards" is a good description of any human I know, or, if it is, the reward function is certainly unknown.


> if it is, the reward function is certainly unknown

It is a collection of reward channels related to the functioning of the body (food, shelter), learning (curiosity), socializing (including physical touch), and physical integrity (avoiding harm). Even newborn babies like to be held and are curious about the objects around them; they are already learning to maximize rewards.
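
A toy way to picture it (the channel names and weights below are invented for illustration, not a claim about the actual human reward function):

    # Toy illustration of the "collection of reward channels" idea: several
    # innate drives combined into the single scalar reward an RL agent maximizes.
    def combined_reward(signals, weights=None):
        weights = weights or {"food": 1.0, "curiosity": 0.5, "social": 0.8, "harm": 2.0}
        return sum(weights.get(channel, 1.0) * value for channel, value in signals.items())

    # e.g. combined_reward({"food": 0.2, "curiosity": 0.7, "harm": -1.0})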


Isn't this explicitly not a theory of consciousness?


Here's the web page of the main research group developing this theory at the moment:

http://integratedinformationtheory.org/

Giulio Tononi's work is very interesting. I suggest anyone interested in sleep/consciousness research take a peek at what his group is doing.


I find it hard to believe that "consciousness" can exist in a non-neuronal (or at least non-biological) system, i.e., that phi can be greater than 0 outside of a nervous system. But IIT suggests it can, albeit a small amount, I guess because of back propagation. "If IIT is correct in placing such constraints upon artificial consciousness, deep convolutional networks such as GoogLeNet and advanced projects like Blue Brain may be unable to realize high levels of consciousness."

http://www.iep.utm.edu/int-info/#SH4c





