John Searle: Consciousness in Artificial Intelligence [video] (youtube.com)
71 points by nolantait on Dec 16, 2015 | 59 comments



There isn't much new here. Skip ahead to the first audience question from Ray Kurzweil (http://www.youtube.com/watch?v=rHKwIYsPXLg&t=38m51s).

Kurzweil, in summary, asks "You say that a machine manipulating symbols can't have consciousness. Why is this different from consciousness arising from neurons manipulating neurotransmitter concentrations?" Searle gives a non-answer: "My dog has consciousness because I can look at it and conclude that it has consciousness."


Yeah honestly I don't get what he is really contributing (and I'm sort of an AI skeptic). In 2000 in undergrad, I recall checking out some of his books from the library because people said he was important, and I learned about the "Chinese Room" argument [1] in class.

How is it even an argument? It doesn't illuminate anything, and it's not even clever. It seems like the most facile wrong-headed stab at refutation, by begging the question. As far as I can tell, the argument is, "well you can make this room that manipulates symbols like a computer, and of course it's not conscious, so a computer can't be either"? There are so many problems with this argument I don't even know where to begin.

The fact that he appears to think that changing a "computer" to a "room" has persuasive power just makes it all the more antiquated. As if people can't understand the idea that computers "just" manipulate symbols? Changing it to a "room" adds nothing.

[1] http://plato.stanford.edu/entries/chinese-room/


I was never satisfied with the Chinese room thought experiment. Let's momentarily replace the thing in the Chinese room with a human, to parse Searle's notion of "understanding". Searle would argue that a human trained to emit meaningful Chinese characters would still lack understanding. But I think this is backwards and speaks to your identification of Searle begging the question: the only way a human could emit meaningful Chinese responses would be if it had an understanding of Chinese. Consequently, if a machine is outputting meaningful Chinese, it too must already understand Chinese, and any argument otherwise is kind of a pro-biology bigotry with shaky underlying logic at best.

This then devolves into semantics. Can a person locked in a room really come to "understand" Chinese culture, for example, if only non-experiential learning were used as data inputs? I think we have to say the answer is yes. I am a chemist. I have never seen an atomic orbital with my bare eyes, yet I can design chemical reactions that work with my understanding of chemistry. Because I have not experienced an atomic orbital, does that mean I do not understand? Even when I set up my first reaction, I did not have any experience, and knew what I was doing only through what could be described as sophisticated analogy. I would say my understanding was low, but it was certainly non-zero. Where does one draw the line?


I have always felt that the human in the room would start to recognize patterns and develop an "understanding". Their "understanding" may have no basis in reality but I don't see that it is any less valid to them.

If Searle is right then we should be able to perform an MRI on a blind person while they are talking to someone and spot the point where their brain switches into "symbol manipulation mode" when the conversation subject becomes something visual.


The guy in the room can memorize all of the rules and the ledgers and give you the same responses the room did, and if you asked him in his native language if he knew Chinese, he'd honestly tell you no.

He could have an entire conversation with you in Chinese, and only know that what you said merits the response he gave. He doesn't know if he's telling you directions to the bathroom, or how to perform brain surgery.


What about Latin? I learned Latin in a somewhat sterile environment, that in many ways is akin to symbol manipulation. I certainly never conversed with any native Latin speakers. Do I not understand Latin? Why or why not?


It's just obfuscation. In reality the 'room' would have to be the size of a planet and if one person was manipulating the symbols it might take the whole thing the life span of the universe to think 'Hello World'. But by phrasing it as a 'room' with one person in it he makes it look inadequate to the task, and therefore the task impossible.


Neither the size of the room nor the speed of the computation is important to Searle's argument. You could replace the person in the room with the population of India (except for those who understand Chinese), and pretend to the Chinese speaker that the communication is by surface mail. Or use a bank of supercomputers if Indians aren't fast enough.


Fair enough. In which case Searle's argument is that even fantastically complex, sophisticated information processing systems with billions of moving parts and vast information storage and retrieval resources operating over long periods of time cannot be intelligent. If that's what his position boils down to, what does casting it as a single person in a room add to the argument? As Kurzweil asked, how is that different from neurons manipulating neurotransmitter chemistry? Searle doesn't seem to have an answer to that.


No, his position, as I understand it, is that it cannot be conscious. It certainly can be intelligent.

Searle does try to explain why there's a difference. Although the person in the Chinese Room might be conscious of other things, he has no consciousness of understanding the Chinese text he's manipulating, and will readily verify this, and nothing else in the room is conscious. Chinese speakers are conscious of understanding Chinese.


I thought the idea was that the only part of the room actually doing things (the person) doesn't understand Chinese?

I mean, agree with it or not, but I think that's a bit stronger than just making it seem intuitively worse because it's a room instead of "a computer"?

I think the important part isn't the swap of "room" for "computer", but instead the swap of "person" for "cpu"?


Yeah, but the system would be the person + the lookup tables, not just the person. The problem is, we don't tend to say "does a room with a person in it and several books have this knowledge?" Relying on a system that doesn't tend to get grouped together (there's no term for the system human + book inside room), and having only one animate object (so that people think of the animate object as the system instead of the animate and inanimate objects), as well as asking the question only about the animate part of the system, all seem to suggest that the purpose of the thought experiment is to mislead people.

A better example would be saying something like - does this company have the knowledge to make a particular product? We can say that no individual member of the company does, but the company as a whole does.


I think this is called the "systems response".

Which, well there's a whole series of responses back and forth, with different ideas about what is or is not a good response.

One idea describes a machine where each state of the program is pre-computed, and the computer steps through the states one by one. In each state, if the next pre-computed state would be wrong (i.e. would not be the next step of the program, following from the current state), then, provided a certain switch is flipped, the machine computes the correct next state instead of using the wrong pre-computed one; if the switch is not flipped, it just continues along the pre-computed states. Whether the switch is flipped on or off, if all the pre-computed states are correct the same things happen, and the machine does not interact with the switch at all. If all the pre-computed states are nonsense, and the switch is flipped on, then it runs the program correctly, despite the pre-computed states being nonsense.

So, suppose that if the pre-computed states are all wrong, and the switch is on, that that counts as conscious. Then, if the pre-computed states are all correct, and the switch is on, would that still be conscious? What if almost all the pre-computed states were wrong, but a few were right? It doesn't seem like there is an obvious cutoff point between "all the pre-computed steps are wrong" and "all the pre-computed steps are right" where there would be a switch between what is conscious. So then, one might conclude that the one where all the pre-computed steps are right, and the switch is on, is just as conscious as the one which has the switch on but all the pre-computed states are wrong.

But then what of the one where all the pre-computed states are right, and the switch is off?

The switch does not interact with the rest of the stuff unless a pre-computed next step would be wrong, so how could it be that when the switch is on, the one with all the pre-computations is conscious, but when it is off, it isn't?

But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

If one grants consciousness to that, why not grant it to e.g. fictional characters that "communicate with" the reader?

One might come up with some sort of thing like, it depends on how it interacts with the world, and it doesn't make sense for it to have pre-computed steps if it is interacting with the world in new ways, that might be a way out. Or one could argue that it really does matter if the switch is flipped one way or the other, and when you flip it back and forth it switches between being actually conscious and being, basically, a p-zombie. And speaking of which you could say, "well what if the same thing is done with brain states being pre-computed?", etc. etc.

I think the Chinese Room problem, while not conclusive, is a useful introduction to these issues?


> But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

The states were (probably) produced by computing a conscious mind and recording the result.

Follow the improbability. The behavior has to come from somewhere. That somewhere is probably conscious.

Similarly, authors are conscious, so they know how conscious characters behave.


I don't think it actually brings up any relevant issues. For instance, you mention a p-zombie, but that's another one with glaringly obvious problems that are immediately evident. Do bacteria have consciousness? Or did consciousness arise later, with the first conscious creature surrounded by a community of p-zombies, including their parents, siblings, partners, etc.? Both possibilities seem pretty detached from reality.

Pre-computation is another one that seems to obfuscate the actual issue. No, I don't think anyone would think a computer simply reciting a pre-computed conversation had conscious thought going into it; but the same is true for a human being reciting a conversation they memorized (which wouldn't be that different from reading the conversation in a book). But that's a bit of a strawman, because no one is arguing that lookup table-type programs are conscious (you don't see anyone arguing that Siri is conscious). And the lookup table/precomputations for even a simple conversation would be impossibly large (run some numbers; it's most likely larger than the number of atoms in the universe for even tiny conversations).

So I don't see these arguments as bringing up anything useful. They seem more like colorful attempts to purposefully confuse the issue.


The person in the room performing the lookup is a red herring and can be replaced by a suitable algorithm, e.g. a convnet, which could learn the lookup task.

Consciousness resides in the minds that created the lookup tables: they were constructed by conscious beings to map queries to meaningful responses.

The lookup tables are the very sophisticated part of Searle's Chinese Room.

The recent emergent semantic vector algebra discovered in human languages by Mikolov's word2vec [Mikolov et al] demonstrates that some of the computational power of language is inherent in the language (but only meaningfully interpretable by a conscious listener).

Meaning requires Consciousness, but language is unexpectedly sophisticated and contains a semantic vector space which can answer simple queries ("What is the Capital of...") and analogise ("King is to What as Man is to Woman") algebraically. [Mikolov et al]

This inherent semantic vector space is discoverable by context-encoding large corpora.

Language is a very sophisticated and ancient tool that allows some reasoning to be performed by simple algebraic transformations inherent to the language.

-

[Mikolov et al]: Mikolov, Chen, Corrado & Dean, "Efficient Estimation of Word Representations in Vector Space", http://arxiv.org/abs/1301.3781

& https://code.google.com/p/word2vec/
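
For anyone who wants to see that analogy algebra concretely, here is a minimal Python sketch. It assumes the gensim library and a pretrained word2vec vector file; both of those are my assumptions, since the comment above only cites the original paper and code.

    # Sketch of word2vec analogy algebra (assumes gensim is installed and a
    # pretrained vector file is available; the file name below is hypothetical).
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)

    # "King is to What as Man is to Woman": king - man + woman ~ queen
    print(vectors.most_similar(positive=["king", "woman"],
                               negative=["man"], topn=1))

    # Capital-city queries work the same way: Paris - France + Poland ~ Warsaw
    print(vectors.most_similar(positive=["Paris", "Poland"],
                               negative=["France"], topn=1))

Whether the resulting nearest neighbours count as "meaning" is, of course, exactly the point under dispute; the algebra works whether or not anything conscious is watching.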


Yeah but it just makes me more confused? How does that say anything about a computer then? There's no human being who doesn't understand something inside a computer.


The idea is, roughly, if a person in the place of the cpu does not understand chinese, then the cpu doesn't understand chinese.

And because the cpu is the part that does stuff, like the person, then if the system w/ the person doesn't understand chinese, then the computer w/ the cpu doesn't understand chinese.

Because there's nothing to make the cpu more understand-y, only things to make it less understand-y, and otherwise the systems are the same.


The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which three words are printed: "I AM SAD". Now is there anyone who believes that this piece of paper is actually feeling sad just because "it says so"? Of course not. Now, suppose we replace this piece of paper with a small tablet computer that changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the tablet will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.


The Chinese room argument is actually needlessly convoluted. Just imagine a piece of paper on which I draw a face that looks sad. Now is there anyone who believes that this piece of paper is actually feeling sad just because it looks sad? Of course not. Now, suppose we replace this piece of paper with an organic machine made of cells, blood and neurons which changes its displayed "mood" over time according to some algorithm. Now in my opinion it is rather hard to imagine that all of a sudden consciousness will "arise" in the machine like some ethereal ghost and the organic machine will actually start experiencing the displayed emotion. Because it's basically still the same piece of paper.


AND YET, human brains are an implementation of such an algorithm. By all reasoning, we shouldn't be conscious.

Yet here I am, I am the one who is seeing what my eyes see and I am distinct from you. Science still has no idea how that happens, as far as I know.

So who knows, maybe all computer programs are in fact conscious in some way.


The Chinese Room argument has always seemed to me a powerful illustration of the problem that "consciousness" is so poorly defined as to not be subject to meaningful discussion, dressed up as a meaningful argument against AI consciousness.

It's always distressed me that some people take it seriously as an argument against AI consciousness; it does a better job of illustrating the incoherence of the fuzzy concept of consciousness on which the argument is based.


As a believer in Weak AI, the Chinese Room argument really gave me more understanding of my position. His argument is based on the concept that interpretation of symbols is not the same as the kind of understanding that we have. As an example, say that a person learns 1 + 1 = 2. Because that person understands the concept, he can then go apply it to other situations and figure out that 1 + 2 = 3. Whereas the Chinese room is just interpreting symbols: when the computer is asked the question "what is 1 + 1?" it can answer "2" via a lookup table, but the person inside the room has gained no understanding of the actual question, so he can't then use that knowledge in different circumstances and know, without looking it up, that 1 + 2 = 3.

The Chinese Room argument is that because computers can't "learn", everything has to be taught to them directly, whereas humans are able to take knowledge given and apply it to other situations. While some computers can "learn" enough rules to follow patterns, the argument is that computers can't "jump the track" and that humans can.


Yeah I see that, but the problem is that we don't know how humans are conscious, i.e. where meaning rises. If you believe that brains are just atoms, then meaning arises from "dumb physics" somewhere.

Another way to think of it is: a fetus, or a sperm, or an ovum is not conscious. Some might argue that a newborn isn't really conscious. Somewhere along the line it becomes conscious. How does that happen? Where is the line? We have no idea.

You can't assert that meaning can't arise from "dumb symbol manipulation" without understanding how meaning arises in the former case. We simply don't know enough to make any sort of judgement. The Chinese room argument is trying to make something out of nothing. We don't know.


I've always thought that the Chinese Room proved just the opposite of what Searle thinks it does.

I think of it this way:

I have two rooms: one has a person who doesn't speak Chinese in it, but they have reference books that allow them to translate incoming papers into Chinese perfectly.

The second room just has someone who speaks Chinese, and can translate anything coming in perfectly.

Searle says that AIs are like the person in room one: they don't know Chinese.

I would argue that is the wrong way to look at things. A better comparison is that an AI is like the system of room 1, which does know Chinese, and from observation is indistinguishable from the system of room 2. What's going on inside (a human with Chinese reference books vs a human who knows Chinese) doesn't matter, it's just internal processing.

If it walks like a duck and quacks like a duck, then it's a duck.

If a machine claims to be conscious, and I can't tell it apart from another conscious being, who am I to say it isn't conscious?


I think we are missing the gist of the Chinese room argument here.

The correct question to ask: how is a machine manipulating symbols (that someone says is conscious) different from any other complex physical system? Is New York City's complex sewer system conscious? What about the entire world's sewer & plumbing system?

Does a machine have to compute some special function to be conscious? Does the speed of computation matter? If so, who measures the speed? (Let us not bring in general relativity, as speed of computation can be different for different observers.)

Kurzweil et al's definition of consciousness is exactly as silly as Searle saying "My dog has consciousness because I can look at it and conclude that it has consciousness."


Those are good questions, but I don't see how the Chinese room argument helps with any of them. If anything, it confuses things by dragging a fictional/impossible construct into the argument, rather than just using real examples that people actually understand.


I thought the point was to construct an example that most people agree wouldn't count as consciousness, then ask how a computer is any different.


Maybe, but there are too many holes in the construct to make it useful IMO. It just provokes endless confused debate rather than illuminating anything.

IMO Kurzweil's response is actually spot on, although not that hard to come up with. You could make the argument: in your brain, there are just atoms following the laws of physics. The atoms have no choice in the matter, and know nothing about Chinese, or hunger, love, life goals, etc. Your brain is entirely composed of atoms so you can't be conscious.

Obviously "meaning" arises from mindless syntax or mindless physics somewhere in the process. We just don't know where. The Chinese room doesn't bring us any closer to that understanding, and doesn't refute anything.


Why can't meaning just be the sensation of a local minimum? When you find meaning, you temporarily pause because there's nowhere else to go in the local environment. Subsequently of course, you might be jolted out of that and be compelled to find a new minimum.


How does that explain consciousness, qualia et cetera?


I think that there's a fundamental cognitive-shift style problem with Searle's argument, because I remember encountering it when I was in my tweens and wondering why anyone thought there was any 'there' there.

I think that -- from memory -- this is approximately what Searle believes himself to be saying:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually just this guy in the other room, he doesn't speak read or write Chinese at all. He just has a whole bunch of books and ledgers that contain rules and recorded values that let him transform any input sentence in Chinese into an output sentence in Chinese that is completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers. There's no way. Since the guy doesn't understand Chinese, he isn't "conscious" in the sense of this example. So we can't describe him as conscious. And the idea that the books are conscious is ridiculous, because they're just information without the guy. So there actually can't be any consciousness there, even though it seems like it. Since consciousness can't be simulated by some books, it's clear that we're just interacting with the illusion of consciousness.

Meanwhile, this is what people like myself hear when he tries to make that argument:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually a system made up of books and ledgers of rules and values in the other room. There's this guy there who doesn't read or write Chinese; he just takes your input sentence in Chinese and applies the rules, noting down values as needed, transforming it until the rules say to send it back to you. That's the sentence that you get back. It's completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers, but we're doing it for the sake of argument because it's a metaphor that we can hold in our heads. No one would claim that the guy following the rules in the other room is the "conscious" entity that we believe ourselves to be communicating with. And no-one would claim that static information itself is conscious. So either the "conscious" entity must be the system of rules and information as applied and transformed, or else there is no conscious entity involved. If there is no conscious entity involved, and since this is a metaphor, we can substitute "books and ledgers" with "individual nerve cells with synaptic connections" and "potentials of activation", and the conclusion will still hold true; there will still be no consciousness there.

4. However, we feel that there is a consciousness there when we interact with a system of individual nerve cells, synaptically connected with various thresholds of potentiation: even if it's a system smaller by an order of magnitude or so* than the one in our skulls, like our dog has. Thus we must conclude that the "conscious" entity must be the system of rules and information as applied and transformed, or we must conclude that our notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

*I am fudging on the figure, but essentially we're comparing melon volumes to walnut volumes, as dogs have thick little noggins.


That's what's weird about Searle. He posited a great strawman that exposes the fallacy of taking "machines can't think" as an axiom, but he claims the straw man is a steel man. It is as though he is a sacrificial troll, making himself look silly to encourage everyone who can to reject this straw man.


> [...] we must conclude that our notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

I think I mostly agree with you, but I would argue that if your notion of consciousness is ill-founded and inarticulate, you can't really decide whether it's an illusion either. After all, the subjective experience quite definitely does happen/is real, thus obviously not an illusion, while the interpretation offered for that subjective experience is incoherent, thus there is no way to decide whether it's describing an illusion or not.


Interesting. It's also unclear to me why a system of books and ledgers of rules couldn't be conscious if they are self-modifying. Who knows what property of the system inside our heads gives it this sense of "self", and how could you even test that it has one?


The Chinese room argument is a nice thought experiment that strips away irrelevant details. For example, if you were to use a real humanoid robot (e.g. the Nao) in the argument, people would probably not get the argument and be confused because the robot looks fuzzy and cute.


So basically, the way to convince Searle (not that that is a real goal) is to build a robot automaton which passes the uncanny valley: very responsive eyes. A collection of tricks. Clever responses.

Searle would look at that and conclude it had consciousness.


I think Searle's mostly correct and Kurzweil's completely wrong on this. It took me a long time to understand Searle's argument, because Searle conflates consciousness and intelligence and this confuses matters. Understanding Chinese is a difficult problem requiring intelligence, but I don't think it requires consciousness.

It is important to distinguish between "understanding Chinese" and "knowing what it's like to understand Chinese". We immediately have a problem: knowing what it's like to understand Chinese involves various qualia, none of which is unique to Chinese speakers.

So I'll simplify the argument. Instead of having a room with a book containing rules about Chinese, and a person inside who doesn't know Chinese, we have a room, with some coloured filters, and a person who can't see any colours at all (i.e. who has achromatopsia). Such people (e.g. http://www.achromatopsia.info/knut-nordby-achromatopsia-p/) will confirm they have no idea what it's like to see colours. If you shove a sheet of coloured paper under the door, the person in the room will place the different filters on top of the sheet in turn, and by seeing how dark the paper then looks, be able to determine its colour, which he'll write on the paper, and pass it back to the person outside. The person outside thinks the person inside can distinguish colours, but the person inside will confirm that not only can he not, but he doesn't even know what it's like. Nothing else in the room is obviously conscious.

A propos of the dog, this is the other minds problem. It's entirely possible that I'm the only conscious being in the universe and everyone else (and their pets) are zombies. But we think that people, dogs, etc. are conscious because they are similar to us in important ways. Kurzweil presumably considers computers to be conscious too. Computers can be intelligent, and maybe in a few years or decades will be able to pass themselves off over the Internet as Chinese speakers, but there's no reason to believe computers have qualia (i.e. know what anything is like), and given the above argument, every reason to believe that they don't.


This is basically just the Hard Problem of consciousness. It's been a hard problem for decades, and we're no closer to having an answer.

>But we think that people, dogs, etc. are conscious because they are similar to us in important ways.

Specifically, mammals have mirror neurones. More complex mammals also seem to have common hard-wired links between emotions and facial expressions - so emotional expression is somewhat recognisable across species.

I'm finding the AI debates vastly frustrating. There are basic features of being a sentient mammal - like having a body with a complicated sensory net, and an endocrine system with goal/avoidance sensations and emotions, and awareness of social hierarchy and other forms of bonding - that are being ignored in superficial arguments about paperclip factories.

It's possible that a lot of what we experience as consciousness happens at all of those levels. The ability to write code or find patterns or play chess floats along on top, often in a very distracted way.

So the idea that an abstract symbol processing machine can be conscious in any way we understand seems wrong-headed. Perhaps recognisable consciousness is more likely to appear on top of a system that models the senses, emotions, and social awareness first, topped by a symbolic abstraction layer that includes a self-model to "experience" those lower levels, recursively.


This might be very tangential, but I had a very acute sensation of learning something new the other day. There are these stereographic images, where they put the left and right eye's intended image next to each other -- and with a little practice, you can angle your eyes so each eye looks at a separate picture. Suddenly your double vision starts to make MORE sense than regular vision, because you're now able to combine the two images to get a sense of depth in the image. The tricky part is now to focus your eyes; at first, you'll reflexively correct your eyes too and the illusion (or the combination rather) goes away.

But bit by bit, you learn to control your eye's focal length independent of "where" in space you want to look. It really is astonishing.

It made me think of consciousness as a measure of ability to integrate information, because this process is truly fascinating to anybody who tries it (and I really think you should!) Perhaps that's because with this trick, you were able to integrate more information, and thus tickle your brain more?


> conflates consciousness and intelligence and this confuses matters

I think this is an excellent point. I like your example with colors, which shows that there is a difference between seeing (i.e. experiencing) colors and producing symbols which give the impression that an entity can see colors.

I don't follow any argument that proposed that computers can be conscious but other machines (e.g. car engines) cannot. In the end, symbols don't really exist in physical reality - all that exists is physical 'stuff' - atoms, electrons, photons etc. interacting with each other. So how can we say that one ball of stuff is conscious but another is not? And why isn't all of the stuff together also conscious? Why not just admit we don't know yet?

Consciousness may be hard to define, but let's take something simpler - experience, or even more specifically - pain. I can feel pain. While I can't be 100% sure, I believe other humans feel pain as well. However I don't believe my laptop has the capacity to feel pain, irrespective of how many times and in how many languages it can say 'I feel pain'.

Perhaps the ability to experience is the defining characteristic of consciousness?


I disagree completely. After time the color filter will start to associate various concepts and feelings and images with various colors. This association is what starts making the colors themselves have meaning even if they can't see the colors the same way that you and I can. There's no way to prove that we all see colors the same way anyway, but that doesn't mean that we don't believe that we're conscious. I see that you're saying we cannot make any claims about others, perhaps, but can only talk about how we feel; but I feel like the room example is actually misleading in this respect. Another way of thinking about it is that our brain starts to associate things, and it is those clusters of associations that give those things meaning. The experience of color is only important because color has a web of other associated experiences that those colors remind us of. So extend the room experiment to the experience of a baby who throughout its entire life sees colors, or the filtered-image version of those colors, at various moments and associates them with various things. In this example we can imagine that the baby will in fact associate, let's say, blue with, I don't know, this great unknown half of our outside ceiling that we see during the day. And then that will take on something more to it, but it is admittedly difficult to explain.


> After time the color filter will start to associate various concepts and feelings and images with various colors. This association is what starts making the colors themselves have meaning even if they can't see the colors the same way that you and I can.

The filters are just pieces of transparent coloured plastic. How are they capable of forming associations?

Also, associations on their own (e.g. blue with sky, red with blood, green with grass) don't give you any idea what colours are like. Knut Nordby (and many other people with achromatopsia) knew these associations as well as you or I know them, but made it quite clear that he had no idea what it was like to see in colour.


> The experience of color is only important because color has a web of other associated experiences that those colors remind us of

So what about those original experiences? How are they important at all if there is nothing to associate them with?


I can only recommend reading this paper: http://www.scottaaronson.com/papers/philos.pdf

It really lives up to its title. Suddenly computational complexity is not just a highly technical CS matter anymore, and the Chinese Room paradox is explained away successfully, at least for me.


Searle makes two assertions:

1) Syntax without semantics is not understanding.

2) Simulation is not duplication.

Claim 1 is a criticism of old-style Symbolic AI that was in fashion when he first formulated his argument. This is obviously right, but we're already moving past this. For example, word2vec or the recent progress in generating image descriptions with neural nets. The semantic associations are not nearly as complex as those of a human child, but we're past the point of just manipulating empty symbols.

Claim 2 is an assertion about the hard problem of consciousness. In other words, about what kinds of information processing systems would have subjective conscious experiences. No one actually has an answer for this yet, just intuitions. I can't really see why a physical instantiation of a certain process in meat should be different from a mathematically equivalent instantiation on a Turing machine. He has a different intuition. But neither one of us can prove anything, so there's nothing else to say.


I think Claim 1 is actually more about determinism: that if, by knowing all the inputs, you can reliably get the same outputs, then what you have isn't consciousness.

Neural nets are somewhat starting to escape that dynamic, but there still isn't a neural net that reliably pulls in a continuous stream of randomness to generate meaningful behaviour like our consciousness does.

Now to be honest; I'm not entirely sure if John Searle would agree that that is consciousness when we do get there but I do agree with him that deterministic consciousness is essentially a contradictio in terminis.


I wouldn't be so critical of GOFAI. Much high-level reasoning either does or can involve symbol manipulation. There are some impressive systems, such as Cyc, which do precisely that. It isn't useful for low-level tasks like vision or walking, so other approaches are needed to complement it.

> but we're past the point of just manipulating empty symbols.

We've now reached the point where we can manipulate large matrices containing floating point numbers. I don't see how this makes systems any more conscious.


Regards claim 2, Searle repeats the phrase "specific causal properties of the brain" quite a few times without spelling out just what he's referring to, but from other remarks he makes it seems clear he means actual electrochemical interactions, rather than generic information processing capabilities. I think his view is that consciousness (most likely) doesn't arise out of "information processing", which he would probably class as "observer-relative", but out of some as yet not understood chemistry/physics which takes place in actual physical brains.

So the question, to Searle, is not "about what kinds of information processing systems would have subjective conscious experiences", but "what kinds of electrochemical interactions would cause conscious experiences".

The intuition/assumption of his questioners seems to be that whatever electrochemical interactions are relevant for consciousness, they are relevant only in virtue of their being a physical implementation of some computational features, but plainly he does not share this assumption and favours the possibility that the electrochemical interactions are relevant because they physically (I think he'd have to say) produce subjective experience - and that any computational features we attribute to them are most likely orthogonal to this. Hence his example of the uselessness of feeding an actual physical pizza to a computer simulation of digestion. His point is that the biochemistry (he assumes) required for consciousness isn't present in a computer any more than that required for digestion is.

Another example might be: you wouldn't expect a compass needle to be affected by a computer simulating electron spin in an assemblage of atoms exhibiting ferromagnetism any more than it would be by a simulation of a non-ferromagnetic assemblage.

To someone making the assumption that computation is fundamental for explanations of consciousness, these examples seem to entirely miss the point, because it's not the physical properties of the implementation (the actual goings on in the CPU and whatnot) that matter, but the information processing features of the model that are the relevant causal properties (for them.)

But to Searle, I think, these people are just failing to grok his position, because they don't seem to even understand that he's saying the physical goings on are primary. You can almost hear the mental "WHOOSH!" as he sees his argument pass over their heads. In an observer-relative way, of course.

As you imply, until someone can show at least a working theory of how either information processing or biochemistry can cause subjective experience the jury will be out and the arguments can continue. I won't be surprised if it takes a long time.

(Edited to add the magnetic example and subsequent 2 paragraphs.)


The systems response is pretty much the right answer. You can put yourself at any level of reductionism of a complex system and ask how in the hell the system accomplishes anything. If you imagine yourself running a simulation of physics on paper for the universe, you may ask yourself: how does this simulation create jellyfish?

I think people fall for Searle's argument the same way people fall for creationist arguments that make evolution seem absurd. Complex systems that evolve over long periods of time have enormous logical depth and exhibit emergent properties that really can't be computed analytically, but only by running the simulation and observing macroscopic patterns.

If I run a cellular automaton that computes the sound wave frequencies of a symphony playing one of Mozart's compositions, and it takes trillions of steps before even the first second of sound is output, you can rightly ask, at any state, how is this thing creating music?


Consciousness and understanding are human created symbolism. Talking about it seriously is a waste of time.

I could be an empty shell imitating a human perfectly; all other humans would buy the imitation despite my lack of consciousness, and nothing would be different: from their perspective I exist, from mine, I don't.

How does one know that I really understand something? Maybe I can answer all the questions to convince them?


It's pretty frustrating to watch. Feels like an endless repetition of "well humans and dogs are conscious because that's self evident". There's no sufficient demarcation criterion other than "I know it when I see it" that he seems to apply. [I guess having a semantics is his criterion but he doesn't elaborate on a criterion for that]

The audience question about intelligent design summed up my frustration nicely (or rather the amoeba evolving part of it).


I think what it boils down to is that Searle believes consciousness is a real thing that exists in the universe. A simulation of a thing isn't the same as the thing itself, no matter how accurate the outputs. The Chinese Room argument just amplifies that intuition (my guess is that the idea of a room was inspired by the Turing Test).

I think studying the brain (as opposed to philosophical arguments) is the thing that will eventually answer these kinds of questions, though.


I think the argument about consciousness is vacuous. Searle admits we might create an AI which acts 100% like a human in every way.

Nothing Searle says stands in the way of creating intelligent or super-intelligent entities. All Searle is saying is those entities won't be conscious.

No one can prove this claim today. But more significantly, I think it's extremely likely no one will ever prove the claim. Consciousness is a private subjective experience. I think it's likely you simply cannot prove it exists or doesn't exist.

Mankind will create human-level robots and we'll watch them think and create and love and cry, and we'll simply not know what their conscious experience is.

Even if we did prove it one way or the other, the popular opinion would be unaffected.

Some big chunk of people will insist robots are conscious entities who feel pain and have rights. And some big chunk of people will insist they are not conscious.

It might be our final big debate. An abstruse proof is not going to change anyone's mind. Look at how social policies are debated today. Proof is not a factor.


So, supposing there's any chance that it has consciousness, is there any sort of movement doing all it can to put the brakes on AI research? If it's true, it's literally the precursor to the worst realistic (or hypothetical, really) outcome I can fathom, which has been discussed before on HN (simulated hell, etc). I'm not sure why more people aren't concerned about it. Or is it just that there's "no way to stop progress" as they say, and this is just something we're going to learn to live with, the way we live with, say, the mistreatment of animals?


We are sufficiently far away from creating machines that humans would consider to have consciousness that it's not really a problem so far. Eventually we'll probably have to think about robot rights, but I guess we still have a few decades until they're sufficiently advanced. But judging from how we treat, eg. great apes, who are so very similar to us, I wouldn't want to be a robot capable of suffering.


I'd think that if there are people forward thinking enough to consider the consequences to humans (Elon Musk, Singularity Institute), there should be people forward thinking enough to consider the consequences to the AIs.


This guy is so smart but at the same time such an idiot. SYNTAX and SEMANTICS are essentially the SAME THING. It's only a context-dependent difference, and this difference is quantitative, even if we still don't have a good enough definition of what those quantitative variables underlying them are. You must have a really "fractured" mind not to instantly "get it". And "INTRINSIC" is simply a void concept: nothing is intrinsic, everything (the universe and all) is obviously observer dependent; it just may be that the observer can be a "huge entity" that some people choose to personalize and call God.

It's amazing to me that people with such a pathological disconnect between mind and intuition can get so far in life. He's incredibly smart, has a great intuition, but when exposed to some problems he simply can't CONNECT his REASON with his INTUITION. This is a MENTAL ILLNESS and we should invest in developing ways to treat it, seriously!

Of course "the room + person + books + rule books + scratch paper" can be self-conscious. You can ask the room questions about "itself" and it will answer, proving that it has a model of itself, even if that model is not specifically encoded anywhere. It's just like mathematics: if you have a procedural definition for the set of all natural numbers (i.e. a definition that can be executed to generate the first and the next natural number), you "have" the entire set of natural numbers, even if you don't have them all written down on a piece of paper. In the same way, if you have the processes for consciousness, you have consciousness, even if you can't pinpoint "where" exactly in space and time it is. Consciousness is closer to a concept like "prime numbers" than to a physical thing like "a rock": you don't need a space and time for the concept of prime numbers to exist in, it just is.
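
To make that "procedural definition" point concrete, here is a tiny Python sketch; it's my own illustration, not anything from Searle's setup.

    # A procedural definition of the natural numbers: executing it can produce
    # any particular natural number on demand, even though the infinite set is
    # never written down anywhere.
    def naturals():
        n = 0
        while True:
            yield n
            n += 1

    gen = naturals()
    print([next(gen) for _ in range(5)])  # [0, 1, 2, 3, 4]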

His way of "depersonalizing" conscious "machines" is akin to Hitler's way of depersonalizing Jews, and this "mental disease" will probably lead to similar genocides, even if the victims will not be "human" ...at least in the first phase, because you'll obviously get a HUGE retaliation in reply to any such stupidity, and my bet is that such a retaliation will be what will end the human race.

Now, of course the Chinese room discussion is stupid: you can't have "human-like consciousness" with one Chinese room. You'd need a network of Chinese rooms that talk to each other and also operate under constraints that make their survival dependent on their ability to model themselves and their neighbors, in order to generate "human-like consciousness".


Well, it's Searle after all. It's always funny to re-read Derrida's attack on his problematic line of thought[0].

0. https://en.wikipedia.org/wiki/Limited_Inc



