I believe the parent's point is precisely that it is indeed unconvincing, and so we should in turn be mindful that our own idea that the brain is a computer may, given further technological advances, come to strike us as just as unconvincing as Leibniz's argument. Metaphors of the brain follow the state of the art of technology. For the ancient Greeks, the model of the human composition (they didn't yet talk of brains and consciousness) was the city-state; for early modern natural philosophers, the mill and the like; during the industrial revolution, machines; and now, with us, computers and computation. Our current analogy is unlikely to be any more or less true than previous ones. That is to say, such analogies may be illuminating in some way, and not entirely useless in our thinking about brains and consciousness, but they are not and cannot be true, because the conjectures cannot in the end even be shown to be wrong. Consciousness and the brain may not in the end be inexplicable, but I very much doubt that we will find that they are something other than the brain and consciousness -- something like, for instance, a mill or a computer. It would be the most surprising thing of all were we to find out that evolution produced a kind of technology before in fact that technology as a human tool was ever produced -- but that is a different argument and I'm not even sure what I mean by it.
Your reasoning is wrong. There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so: a computer can, in principle, simulate mills as well as city-states and all the other objects you listed.
"It would be the most surprising thing of all were we to find out that evolution produced a kind of technology before in fact that technology as a human tool was ever produced"
Evolution has produced technologies long before humans invented them. For example, bats and whales use sonar, humans use lenses, plants use photosynthesis. In fact, one could argue that evolution does little else than invent technologies.
"There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so."
That depends entirely on what you think a brain is, and it just so happens that a prevalent view these days is that a brain is a computer, and so of course it follows very easily that a computer can simulate a brain.
But once that assumption is dropped, it doesn't follow so easily at all. People are ignoring a good deal of scientific method when they propose that, were a computer ever to simulate a brain, the veracity of that simulation could be verified. The only data that could ever be provided would be a very limited set, namely the external behaviors. What would essentially be needed in the simulation, that is, the production of consciousness, could never be empirically tested. Consciousness as an object isn't even a scientific entity; it's something that can only be corroborated internally to the agent that has it. It isn't analyzable into parts and it can't be abstracted out of its environment, and so it cannot be turned into a scientific object. The brain, on the other hand, we might suppose is an object fitted to empirical methods, but what people really want to get at is consciousness, and the brain without that is probably not the problem most people have in mind. So one could simulate the brain, but only on a very restricted view of what a brain is.
On second thought, though, I do agree with you that evolution produces technologies, and maybe even that that's all evolution has ever done. I'm afraid I'll have to think about that point more if I am to respond to it specifically, though I think it's tangential to the rest of the argument I am trying to make above (which is admittedly sketchy).
I'm having a hard time with your argument. Some things jump out:
The only data that could ever be provided would be a very limited set, namely the external behaviors. - This isn't true at all. We slice open brains to find out how they work. We stick things in them to measure and stimulate them. One fellow even had a camera installed to produce phosphenes via electrodes on his visual cortex. Science has gone well beyond external behaviours. (We are talking about the mechanics of brains here, so B. F. Skinner doesn't get an invitation. Once the brain simulation is running and babbling, then he can psychoanalyze it all he likes.)
Consciousness as an object isn't even a scientific entity, - That's good news, not bad. If it's not a scientific entity, we don't have to worry about it or hold it up to scientific scrutiny. Similarly we can stave off worrying about whether it has a soul, and which God it belongs to should it in fact have a soul--whatever that means.
but what people really want to get at is consciousness, and the brain without that is probably not the problem most people have in mind. So one could simulate the brain but only on a very restricted view of what a brain is. - A brain is whatever science says it is. The word you want to (and are free to) play with is consciousness. For maximum convenience, you may define it as the quality that a brain produced by human sexual reproduction has and that an artificial brain made by humans has not. Whatever it is, it makes us very special indeed!
There was more sarcasm there than I would have liked, but I couldn't resist. So I apologize, but I hope you see my point: arguing that a simulation is impossible because of some mysterious scientific non-entity is simply not a scientifically valid argument.
(Bio-/electro-)mechanically speaking there is no known inherent barrier to simulation. Roger Penrose likes the idea that there is some sort of quantum entanglement responsible for human consciousness but it's not supported and he's having a hard time even demonstrating that his specific claims are possible. So for now, I am happy to assume that (robust, human-like) brain simulation is possible in principle. I don't know, of course, and I probably will not live to see it. Then again, computing is a runaway train and there are some compelling players in the simulation field at this time--so who knows?
Consciousness as an object isn't even a scientific entity, - That's good news, not bad. If it's not a scientific entity, we don't have to worry about it or hold it up to scientific scrutiny. Similarly we can stave off worrying about whether it has a soul, and which God it belongs to should it in fact have a soul--whatever that means.
Ah, but this is the crux of the Chinese Room argument. You're saying that since we can't measure it or detect it, it doesn't exist, and therefore isn't important. When you have a brain simulation going and having a conversation with it, that's enough for you. It says it's conscious, and we can't measure consciousness, therefore it is conscious.
I disagree. Strongly. That brain simulation is a zombie. The man in the room does not speak Chinese.
But we'll see. I actually think we will: Kurzweil might be somewhat of a crackpot with his singularity, but his extrapolations of increases in processing power say that we'll have the power to simulate a human brain in a pocket-sized thing in 2030-something, and the power to simulate all of humanity in 2040-something. I'll be around 60 years old then, and hopefully still sticking around. :-)
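For a rough sense of how that kind of extrapolation works, here is a back-of-the-envelope sketch. Every number in it is an illustrative assumption (the brain's throughput, the starting device speed, and the doubling period are all contested), not Kurzweil's actual figures:

```python
import math

def crossover_year(target_ops=1e16, device_ops=1e9,
                   start_year=2010, doubling_years=1.0):
    """Year when a device, doubling in speed every `doubling_years`,
    reaches `target_ops` operations per second.

    Defaults are assumed for illustration: ~1e16 ops/s for a brain,
    ~1e9 ops/s for a 2010 pocket device, speed doubling yearly.
    """
    doublings = math.log2(target_ops / device_ops)
    return start_year + doublings * doubling_years

print(round(crossover_year()))  # ~2033 under these assumed numbers
```

Nudge any of the assumptions (a slower doubling period, a higher estimate of the brain's throughput) and the date slides by a decade or more, which is exactly why such forecasts are easy to mock and hard to refute.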
No, you're putting words in my mouth. I'm arguing the scientific facts of the matter, and I believe I did a good job of keeping my personal beliefs and opinions separate.
It's especially important to note that my treatment of the term "consciousness" was predicated on the given assumption that it is not a scientific entity. At the moment the term has a handful of scientific meanings, but none are what people are after. My argument is that you can't argue meaningfully about it until you can define it meaningfully.
P.S. What do you mean by "zombie"? Traditionally it means something different from what I think you mean by it.
My argument is that you can't argue meaningfully about it until you can define it meaningfully.
Agree. The problem is that I think consciousness is inherently subjective. We experience qualia (http://en.wikipedia.org/wiki/Qualia), and you just can't objectively describe those. For example, could you describe the colour blue to a blind person? Every time you look at a blue object, you know it is blue, you experience that it is blue, but you can't describe it, and you can't compare it with, for example, my experience of blue objects. I have no idea if blue objects "look" the same in your mind as they do in mine. We can both agree that a certain object is blue, and we both associate the effect blue light has on our eyes with blue objects, but that's it.
So imagine now that we make an AI with sensory inputs, perhaps we make a robot, perhaps we simulate a human brain, and we teach it that blue objects are blue. If we then show it objects, it should be able to correctly tell us if they are blue or not. But does that AI experience blue? We don't know. We can't ever know.
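Just to underline how cheap the behavioral half of that test is, here is a toy sketch (the threshold rule is an arbitrary assumption, nothing like a real vision system): a few lines that reliably report "blue" without there being anyone home to experience it.

```python
def reports_blue(rgb):
    """Toy 'blue detector': true when the blue channel dominates.

    It answers the question correctly, but there is (presumably)
    no experience of blueness behind the answer.
    """
    r, g, b = rgb
    return b > r and b > g

print(reports_blue((30, 40, 200)))   # True  -- it calls this "blue"
print(reports_blue((200, 40, 30)))   # False -- and this "not blue"
```

The point is that correct labeling and experiencing are separable in principle; the open question is whether they remain separable in a full brain simulation.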
Imagine then that we "upload" your mind into a machine and put it in a humanoid robot. That thing would then walk like you, talk like you, remember like you, laugh like you, joke like you, cry like you, etc.
But would it be you? Would it be alive? Would it have consciousness? In my opinion - no. It's moving, but it's dead, so it's a zombie, a philosophical zombie:
http://en.wikipedia.org/wiki/Philosophical_zombie
I was curious about the mention of the zombie before because this line didn't seem right: That brain simulation is a zombie. The man in the room does not speak Chinese.
I'm fairly certain that qualia--whatever it is--is not a necessary quality for the room to be capable of comprehending Chinese. It sounds like you would agree. The problem with qualia though is that there's no way for you or I to know whether the other participates in the phenomenon.
I'd like to mention that Dan Dennett has done a wonderful job of arguing that qualia is a bad term. I disagreed at first, but by the end of reading his argument I could only agree that the term is hopelessly ruined. You'll see his name in the Wikipedia article you linked. If you read his Quining Qualia[1] (I believe that is the one, though it may have been a follow up that I found so convincing) you might see the problem with arguments about whether something has or hasn't qualia.
As far as I'm concerned, if my brain were adequately simulated in a computer it would be like a plaster mould of a plaster mould. They're not the same thing, but functionally they are. If qualia exists I'm happy to assume that it's the product of a physical process which can be simulated. If it does not then I am quite certain the thing people mistake for it can be simulated--and perhaps already has been.
My attitude on the whole qualia matter is: how important can a thing be if no one can define it well enough to measure it? More importantly, I don't believe in magic. If something special is going on there I believe it's a product of natural processes, not a cause of them. "If it's there it can be reproduced."
You're saying that since we can't measure it or detect it, it doesn't exist, and therefore isn't important
If you can't measure it or detect it, how would you know it's there and it's the source of the phenomenon? If I tell you that it's not consciousness but magic pixie dust, invisible and non-detectable magic pixie dust, would you start searching for Neverland?
I am conscious. I experience being conscious every waking hour of my life. I don't know why, I can't objectively describe the experience of it, but I know it is there.
I can't speak for you, but given that we're of the same species, I'm gonna assume that you are conscious like me.
Conscious machines? No, that'll require a lot more convincing than a song and a dance.
I don't understand. Is consciousness a feeling or an external manifestation? Is it something purely internal, or something also visible on the outside? If it's only internal, you cannot, by definition, say whether a machine is conscious, but neither can you say so of any other human. If tomorrow we were to discover an alien civilization and start a meaningful communication, would you question their consciousness?
Personally, I don't think that all the discussion about consciousness is meaningful. Consciousness could be the human feeling of abstract reasoning, like cold is the human feeling of registering a lower temperature. A machine could reproduce abstract reasoning, and asking about its consciousness would be like asking if a thermostat feels cold.
(Brilliant, this is the second time I argue against strong AI here, and only received downvotes instead of comments. That's not how this place is supposed to work.)
>>There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so.
>That depends entirely on what you think a brain is
No. It depends on what you think a computer is. You see, the reason why many scientists today believe that a brain is a computer, or can be simulated by a computer, is that computers are universal machines. In contrast to a mill, which can only mill grain, a computer can simulate earthquakes, the weather, cars, other computers, and, you guessed it, mills.
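A toy illustration of that universality (the mill model here is made up, of course -- the point is only that a general-purpose machine can host a model of a special-purpose one, while the reverse is not true):

```python
def run_mill(grain_kg, rate_kg_per_hour=10, hours=3):
    """Simulate a (crudely modeled) mill grinding grain over time.

    A mill can only mill grain; a computer can run this model of a
    mill, plus any other model you care to write.
    """
    flour = 0.0
    for _ in range(hours):
        ground = min(grain_kg, rate_kg_per_hour)  # can't grind more than remains
        grain_kg -= ground
        flour += ground
    return grain_kg, flour

print(run_mill(25))  # (0, 25.0): all 25 kg ground within 3 hours
```

Whether the brain is *also* within the reach of that universality is exactly what's in dispute, but the asymmetry with the windmill is real.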
>People are ignoring a good deal of scientific method when they propose that, were a computer ever to simulate a brain, the veracity of that simulation could be verified.
Thinking there is a difference between believing that a computer can simulate a brain and believing that a windmill can do so doesn't depend at all on what you think a brain is, or on what a brain actually is.
The case that was made by the parent is that since a computer can simulate a windmill, there is a distinct difference in the relative likelihood of the beliefs, not that either was actually true. Depending on the idea that the brain was a computer to prove that it could be simulated by a computer would be pretty much assuming the conclusion.
Consciousness as an object isn't even a scientific entity; it's something that can only be corroborated internally to the agent that has it.
If this were really true I would have no basis for thinking that other human beings were conscious, and thus I would have no reason to think that computers couldn't perfectly simulate other human beings.
>There is a crucial difference between believing that a computer can simulate a brain and believing that a windmill can do so.
Leibniz didn't say a mill can simulate the human mind. He's talking about any contraption that can simulate the human mind (which is also capable of simulating many things, given proper arithmetic and [infinite?] time), one of which would be, for the sake of illustration, the size of a windmill. You have to realize that dissection of the human body was still extremely taboo in the 17th century (not to mention the technological impediments). You may also be missing the point.
It isn't a matter of metaphors but of abstractions. The brain as signal processor is a more accurate abstraction than the brain as city-state, in that it makes better predictions.