
I think this is called the "systems reply".

There's a whole series of replies and counter-replies, with different ideas about what does or does not count as a good response.

One idea describes a machine where each state of the program is pre-computed, and the computer steps through those states one by one. In each state, if the next pre-computed state would be wrong (i.e. would not follow from the current state under the program) and a switch is flipped on, the machine computes the correct next state instead; if the switch is off, it just continues along the pre-computed states. If all the pre-computed states happen to be correct, the same thing happens whether the switch is on or off, and the correction mechanism never engages. If all the pre-computed states are nonsense and the switch is on, the machine runs the program correctly despite the pre-computed states being nonsense.
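Here's a minimal sketch of that machine, just to make the mechanism concrete; the step function and run_with_switch helper are illustrative names of my own, not anything from the original argument:

    # Toy model of the thought experiment: a "replay" machine with a
    # correction switch. The step function stands in for the program.
    def step(state):
        # The actual program: here, a trivial counter.
        return state + 1

    def run_with_switch(initial, precomputed, switch_on):
        state = initial
        trace = [state]
        for nxt in precomputed:
            if switch_on and nxt != step(state):
                nxt = step(state)   # correct a wrong pre-computed state
            state = nxt
            trace.append(state)
        return trace

    # If every pre-computed state is already correct, the switch never
    # engages and both settings give the same trace:
    #   run_with_switch(0, [1, 2, 3], True) == run_with_switch(0, [1, 2, 3], False)
    # If the pre-computed states are nonsense, only the "on" setting
    # recovers the program's actual run:
    #   run_with_switch(0, [9, 9, 9], True)  -> [0, 1, 2, 3]
    #   run_with_switch(0, [9, 9, 9], False) -> [0, 9, 9, 9]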

So, suppose that if the pre-computed states are all wrong and the switch is on, that counts as conscious. Then, if the pre-computed states are all correct and the switch is on, would that still be conscious? What if almost all the pre-computed states were wrong, but a few were right? There doesn't seem to be an obvious cutoff point between "all the pre-computed steps are wrong" and "all the pre-computed steps are right" at which consciousness would switch on or off. So one might conclude that the machine where all the pre-computed steps are right and the switch is on is just as conscious as the one where the switch is on but all the pre-computed states are wrong.

But then what of the one where all the pre-computed states are right, and the switch is off?

The switch does not interact with the rest of the machinery unless a pre-computed next step would be wrong, so how could it be that the machine with all correct pre-computations is conscious when the switch is on, but not when it is off?

But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

If one grants consciousness to that, why not grant it to e.g. fictional characters that "communicate with" the reader?

One might appeal to something like: it depends on how the machine interacts with the world, and it doesn't make sense for it to have pre-computed steps if it is interacting with the world in new ways; that might be a way out. Or one could argue that it really does matter which way the switch is flipped, and that flipping it back and forth switches the machine between being actually conscious and being, basically, a p-zombie. And speaking of which, one could ask, "well, what if the same thing is done with brain states being pre-computed?", etc. etc.

I think the Chinese Room problem, while not conclusive, is a useful introduction to these issues?




> But the one with all the pre-computations correct, and the switch off, is not particularly different from just reading the list of states in a book.

The states were (probably) produced by computing a conscious mind and recording the result.

Follow the improbability. The behavior has to come from somewhere. That somewhere is probably conscious.

Similarly, authors are conscious, so they know how conscious characters behave.


I don't think it actually brings up any relevant issues. For instance, you mention a p-zombie, but that's another idea with glaringly obvious problems. Do bacteria have consciousness? Or did consciousness arise later, with the first conscious creature surrounded by a community of p-zombies, including their parents, siblings, partners, etc.? Both possibilities seem pretty detached from reality.

Pre-computation is another one that seems to obfuscate the actual issue. No, I don't think anyone would say a computer simply reciting a pre-computed conversation had conscious thought going into it; but the same is true for a human being reciting a conversation they memorized (which wouldn't be that different from reading the conversation in a book). But that's a bit of a strawman, because no one is arguing that lookup-table-type programs are conscious (you don't see anyone arguing that Siri is conscious). And the lookup table/pre-computation for even a simple conversation would be impossibly large (run some numbers: it's most likely larger than the number of atoms in the universe for even tiny conversations).
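A rough back-of-envelope for that claim (the vocabulary size and conversation length are assumptions picked for illustration, not measurements):

    # Counting possible conversations vs. atoms in the observable universe.
    vocab_size = 10_000                 # assume a 10,000-word vocabulary
    conversation_length = 50            # a "tiny" 50-word conversation
    possible_conversations = vocab_size ** conversation_length    # 10**200
    atoms_in_observable_universe = 10 ** 80                       # rough common estimate
    print(possible_conversations > atoms_in_observable_universe)  # True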

So I don't see these arguments as bringing up anything useful. They seem more like colorful attempts to purposefully confuse the issue.



