> I'm less interested in the question of "can computers be conscious?", and more interested in "what is consciousness?". I believe taking the direction of the first question leads to a dead end, because even /if/ you built a conscious computer, you still wouldn't have a way to prove it.

I agree with you, but I don't think the first question is a dead end. Suppose you devise and install a computer chip that can replace a tiny portion of your brain by interfacing with the neurons around it, and over time it learns to perform the same function as the piece of brain tissue it replaced. If you think this is physically possible (even if not feasible with current technology), then by extension, over a long enough period of time you could replace every bit of biological nervous tissue with synthetic components. Assuming the synthetic components wholly served the same functions as the biological ones they replaced, you would have a computer that was conscious, and you would have an inside perspective on it. :)

On the other hand, in theory it might be possible to do that complete nervous-system replacement without actually understanding what consciousness is. So I agree that "what is consciousness?" is still a more interesting question. I found The Ego Tunnel[0] to be a good jumping-off point for defining consciousness.

At this point I think the problem of consciousness is one of scope: any description of consciousness that fits within the mental capacity allocated to our intuition will lack explanations for aspects of our conscious experience that we can easily identify with a few moments of introspection, while any more comprehensive description will exceed our ability to grasp it intuitively as a whole. I don't think this is a fundamental limitation; it's just a reflection of how much base knowledge is needed to build up adequate intuitions, given the complexity of the system.

[0]: https://www.amazon.com/dp/B0097DHVGW/ref=dp-kindle-redirect?...

I've never found that argument convincing.

It seems to suppose that consciousness is an either/or phenomenon: either you're conscious or you're not. But what if consciousness is more of a continuum, and the intensity of your conscious experiences can increase or decrease? Further, let's suppose that the source of consciousness is somehow distributed throughout the brain. When the first small set of circuits is replaced, your conscious experiences become ever so slightly less intense, by such a small amount that you don't even notice. This continues every time another small brain region is replaced, until at the end you no longer have any conscious experiences at all.


Yeah, but that's still considering the epiphenomenon hypothesis and not other possibilities. What if the only thing you can explain with the epiphenomenon idea is the Philosophical Zombie (mind), while you still haven't explained anything about consciousness?

Ref: https://www.youtube.com/watch?v=NK1Yo6VbRoo


My intent was actually not to take any firm position on the nature of consciousness, but rather to present one possible model, which I think many would find plausible, as a counterexample to the progressive-replacement argument (i.e., the argument that a machine can be conscious because you can gradually turn a human brain into a machine).
