The real answer is that we don't yet know enough about how the brain works to work effectively on this problem. We don't know what questions to ask or how to break down the problem into smaller problems.
We may get there. Read something about how vision works from a century ago, when nobody had a clue. The first real progress came from "What the Frog's Eye Tells the Frog's Brain" (1959).[1] That was the beginning of understanding visual perception, and the very early days of neural network technology. Now we have lots of systems doing visual perception moderately well. There's been real progress.
(I went through Stanford CS at the peak of the 1980s expert system boom. Back then, people there were way too much into asking questions like this. "Does a rock have intentions?" was an exam question. The "AI winter" followed. AI finally got unstuck 20 years later when the machine learning people and their "shut up and calculate" approach started working.)
I wholeheartedly agree on your first point. I was a philosophy major, and it was frustrating how much of the philosophy-of-mind field was attempting to "run" off with its ideas before it could "stand".
I realize this could be true of a lot of other schools of thought, but it seemed especially prominent when arguments about what makes a person seemed to rely on lower-level assumptions about how the brain works.
The way I see it, that's basically the definition of philosophy. When some sub-discipline of philosophy becomes clear enough to define its questions, it gets some other name (cf. linguistics, economics, "natural philosophy").
What's left as "philosophy" is always the stuff where we don't even really know what questions to ask. So we kick them around for a few centuries, or millennia, in the hopes that something will eventually take on a shape that can be pursued in a better-defined fashion.
> So we kick them around for a few centuries, or millennia, in the hopes that something will eventually take on a shape that can be pursued in a better-defined fashion.
This process has a name: The Great Conversation [0]. A bit presumptuous, to my mind, to think Plato would have any clue as to what we're saying, but it is a good name for the thing.
An interesting observation, but there's also an analytical side to philosophy: bringing in information from other fields, figuring out the implications, possibly drawing conclusions, and pointing out new directions to try.
I don't know that learning more about the brain's operation will satisfy people who resist the notion that their consciousness is a property of a physical system. Since it is an emergent property of a complex system, even if we understood the functioning of each independent piece, we would still be left with the question of why all of those pieces in concert have the property of consciousness. While we gather more and more data on the brain, we barely have even a beginning notion of how emergent properties actually work. If we had to take everything we know about atoms and their interactions and explain where phase transitions and states of matter come from, we would be stymied. We know from experimentation that very large groups of atoms do exhibit distinct phase transitions. We know every way individual atoms can interact. But bridging that gap from components to predicting large-scale complex nonlinear dynamics... we have more proofs of our inability to do such things (with conventional tools) than hints of how we might tackle it.
We are very much in the infancy of understanding the brain. New tools like optogenetics, CRISPR-Cas9, and CLARITY should assist greatly in understanding the brain and the body in general. Still, at least we are starting to learn how much we don't know about the brain.
To me, "consciousness" feels mostly like early chemists talking about phlogiston. Or like early biologists discussing élan vital. Or even current physics' "dark matter". These are words that don't really point to single externalities with sharp borders; instead, they are terms we apply to disparate but seemingly (somehow) related phenomena.
> The real answer is that we don't yet know enough about how the brain works...
I don't think the "problem of consciousness" is one of missing empirical evidence so much as simply a fuzzy (ill-posed?) question. Though enough evidence might be sufficient to forcefully dissolve an irreal question. On the flipside, a lot of the current ML research does a good job at addressing consciousness by breaking it into concrete, communicable actions.
Instead of asking, "What is consciousness?" try asking, "What actions could XYZ take that would convince me it's conscious?" A related question is "What actions (in minute detail) are involved as I believe/think/feel I'm conscious?" Those two questions are similar, but tend to evoke quite different sets of "external" experiences and actions.
It might turn out that there is some mathematically invariant property of things that are capable of acting as convincing conscious agents, à la the physically precise definition of heat turning out to behave kind of like phlogiston. In such a case, we might in fact find a "thing" that deserves the label "consciousness"---such as Hofstadter's idea of a strange loop---but for now I think the term is used pretty much in a way that's synonymous with "magic".
>A related question is "What actions (in minute detail) are involved as I believe/think/feel I'm conscious?"
I think we can define a generalized version of what we call a "conscious thought" as a thought that can be "consciously reflected" upon. By "conscious reflection" I mean using some representation of the thought as an input for another thought, in such a way that the new thought, including its usage of the original thought, can itself be used as an input for yet another thought in the same way. The representation doesn't have to correspond particularly closely to the execution. (We remember our thinking as natural language, but maybe the thought process is just converted into natural language after the fact for the purpose of saving and reflecting.)
Computer programs can also be conscious according to this generalized definition if the process of executing them is saved in some way and can, depending on the circumstances, be used as an input to further processing that is saved in the same way.
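The definition above can be illustrated with a toy sketch (my own construction, not from the comment; the class and method names are hypothetical): a process keeps a saved trace of its "thoughts", and a later thought can take the representation of an earlier thought as input, with that use recorded in the very same trace, so it too can be reflected upon.

```python
# Toy illustration of "conscious reflection" as defined above:
# every step is saved to a trace, and reflection consumes a saved
# representation of an earlier step -- and is itself saved the same way.

class ReflectiveProcess:
    def __init__(self):
        self.trace = []  # saved representations of past "thoughts"

    def think(self, content, inputs=()):
        """Run one step; `inputs` are indices of earlier trace entries."""
        entry = {
            "id": len(self.trace),
            "content": content,
            "used": list(inputs),  # which earlier thoughts fed this one
        }
        self.trace.append(entry)
        return entry["id"]

    def reflect(self, thought_id):
        """A thought whose input is the representation of an earlier
        thought, including how that earlier thought used its inputs."""
        earlier = self.trace[thought_id]
        summary = f"thought {earlier['id']} used {earlier['used']}"
        return self.think(summary, inputs=[thought_id])


p = ReflectiveProcess()
a = p.think("the sky is blue")
b = p.reflect(a)   # reflects on thought a
c = p.reflect(b)   # reflects on the reflection, in exactly the same way
print(p.trace[c]["content"])  # → thought 1 used [0]
```

Note that the trace entry is only a summary, matching the point that the representation need not correspond closely to the actual execution.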
That different "ways" of using a thought as an input for another thought are possible means that there can be different consciousnesses in our brain. One of them (if there are multiple) is what we call "my consciousness": the one that is responsible for our output, at least at a higher level, or that at least significantly informs it. The reason we don't know whether "my consciousness" is responsible for the output, despite the appearance, is that it may be mostly a "post-hoc rationalization engine" for decisions made on another level, possibly for other reasons. But it does at least inform the output, since past thoughts of "my consciousness" inform our later output. For example, if we are asked about our past thoughts, we talk about the ones that are remembered in "my consciousness". This is what puts it in such a special position compared to other hypothetical consciousnesses, which don't inform our output in this way and are therefore invisible to other people, and obviously also to "my consciousness".
Think about it like this: what would have to change to call a conscious thought unconscious, or an unconscious thought conscious? The ability of "my consciousness" to reflect on it.
> On the flipside, a lot of the current ML research does a good job at addressing consciousness by breaking it into concrete, communicable actions.
I wasn't aware ML had anything to do with consciousness.
> In such a case, we might in fact find a "thing" that deserves the label "consciousness"---such as Hofstadter's idea of a strange loop---but for now I think the term is used pretty much used in a way that's synonymous with "magic".
So you consider your experiences of color, taste, pleasure, etc. to be akin to "magic"? Because those sensations are what make up our conscious experiences.
> "Does a rock have intentions?" was an exam question.
What does a good answer to this question look like in this context? Genuinely curious what they were looking for.
Imo the real question is whether humans have intentions. It seems like if you look at it rationally, we're just collections of chemicals reacting with each other. Set the initial conditions and then the whole thing is deterministic. It's pretty uncomfortable to think this though, so I think it's best if we avoid the subject.
I would encourage you to think on this some more until the discomfort diminishes. Just because things are deterministic (at a level of complexity that is difficult or even impossible to imagine, let alone predict with our current understanding) doesn't mean your experience is any less real or important for you.
Imagine you are on a rollercoaster: you know your course is pre-determined, but you can't see too far ahead, and it sure is a fun and surprising ride along the way.
I think that's unlikely to happen. Even if it all is deterministic, the number of variables at play will make it very difficult to determine the outcome beforehand (if not outright impossible, e.g. as with Conway's Game of Life).
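The Game of Life point can be made concrete with a minimal sketch (my own, added for illustration). The update rule below is the standard one and is fully deterministic, yet Life is Turing-complete, so questions like "will this pattern ever die out?" are undecidable in general: determinism does not imply practical predictability.

```python
# One generation of Conway's Game of Life over a sparse set of live cells.
# Deterministic: the same seed always produces the same successor.
from collections import Counter

def step(live):
    """`live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# Same seed, same rule, same outcome -- every single run.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker  # a period-2 oscillator
```

Running the rule forward is trivial; predicting long-term fate without simulating it is what's hard (and, in general, impossible).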
I think the question one has to answer first is what "intention" is in itself; at least some informal definition to work with is necessary. How you define it will shape the answer to the rock question.
> Set the initial conditions and then the whole thing is deterministic
If quantum physics theories are correct, then there's always some amount of pure randomness in the game, making it impossible to create a perfectly deterministic and repeatable system of any significant complexity.
Randomness is not an objection to determinism. The problem is not that decisions cannot be predicted; the problem is that some people think decisions don't exist. For that view, it is irrelevant whether they happen by chance or by rule.
The real answer is that decisions exist in a different frame of reference. Some philosophers are stuck in a model in which decisions were taken by the soul, an immaterial entity. So if decisions are generated by physical processes, they're an illusion.
But that's as idiotic as it can be. Our brain is a material system, and of course decisions are generated by physical processes inside it. The interesting question is how much of us is malleable, and how we can use our decisions to change ourselves and our own decision-making process.
Where does the randomness come from? If we could rewind the universe back to the same starting conditions, would it be any different the second time through and if so, where did that difference come from?
If you suppose an infinite multiverse where every possible thing happens in parallel, a typical observer will find themself in a universe with events that seem random. There are a lot more random-looking numbers than orderly-looking numbers.
Videogames are deterministic too, but that doesn't mean all of the things that a player's character does are predetermined inside the videogame. I like to think that there is a similar analogy for our universe, where a soul is controlling our body the same way we might control a game character.
Well... From a game designer's perspective, a player has very little free will... Yes, they can make choices, but in the grand scheme all the choices are predetermined.
Good level design is basically when a player traverses a predetermined path without feeling that they are on rails...
An example of good level design: the Half-Life games. The player traverses the world as if they could go anywhere, but they always pick the right way.
Another example is Dark Souls, which keeps you in loops so you never hit a "dead end".
Thanks, I like to see the other side when I believe something.
I once read Dennett's critiques, and I remember I didn't find them very convincing. But right now I don't remember his arguments, and I won't analyze them just now in order to comment on this topic. If you could mention Dennett's main argument against what Harris says about free will, I'd be thankful.
Marvin Minsky said we are 'meat machines'. What does Dennett say against that (considering machines as deterministic)?
Dennett's overall claim is that incompatibilists are making a category error when they say that determinism at the physical level precludes free will at the level of subjective experience. He argues that free will, and really any kind of freedom at all, emerges from the layers upon layers upon layers that make up the existence of what we call a living creature. (And that there are layers, or degrees, of freedom.)
I'm afraid I don't remember the exact contents of the Harris/Dennett debate; I should probably reread it myself. :)
> any kind of freedom at all, emerges from the layers upon layers upon layers that make up the existence of what we call a living creature.
I don't see why layers upon layers would imply freedom. A complex Java web framework may have layers upon layers of abstractions, and that may make its operation hard to understand fully. But that doesn't mean it isn't deterministic.
Spinoza wrote: "Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are determined."
This was a wonderful book. As I recall, it catalogs a lot of convincing evidence that things come into conscious awareness basically upon a certain level of global activation in the brain -- when enough parts of the brain are "talking" about it, typically when different parts of the brain are having conflicting activation patterns. It likens this to the "workspace model" of awareness. And it's clear why the brain would need to resolve such a conflict, why it would need something like focus or attention to do so, and how this would relate to all sorts of information processing needs of organisms that behave in ways that keep them alive.
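The workspace model described above can be caricatured in a few lines of code (a toy sketch of my own; the module names, threshold, and quorum are all made-up parameters, not anything from the book): many modules process in parallel, and an item is globally "broadcast" only when enough modules are activated by it past a threshold.

```python
# Toy "global workspace": an item becomes reportable only when a quorum
# of brain modules activate above threshold for it simultaneously.

def workspace_contents(activations, threshold=0.5, quorum=3):
    """activations: {item: {module_name: activation_level}}.
    Returns the set of items that win global broadcast."""
    broadcast = set()
    for item, by_module in activations.items():
        supporters = sum(1 for a in by_module.values() if a > threshold)
        if supporters >= quorum:  # enough parts of the "brain" agree
            broadcast.add(item)
    return broadcast

acts = {
    "red square": {"vision": 0.9, "attention": 0.8, "memory": 0.6},
    "faint tone": {"audition": 0.7, "attention": 0.1, "memory": 0.2},
}
print(workspace_contents(acts))  # → {'red square'}
```

The faint tone is processed (the audition module fires) but never enters the workspace, which mirrors the book's distinction between processing and conscious access.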
But there's just nothing that I recall in that book that suggests or even hints at any reason for this to result in a subjective experience. And I don't believe it rules out an electron having a nano-unit of subjective experience, for example.
The book suggests that something is a subjective experience if an organism can report it as one, and goes to great detail about what's going on in the brain when an organism is able to report a subjective experience, and makes the very reasonable suggestion that something is probably a subjective experience in a brained organism that can't report it as one so long as it exhibits all the same patterns in that organism's brain (infants, other animals).
But I don't think it does any work at all to show that there is no subjective experience in plants, or rocks, or even nano-units of subjective experience in individual electrons. It can't do this, because, as far as I recall, there is simply no progress on the problem of why this global activation in the brain would produce a subjective experience.
> The book suggests that something is a subjective experience if an organism can report it as one
Also, just because someone reports having a subjective experience, doesn't mean that they actually have one. I do have subjective experience, but I have no way of telling whether someone else has one, even if they claim so.
Eh, this still doesn't go beyond the fundamental "I think therefore I am." I can prove to myself that I have consciousness, but for everyone else, ¯\_(ツ)_/¯
You can't prove even that.
If you examine it closely, the argument only proves that "it" thinks.
"It" is not necessarily an "I", for what restricts the thinking to an I (i.e. a part of reality)? It could be the whole of reality that does the thinking, as far as the argument goes.
It proves itself if you define "I" as the experiencer of the thinking, but "I" (like many other complex words) is much more overloaded.
Unfortunately, all our word definitions seem shaky if we want to describe something that is the base requirement of those very definitions.
Maybe the best we can do is to deconstruct the above using more simple or base terms, but the meaning of those terms maybe also depends on the content of experience not the mere fact of experience:
I experience thought -> experience of thoughts exists -> experience exists -> something exists
So upon experiencing thought you may conclude that "something exists", or "there IS something"...
Of course you can. You know it to be true that you posses consciousness because you experience it directly. What is impossible (empirically) is knowing that about anyone or anything else.
We know there is thinking. There is no reason to believe that subject-object duality has any basis in reality, or that any individual, including our "self", has a sufficient delineation to consider it an independent entity.
More fundamental than thinking is experience itself.
Regardless of whether there is a "you", or if it's some amalgamation of state that is loosely bound together and "fooled" into thinking it is a unity, something is there experiencing. At least in my frame there is.
This isn't something you can prove, because it comes before any sort of structure capable of doing the proving. It's just something that's a given, and you start from there.
Descartes' "Meditations on First Philosophy" is the originator of this idea. While it is dated, the form of its principal argument hasn't changed.
With regards to conscious unity, there is at least a weak form of it in the sense that you can't experience others' experiences. While it is possible that your own experience may not be fully unified, it is (very likely) disjoint from others' experiences.
You can experience another person's experience when you see them smile or cry; we call it empathy in modern parlance. The Hogan twins, joined at the head, have an even more direct connection to each other's experiences.
I've never been very moved by ideas that I can't know or share other people's feelings, or other fanciful ideas like their blue being my green. It's more reasonable to assume they are like me because we share similar hardware (DNA) and software (culture). Others' hands look like mine, more or less. Others' legs are like mine, more or less. And so others' perception of green is like mine, more or less.
Telling me that my experience is prime, or fundamental doesn't tell me much. Similarly, saying I think therefore I am doesn't tell me much. What then am I and what is existence? I think therefore I am only as much as I think I am. And sometimes I forget myself.
>The real answer is that we don't yet know enough about how the brain works to work effectively on this problem.
If we imagine there's a God, and I could ask any one question about the brain and get an answer, all I would need to ask is:
"Hey God: w.r.t. the human brain, are there any special shenanigans, like it being connected to a soul that's responsible for consciousness, or is it WYSIWYG, just a bunch of cells and that's it?"
Luckily for you, there is no God and I can answer definitively: no, there are no shenanigans. It's just some cells and that's it. I am telling you this definitively. There are no metaphysical shenanigans going on in the human brain.
Note: you might wonder why, for this answer, I decided to phrase it in terms of asking God. The answer is in order to activate the natural scientist's reaction "that's silly, God can't tell you there's no God like that". Well, brain metaphysics is exactly and equally silly. It's just some cells, that's all.
It's true that it isn't something we can say definitively. It's more than a philosophical belief, however.
There isn't any evidence that there is anything happening that's not biological, despite thousands of years of searching for that evidence.
And there is all kinds of evidence that biological processes can explain our experience. More all the time.
It's not definitive in the sense that we're anywhere near completely understanding our biology.
But it's the most likely explanation, given what we do know, by such a huge margin that there is no real alternative explanation outside of mythology.
> There isn't any evidence that there is anything happening that's not biological, despite thousands of years of searching for that evidence.
Do you consider software in a computer to be mechanical? You could record the 1010101 in an electricity stream and conclude there is no meaning to that electricity beyond getting from point A to point B in a chip.
> There isn't any evidence that there is anything happening that's not biological, despite thousands of years of searching for that evidence.
If we're looking for something supernatural like a soul, then we shouldn't expect to see natural evidence. Thus the lack of that evidence does not imply its nonexistence. Relevant xkcd: https://www.xkcd.com/638/
It's also the kettle calling itself black. The kettle, the calling, the self: that's all G..Nature..Universe, whatever you wanna call it. You cannot be sure you are anything in this universe. Even a sentence like "I think therefore I am" requires short-term memory; the universe could have just created you at this very moment. It's all subject to corruption, subterfuge, and undecidability. Big whoop, you find out that the brain is connected to your thinking; that still doesn't explain why you, why the universe, why the brain, and why the thinking.
You simply don't know; that is my point. By continually asserting it, you are demonstrating a lack of actual scientific thinking. It is convenient to trust one's memory and the physics of the universe as consistent and indubitably pure. But you have been around less than 100 years in a universe of billions of years, so claiming any kind of self-assuredness about the universe's or consciousness's patterns is total foolishness. A glimpse doesn't warrant authority to describe the entire landscape.
Hey you're right I forgot to mention that MAYBE the brain might have a long cord going off into soul land and that's where the real stuff happens. You know - in the cloud. It's just that maybe we just haven't found that cord yet. It's like super super transparent! Hard to see. Wireless, even. That's why you can't think in a faraday cage. No access to the cloud.
see how silly that sounds.
so on one level sure you're right; on the other hand it's obvious. no, there's obviously no super transparent cord going off to soul land. it's just a bunch of self-contained cells. It's not wireless. It's not even networked. It's just limited to within your body. You can literally plunge yourself in water and still think.
you have no cord going off into the ether. you are not a networked component. you are just a bunch of cells in one little package and that's it. That's you. your output is what your body does robotically (move, make sounds), your input is sensory, and your consciousness is whatever your brain cells are doing.
You forgot to mention that your theory requires that _nothing_ gave birth to _everything_ via _undefined behaviours_ with intelligence and consciousness being mere side effects. That's at least as silly as the other explanation.
> there's obviously no super transparent cord going off to soul land.
You make it sound extra silly by using this type of language. Remember when hardcore scientists were saying "my dear, you must be mad to think the earth is round", or "yeah, OBVIOUSLY the stars are pinned to crystal spheres, they can't just fly, that would be silly"?
Anyway, the only thing we can be sure of is that we just don't know. Everything else is simply stories we tell ourselves to make us feel better. We play little games, but not all games offer the same experiences.
I make it silly because it's silly. either there's a wireless soul organ connecting us to the cloud, where the REAL thinking happens, or it's just some meat: exactly what it looks like.
what do you think the chances are of finding a small metaphysical soul organ in the brain that connects us to the ether where all the real consciousness happens?
Your whole logic is based on materialistic principles. You seem to be looking for a physical "cord" linking us to a physical "place" where consciousness "happens". You can't consider what I'm hinting at if you stay in that paradigm.
What if your base assumptions are flawed? The current materialistic approach is based on flawed 15th-century intuitions, just as the previous assumptions were based on other flawed intuitions (gold can be made from any element, stars are on crystal spheres, the earth is the center of the universe, &c.)
What makes you think our current assumptions are the absolute truth?
An example of how it could happen is:
- the material universe is a simulation/mechanism that is taking place in a higher level universe with unknown properties.
- conscious processes are associated with that higher level.
- the "wetware" aspects of consciousness as neural software in the brain are not fundamental and/or are maintained as part of the simulation.
I can tell you definitively that even if there is a simulation, it doesn't treat the brain specially or differently from any other matter in our universe. the brain has no special properties. it is not a radio into a higher-level universe. it's just stuff, same as all the other stuff. no shenanigans, I guarantee it.
I don't believe it's possible to be definitive about such matters. The way I interpret your "definitive" is "I very much believe this to be so." That is well and good, but I do not.
Consider the fact that brain damage can change a person's personality. That strikes me as a powerful indication that there is not a higher level on which the consciousness lives with an interface to the wetware.
hey, God here. just a note that I did put a small radio in everyone's brain for communicating with higher dimensions that mere physics can never touch. When this radio gets damaged, it falls back to 802.11n, and this slower connectivity is what causes the changes in behavior and personality. It's basically a connectivity issue. No, the processing isn't going on in the brain, but you still need a good, fast connection to the soul realm, and at the moment the only way to do that is with the consciousness organ I designed. the brain is basically a thin client and the soul organ is the network card. hope this helps.
--
okay so now what are the chances that I'm God and really just said that? If you said anything over 0.00000000% you're totally wrong. There is no chance of that because it's stupid. the above paragraph is obviously satire, because it's stupid.
Your opinions here are very arrogant. You are presuming that lack of evidence is proof of a negative. I'm pretty sure that's so far outside the scientific method that it's as much quackery as homeopathic "medicine".
You are presuming that the physical is all there is to existence. You fail to consider the possibility that there are portions of reality that we don't have the physical capability to perceive or the mental capability to truly understand.
There's a difference between saying "there is no evidence of X" versus "there is no evidence of X, so X is impossible"
what do you put the chances at that there's a soul organ in the brain that acts like a radio into a soul dimension, where consciousness occurs? (rather than as an emergent property of the matter, with the brain being no different than any other matter in our Universe.)
I estimate that there is a non-zero chance that consciousness itself is something we will never truly understand. Would you ever expect a piece of software to be able to truly understand the things that drive its actual consciousness, should we ever figure out how to create truly sentient AI? I don't honestly think one could, without speaking directly to their creator. And since the existence of a creator of human consciousness is purely a thing of speculation, I don't see us ever being able to do such a thing (should they exist) until we pass through what we know as death. At that point, I feel that there's a non-zero chance that our consciousness does indeed continue on in some form of existence. What that form is, where it resides, or if it even has physical properties, I don't know, and I don't think we'll ever know, until we cross the threshold of death as individuals.
I feel that consciousness itself is something non-physical. Whether it be a specific cocktail of neurotransmitters working in concert to give us the characteristics that we attribute to sentience, or a "core" form of existence that exists outside of our physical existence, I don't know, and I don't presume to know. I also don't presume I should be going around and acting like I can say with complete authority and accuracy "X doesn't exist in any way, shape, or form, because there is no evidence". I mean, what of the many other "scientific facts" humans have revised and subsequently rejected over a few millennia?
I don't object to what you've just written. I expect that even if we had conscious robots that we programmed with AI software, connected to sensors and aware of themselves (similar to Boston Dynamics' humanoid and dog-like robots, if we also add in a large neural software brain), having them be conscious by obvious virtue of running software we developed/coded/ran genetic algorithms on wouldn't mean we understand that consciousness.
I'd say we would have the chance to have a much better understanding of that type of consciousness than we would our own, unless such an AI were to come about spontaneously from a long string of machine learning such that we don't have any clue about the inner machinations.
Interesting. Three points: one from science, one from spirituality, and one from literature.
1) This article talks about "causal entropic force" which can give "intentions" to inanimate object. https://www.wired.com/2017/02/life-death-spring-disorder/
2) I read somewhere that instead of figuring out whether the "self" exists or not, the Buddha suggested observing and realizing how the concept of "self" arises.
3) In the book "Of Human Bondage", the author writes that the concept of self arises from pain.
> I went through Stanford CS at the peak of the 1980s expert system boom. Back then, people there were way too much into asking questions like this. "Does a rock have intentions?" was an exam question. The "AI winter" followed. AI finally got unstuck 20 years later when the machine learning people and their "shut up and calculate" approach started working.
Isn't it the other way round, though?
They were twiddling their thumbs back then because they had no other option. There was no way to do machine learning back then. I've played with perceptrons on '90s hardware and it was basically just a toy.
And then Moore's law opened the flood gates some decades later.
I used a Cray Y-MP back then but you're still right... My iPhone is faster than that hardware now, which is pretty neat.
It wasn't so much "thumb twiddling" though, there was a lot of work being done on systems which were more focused on knowledge representation (like Cyc [1], which still exists). Also a lot of work was being done from a more Psychological direction (mental models, scripts etc) and from a physical/neuro-science direction (brainz!).
These were all happening simultaneously and it wasn't clear (partly because of the MIPS issue you mention) that ML was the winning pony (for now) and I still appreciate the broad spectrum of knowledge covered in my particular Cognitive Science program.
It seems obvious now, but back then it wasn't obvious that "AI" required a learning system at all. Knowledge Engineering was a popular approach, and rules based systems running over knowledge bases were supposed to be the path to AI.
And don't forget Minsky's decimation of neural network research at the start of the 1970s [1], which led to major research centers like MIT ignoring them completely.
Personally, I think the brain as a data processing unit is a model that's seriously limiting the way many philosophers and neuroscientists think about this problem. You may be able to correlate an individual's reported experience of periwinkle down to the exact number of potassium ions crossing the cell membrane of every relevant neuron upon their acknowledgment of the color, but you will still know nothing about why it feels like something to see periwinkle.
Maybe the focus on the brain is part of the problem. Sure, neuroscience has yielded some interesting results, but consciousness is a social and behavioral phenomenon, and the brain evolved to satisfy such social and behavioral constraints. The acceptance per se of the Turing test suggests the brain may be irrelevant here.
We do. We know enough to state with absolute certainty that it's an emergent property. Nothing in your head is conscious. Nothing. Not even the whole of the human mind. It's in the "software", it's something you learned (and therefore did not have even when you were born, not even until quite a bit after that).
It's equally clear that most of what we associate with consciousness, such as thinking, awareness of the body and the moment and time and decision making and ... doesn't exist either. Because time and time again studies prove that when a decision is made (this is well studied in traffic for instance) there are no conscious reasons. Reasons only happen afterwards.
Is it therefore such a stretch to say that consciousness simply doesn't exist until long after the fact, and it is only once we ask one of these bags of mostly water to explain themselves (or ... well when we ask them something) that any trace of consciousness, at least the way humans understand it, is actually forthcoming ?
Consciousness is a trick. A learned trick. Human minds are not conscious, and it is most definitely not a certainty that they, even when born fully formed and healthy, will become conscious. (Read the reports on children raised by animals. They are old, sometimes 20 years old, and they most definitely aren't conscious, not even on the level that a primate is conscious. The 12-year-old boy they found in the wild in France never learned to speak, only to articulate two words.)
This is weird, because this is not most humans' experience. Everyone around them always had consciousness. But let's compare. Everybody who has kids realizes that memory, firstly, isn't actually memory. We are very much not storing events in our brain when they happen. We learn a trick, because our parents keep referring to our past and "what we've done": we learn to calculate back from our current state of mind to what happened before.
That, and of course philosophers have a millennium or 3 of history of ... philosophers getting consciousness wrong. Consciousness has at various points in history been equated with being religious, with being able to rhyme, with composing music, with being able to talk and explain ourselves, with being able to love, with convincing a professor (via chat) that you are conscious, with solving problems (all kinds), with walking around, with playing chess, with ... all of these are now of course considered wrong. Why ? Mostly because things that definitely aren't conscious, from little dumb tricks, even mechanical contraptions in some cases, to rule-based engines, to deep learning and now reinforcement learning algorithms, can do all of this.
So can we please just conclude that whatever this article claims is ... wrong ? Just wrong. Nothing of value, other than perhaps to interest a few people in stories with enough alcohol present. The current consensus seems to be that more details will be forthcoming the first time a reinforcement learning algorithm gets far enough to explain its actions. So you want to know more ? Start there.
You assume that there is a physical reality "out there", because you perceive and you experience it (or you don't, in which case you are a p-zombie). The theory you are proposing is probably the most widely accepted theory in scientific circles, which is that consciousness is an emergent property of the brain, but that ultimately everything that causes this consciousness are just impersonal, physical events devoid of an inherent quality of "experiencing". When you die, that consciousness ceases to exist. Fair enough.
However, I think the article is simply suggesting to invert this assumption about physical reality. It proposes that for something to be "out there", you first need an "in here" (rather than afterwards), i.e. an experiencing of forms. This would be consciousness. At this point we are not even talking about decision making, thinking, memory, intellectual pursuits... Just subjective experience. So everything you're taking into the discussion regarding how thinking and decision making and memory happens is really a bridge further; not immediately relevant to the point the author is making.
I understand if this line of reasoning feels uncomfortable. You were literally pleading with people to think that this is wrong. I think that is a mistake. There is value in challenging your assumptions, even if it is only philosophical, with ethical/moral ramifications.
> It's equally clear that most of what we associate with consciousness, such as thinking, awareness of the body and the moment and time and decision making and ... doesn't exist either. Because time and time again studies prove that when a decision is made (this is well studied in traffic for instance) there are no conscious reasons. Reasons only happen afterwards.
Since a person can tell what they experience and what they do not, the distinction between conscious experience and unconscious processing must have a base in physical reality (brain activity). With sufficiently advanced technology, one could analyze the brain processes and see which processes are associated with the reported conscious experience.
The fact that not all brain activity is associated with conscious experience in no way implies that conscious experience does not exist.
No, the fact that people have grown up with very limited or even arguably nonexistent consciousness, yet perfectly functional, capable and alive, implies exactly that. Functional enough to survive 7+ years by himself in a French forest. Add that these cases prove that perfectly healthy and normal human minds might never achieve consciousness, or at least nothing exceeding whatever consciousness a cat or dog achieves. There was no medical problem preventing consciousness; we don't even really know what the problem was, or perhaps I should say which of the many, many problems this child faced caused this. Lack of human contact ? Upbringing by wolves (assuming that actually happened) ? Was it surviving by himself ? Was it the water ? Perhaps a forest is just a uniquely bad environment for kids ? Perhaps even that specific forest ? Perhaps there was a human or animal or even something else in that forest that somehow further traumatized this kid ?
You'd have to give definitions of consciousness that don't include human contact, don't include language, symbols, any human other than yourself at all, or any thoughts at all not related to short-term survival, don't involve realizing you (as a human) are obviously not a wolf, ...
It also means that there is a period where you can be taught consciousness, and clearly if it doesn't happen before 7 years of age, you will never learn it.
I agree with most of your points, but I see no reason to assume that someone who cannot use language is not conscious. After all, use of language is just one of the brain’s functions. It’s not an emergent result of logical reasoning or something like that. On the contrary, it’s a task the brain evolved to perform, and has “hardware acceleration” for – that is, regions of its genetically encoded floor plan which are dedicated to it. If that accelerator is disabled due to not having been initialized properly… that’s no reason to conclude that the rest of the system is also defective.
It does seem clear that language assists consciousness in most people – e.g. most people report experiencing an internal narrative. But some people don’t. And even if everyone did, I don’t think that would be strong enough evidence to conclude that language is required for consciousness.
Given that most definitions of consciousness I've seen essentially express that you are capable of symbolic/abstract thinking, I'd say:
1) (extreme) autistic person that doesn't speak, but can, and arguably thinks too abstract, rather than not enough : yes, conscious. Probably more conscious in some sense than "normal" people, whose consciousness is more a group thing, or at least less independent.
(also: not speaking is a pretty extreme form of autism, certainly not something you'd see in your average school)
2) person that grew up without ever having any reason to learn symbolic or abstract thinking ? No, not conscious
But it's going to be a sliding-scale thing. By some measures a cat and a dog are conscious because, well, because they are certainly capable of making humans think they are suffering (and therefore they both think and feel, which is the direction consciousness definitions are heading now; fish, for instance, are not). This seems to me a really bad way to define it, but it's certainly widely used.
Didn't all organic life arise from inorganic molecular structures like rocks?
As a result of the theory of the big bang, molecular structures have progressively evolved in complexity, eventually becoming so complex that the boundaries of physics and chemistry are transcended into biology, life, and consciousness.
This suggests that "rocks" -- inorganic molecular structures -- indeed have "intentions" to the extent that they are primordial building blocks of consciousness.
Please don't say things like this, the theory of the big bang says no such thing at all. What it says is how the universe expanded from a very high-density and high-temperature state.
There are entirely separate theories to explain how that matter, after it arose, interacts with itself to give rise to chemistry. Then there is the origin of life, which is another problem. And then we get to evolution, which is how the initial life modified itself to become the species we have today. And then we have a bunch of other theories that explain how the brain operates.
Any one of these theories could be wrong, but that wouldn't invalidate any of the other ones. Some of them we have much more data and certainty on than others. But the only people who talk as if they were all the same thing are creationists, not scientists.
It is probably interconnected, but it would be a darn shame if all of our data on how suns are created gets "invalidated" by pop sci articles if we find data that changes our origins of the universe theory.
... and prescient life. Don't forget the next stage on from sentience. The line between sentience and prescience also seems blurred, given how many humans nowadays report having flashes of the future.
There's no reason to think that the laws of physics are "perfectly" tuned for consciousness. This could be the universe where there's a trillion-to-one shot of its evolving, and we beat the odds.
The Anthropic Principle rightly points out that the question of "why does the universe support life" is fundamentally circular.
Who is suggesting that consciousness is completely dependent on having "the laws of physics perfectly tuned"? Obviously there are other conditions outside the scope of just physics that must be met to foster life
The idea that consciousness comes first has been known in Eastern philosophy as non-dualism (advaita vedanta) -- everything is consciousness. The basic idea is that it is impossible to experience anything outside of our consciousness -- thus any assumption that there is something outside of consciousness is just that: an assumption, or belief. We can theorize, we can argue, but it will always remain a belief, because it's not possible to experience anything outside of consciousness.
I'd like to share Rupert Spira, a modern non-dualist teacher who holds this viewpoint. Here is one video in which he explains the consciousness-first approach to a scientist who holds to the materialist approach: https://www.youtube.com/watch?v=Qgcfa0LFKXc
Perhaps someone will find it interesting and peruse some of his other videos, which I find very enlightening.
I think the controversy of consciousness arises from (and is deeply tied to) the history of Western philosophy and science: the "death of God", matter/spirit and mind/body duality. Something that's not widely acknowledged is how Indian philosophy (Hindu, Buddhist) had significant influence in the course of that history.
A major assumption of the currently dominant worldview is that there's no God, spirit, and even "mind" is questionable. Everything must be explainable as physics, and layers above like mechanics, chemistry, biology. Psychology as a field - in the "West", which is basically a global culture now - is based on that assumption.
The word "consciousness" is so ill-defined and the concept so misunderstood, mainly because it's mixed up with ideas of free will, mind, spirit - the animating principle. It's just the most modern term for categorizing and trying to understand a class of phenomena.
Seeing how "consciousness studies" is widely considered a pseudo-science, I suspect that it's actually related to some critical "flaw" in the fundamentals of the modern worldview, the assumption of a completely physical universe - "physical" meaning consistent with the science of physics.
What's fascinating for me is how quantum mechanics and its philosophical speculations about the role of the observer seems to be causing a paradigm shift, which is taking decades (almost a century) to sink in. We seem to be redefining consciousness as a fundamental property of physics, with some even theorizing that consciousness plays a role in bringing the universe into existence.
As a fan of both Indian philosophy and Western science, I'm greatly enjoying the battle of the ideas (often heated arguments and accusations of "woowoo" pseudo-scientific thinking), the struggle to understand the nature of consciousness deeply and rigorously, and the evolution of science and our worldviews.
> Everything must be explainable as physics, and layers above like mechanics, chemistry, biology. Psychology as a field - in the "West", which is basically a global culture now - is based on that assumption.
That's right. It's pretty amazing how much is based off of that assumption which has no realistic basis. I guess it's a "convenient" assumption.
But if we start to think that hey, maybe consciousness is the root of it all, not matter, then we can see why science doesn't understand consciousness at all: it's like trying to find the screen while studying the pictures on it. You can study all the biology, physics, and matter on the screen, but you won't find the screen in the details. In this analogy, consciousness is the "screen" in which all appears. I think mainstream science will shift MASSIVELY once they start looking into it as a legitimate possibility.
> That's right. It's pretty amazing how much is based off of that assumption which has no realistic basis. I guess it's a "convenient" assumption.
Beliefs and speculations also have no realistic basis. We can't prove, reproduce or properly model them in an objective manner.
One definition of science states that it is "the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment."
I hold the view that philosophy precedes science, i.e., not all branches of philosophy can be regarded as science.
> But if we start to think that hey, maybe consciousness is the root of it all, not matter (...)
From your statement, there's nothing wrong going that way, but you'll have a hard time trying to defend it as science if it's based on beliefs.
Given that we and everything around us consists of particles obeying the laws of physics, I don’t think it’s odd that the burden of proof should lie with those suggesting the existence of something else.
Remember, we are to the best of our knowledge beings that have evolved from simple cellular organisms obeying the laws of physics. The idea that through that process of evolution we have somehow broken out of the sandbox is extraordinary enough that it would need pretty compelling evidence, no matter how attractive the idea might be.
> Given that we and everything around us consists of particles obeying the laws of physics, I don’t think it’s odd that the burden of proof should lie with those suggesting the existence of something else.
If anything, the burden of proof does lie on those who say the particles are "out there" and give rise to consciousness, because experience says otherwise. Everything you and the scientists may study happens within their own consciousness. It is not possible otherwise. We can only know the truth if we experience it, and anything else is a belief system until proven otherwise. Thus it is with those that claim there is an "out there" outside of consciousness that lies the burden of proof.
No one is denying the physical world exists and all that goes with it, including the evolution of physical matter. The question is what comes first: the consciousness or the physical?
> thus any assumption there is something outside of consciousness is just that
I've also watched the video you linked to. His argument is indeed strong, but the scientist was rather weak. They (seemed to) agree that experience is mediated by our physical bodies (brains). So did the universe (matter) exist when there was no evolved consciousness to observe it? If the matter did not exist before the mind, how did we come to be?
Yes, our whole world could be somebody's dream or we could be "brains in the vat", but then the question is only changed to "who is dreaming or maintaining the vat"? And since we cannot observe the dreamer, does he exist?
> The basic idea is that it is impossible to experience anything outside of our consciousness
Often in a crowded work environment people will be talking and yet I will have no conscious awareness of what they are saying. Yet, the second they mention my name, my attention will snap to what they are saying.
Clearly there is some unconscious (and yet intelligent and aware) part of me that is experiencing reality, just waiting for the right trigger to alert my consciousness to some important development.
Like sutterbomb said, focus isn't exactly the consciousness we're talking about here. Although clearly if your name hadn't entered your consciousness you wouldn't have noticed it :)
The question of focus is pretty interesting to me, though. It seems that we are consciousness that gets to "decide" where to put our attention. There are a ton of things that can attract our attention, all through the mechanisms created by the consciousness, namely the five physical senses, our thoughts, and feelings.
I thought it was fascinating, as I read the thesis statement, that over two hundred years later we still haven't left Immanuel Kant's orbit. The author ended the article citing Kant's proposition that space and time belong to the mind rather than being properties of external reality. However, Kant directly answers her question (paraphrasing, "What is it that lends perception the power of perceiving?") with a technical term, original apperception. More concretely, this means that the structure of consciousness, no matter its belonging to subjectivity (so-called empirical apperception, your spontaneous sense of selfhood), is itself objective (those terms are actually one in Kant: universal === necessary). There are readings of Kant that go further and suggest that math, by extension, must therefore be the descriptor of anything that can exist.
Interesting enough, the grandfather of the modern Left, Michel Foucault, spent a considerable amount of his career trying to dislodge Kant's claim before coming upon the realization that power informs our perceptions.
In order to understand Kant, first read David Hume's An Enquiry Concerning Human Understanding. Kant's major ideas are entirely a response to this essay (we English speakers are lucky to have this crucial piece of the Enlightenment in our native tongue). Hume argues that cause and effect are entirely empirical concepts, which has the implication that we can't actually talk about "eternal laws of nature" with any sense. Kant wrote The Critique of Pure Reason and his subsequent critiques in the trilogy to argue that the laws of nature are laws because they are the laws of our ability to experience subjectivity at all. The Critique is very dense and technically written, and the English translations do little to abate this. I would recommend reading it with a companion commentary text, though unfortunately that wasn't the path I took, so I can't pick out a specific one.
Not the user you replied to, but there's no harm (and in my judgement great benefit) from diving right into Kant, or more generally, German idealism - so Kant, Fichte, Schelling and Hegel. Marx is also worth visiting for his "Hegelian" materialism (in this case opposed to idealism). That'll provide the basics to know what Foucault was talking about.
The Cambridge Companion to German Idealism (I forget which year) is also highly recommended though I haven't looked into that myself.
Also, to plug my own favorite dead German guy, Schopenhauer spends a lot of time in his writings explaining (his interpretation of) Kant's ideas, and his prose is much easier to understand than Kant's, even in translation. You won't get any love for Fichte, Schelling, and Hegel from him though.
It should be noted that even Kant said that Hume "awoke me [Kant] from my dogmatic slumber." Hume is pivotal to Kant's work, so reading the Enquiry before Kant is a good idea.
While true (and I realize the irony of saying this since I mentioned building up to Foucault), I don't think one is harmed so much by simply starting. A lot of people try to pile on more and more prerequisites, and I think it's less productive; for every person who tells you to read Hume before Kant there are ten who will tell you to read Berkeley before Hume. Personally I simply entered through what interested me: first Marx (which is an ongoing love), then Hegel, and now Kant. Heraclitus can come later.
What about people that have a distaste for authority? Would this distrust dissolve how power informs our perceptions?
Is there something in the transition period in the teenage years that also sets the foundation of perceptions? I ask because that is a time it seems where we take the most risks and question everything.
Disagreement is shaped as much by power as agreement, because in both cases you accept the framing of what gets discussed posed by the people in power. Truly claiming that power for yourself requires breaking out of the frame entirely and directing your attention where you want it.
For example, public educators have near-absolute power over K-12 students in the U.S. Many students rebel against this (I certainly did), and do things like argue about homework or refuse to go to class. But that accepts the educators' power as legitimate; if it weren't, you wouldn't bother to rebel against it! Someone truly intent on seizing power for themselves would devote the minimum amount of effort and attention to pleasing his teachers, and then go off and write a machine-learning based MP3 player that he can go sell to Microsoft for a million dollars.
> But that accepts the educators' power as legitimate; if it weren't, you wouldn't bother to rebel against it!
On the contrary, it only accepts that their power exists. That is not the same as accepting its legitimacy. If you accepted their power as legitimate then you wouldn't be rebelling! The rebellion occurs because of this discrepancy between what is and what ought to be, as the student perceives it.
You're still accepting the frame: you're putting your energy and time into fighting against existing structures, which robs you of that energy for creating new structures.
People who actually hold power just go about their lives as if the world they wished to exist actually exists. That's what it means to have power - that you get to live in your version of the world.
I am not disputing any of that, but what you said before was that rebellion "accepts the educators' power as legitimate", which is incorrect. The act of rebellion indicates acceptance that the educators have power, but it rejects the idea that this power is legitimate. This is a far more constructive basis for realizing change than pretending that the very real power which the educators have over the students does not exist. If you want to move beyond fantasy and make your preferred version of the world a reality you first need to be willing to face the truth of the world as it actually exists. Accepting where you are is just as important as visualizing where you want to go.
Distrust of authority does not change the fact that power informs our perceptions.
Imagine trying to have a conversation in a loud room. You struggle to hear the person you're conversing with. The loudness of the room informs your perception of the conversation. You might not enjoy the loud room, but it's nonetheless there. And your frustration with the loud room is probably affecting your responses to the conversation.
This article begs the question of our consciousness not being a physical process, which is cool, I guess, if you're peddling thoughts from a dualist from over a century ago. I still have no reason, however, to believe our consciousness doesn't arise from the physical configuration and other processes therein related. To paraphrase: "Science can't talk about this purely in physical terms." No, science simply hasn't FOUND the way to talk about it in physical terms, which I personally believe in time we will. To be absolutely fair, as you may have noticed, I'm in the camp of people who think Kant is largely garbage, so I have a natural bias against works using his thought on the matter.
Your assumption that consciousness is only physical is merely that - another assumption. It proves nothing, and claiming otherwise begs the question.
The problem is that all scientific results around the consciousness question derive from what people report about their personal experience. There's no other known way to answer any questions about consciousness, and science hasn't discovered any way to answer questions about the immaterial.
Hence, from a scientific perspective, it's not a question for which an answer can be deduced from observation -- so questions about it are left to philosophical inquiry (reasoning deductively from first principles, instead of inductively from observation) or religion-based worldviews (which can be coherently accepted/rejected based on their correspondence to reality and internal consistency).
Science is better than this. We don't need to directly observe something, it's OK to be able to just indirectly observe.
So, let's assume that consciousness is only physical. What would be the implications of that? It would imply that other physical objects can interact with it. We see plenty of evidence of that, with victims of brain damage, or when using drugs.
Now, to assume that consciousness is not physical, not only do you need a mechanism for it to interact with our physical world (since it can order our bodies to do stuff) but also one for the physical world to act on it.
Hence, from a scientific perspective, it seems pretty clear that consciousness is physical.
You are thinking about consciousness as its contents. Drugs or brain damage change its contents, but don't change the presence of consciousness. Therefore that only proves that the contents are physical.
If you consider sleeping or fainting a loss of consciousness, I wouldn't be sure that's the case. Perhaps what happens then is that we lose the perception of the objects of consciousness and are conscious of a blank state; and as we have no point of reference and nothing to know, we mistakenly think that we were 'unconscious', while we were conscious of nothing.
> It would imply that other physical objects can interact with it. We see plenty of evidence of that, with victims of brain damage, or when using drugs.
And our senses. Another personal favourite example: being bludgeoned into temporary unconsciousness.
> to assume that consciousness is not physical, not only do you need a mechanism for it to interact with our physical world (since it can order our bodies to do stuff) but also one for the physical world to act on it
I've seen this argument before - a variation of it ties in the physical principle of conservation of energy - but I'm not sure it really holds. It assumes a pretty 'strong' dualistic model.
Even if we take the starting assumption that the mind arises from the physical world, we could say that the mind exists in a mind space, rather than a physical one. I believe David Chalmers' theory of mind takes a similar line (disclaimer: I haven't read it). [0]
If a dualistic model really does propose a suspension of the physical order, well, they've already lost.
Related: I like the way Dan Dennett answers the question "Is the mind physical?": it's physical the way a center of gravity is physical. It's not a particle, or something you can touch, but it arises from the physical world.
> Science is better than this. We don't need to directly observe something, it's OK to be able to just indirectly observe.
Indirect observations involve forming a testable (and falsifiable) hypothesis. What you're doing above is more like an attempt at proof by contradiction...
But I can assume the contrary viewpoint and deduce as well. Suppose human consciousness is non-physical. From observation (as you said) we know it can be affected by physical things -- brain damage, drugs, etc. So it must have a physical/nonphysical interface, probably in our brains.
You might say Occam's razor rules non-physicality out, since such an interface is a bit much to assume. But given that we don't have a meaningful way forward assuming physical-only, perhaps admitting one further assumption can help our inquiry - perhaps physical-only is too simple an explanation. As Einstein said, "Everything should be made as simple as possible, but not simpler."
What do you mean we have no meaningful way forward? If anything, recent advances in deep learning (and even the older neural networks) show, that we have a pretty good mathematical explanation of what consciousness could be. O-o
Imagine, if you will, that the brain is an antenna and consciousness is a soulful radio wave. If you destroy the brain, or make it inert, consciousness is lost; what have you shown? You may be tempted to claim that you demonstrated that consciousness arises from the brain, but this isn't the case here: The consciousness radio-wave still exists, but it is not being received.
The problem is a hard problem which may or may not be ill defined.
> The consciousness radio-wave still exists, but it is not being received.
So when someone dies, their consciousness continues, but is unseated from their body?
Presumably temporary unconsciousness is explained the same way?
How do you explain population increases or population decreases? Is there an infinite pool of consciousnesses, and only an infinitesimal proportion of them are being received at any given time?
Do drugs affect the receiver, or the consciousness (transmitter) itself? If it's the former, you've just conceded that some fundamental aspects of our consciousness are contingent on the receiver, and are independent of the transmitter. You can't very well answer the latter, as drugs exist firmly in the physical domain.
You'll also need to account for wildly different forms of consciousness (animals), the split-brain phenomenon, and why certain arrangements of molecules and their associated processes (i.e. living brains) can act as receivers but other closely related arrangements do not (dead brains, and living brains subject to general anesthesia). To steal a word from Dawkins, the whole thing seems unparsimonious in the extreme.
To mirror lostmsu's comment, this is a truly extraordinary claim, made in the total absence of supporting evidence. I'm not convinced it's even a coherent model.
Popper's criterion concerns itself with testability, not truth. Something can be both untestable and true.
Occam's razor says more about human psychology and beliefs than it does about reality.
In any case, I was not arguing that this scenario represents the true state of the universe; I am arguing against the grandparent's claim that we can conclude consciousness is physical without making certain assumptions about the nature and design of the universe, even if empirically that is our best guess.
I am not sure I'd care about a definition of "true" that does not fulfill Popper's criterion. That is the whole point of it.
Occam's razor is a tool people use to pick the best theory (in terms of size) among theories that otherwise describe the same universe.
The same applies to your last point: we simply pick the best theory at hand, and that argument does exactly that.
> The problem is that all scientific results around the consciousness question derive from what people report about their personal experience.
There are also correspondence tests between experience and behaviour.
> There's no other known way to answer any questions about consciousness, and science hasn't discovered any way to answer questions about the immaterial.
Because there's no such thing in science. If it's observable, then it will be absorbed into a scientific explanation. If it's not observable, then it must obtain by logical necessity, or it might as well not exist.
> If it is non-physical then it cannot have any impact on the physical world by definition.
Well, that depends on your definition.
If non-physical things have no interface with physical things, then they might as well not exist -- their non-existence is tautological and the hypothesis is meaningless. So the only meaningful "non-physical" hypothesis is one that allows an interface with the physical.
> the only meaningful "non-physical" hypothesis is one that allows an interface with the physical.
I don't think that's meaningful; it sounds to me like a contradiction in terms. If it is non-physical but can interact with physical objects as if it were physical, what is the label "non-physical" actually describing? In a world where there are physical entities but also non-physical entities with physical interactions, how is the non-physical entity distinct from the physical one in terms of observable reality?
> what is the label "non-physical" actually describing
In this case, it would be describing the consciousness phenomenon -- which we can't get at using normal, scientific, physical observations of the world.
If X exists, we all know it exists and can talk about it, but we have no way to observe it (in fact, all our observations are restricted to being through it) - then I'd venture we're on an edge of reality itself. In my opinion, a non-physical hypothesis here is allowable if it has more explanatory power than the alternative.
> which we can't get at using normal, scientific, physical observations of the world
Why? This would make it unusual compared to every other process in biology and everything we actually do know about consciousness. We know that most of the faculties we subjectively attribute to the conscious experience are rooted in physical biology (reasoning, instinct, emotion, memory etc), so I am not sure what the possibility of a non-physical component adds to the model.
The problem with religion-based worldviews is that they open us up to a vast number of made-up beliefs and lead to a huge variety of different axioms that become undebatable and completely mess up our public discourse. I think it's vastly preferable to just accept that we don't know the answer to some questions, rather than making something up.
I mean, the current state of science isn't that far from your description either.
Instead of "shut up it's magic" we have "shut up it's quantum physics", the experts in the field are the first ones to admit that they don't understand what they're doing/finding. "See that thing here, that's black matter", 10 years later: "Well actually that's some dark matter mixed with black matter, what is black matter your ask ? ¯\_(ツ)_/¯".
Science is so busy describing and analyzing every minute detail that it doesn't offer anything usable in the real world / daily life (on a personal level); that's where beliefs/philosophy intervene, imho.
> Science is too busy describing and analyzing every minute details
But that's what science is. It's the iterative process of observation and deduction -- a bottom-up ontology built from observation, experimentation, and interpretation. It's built on the philosophical assumption that reality is orderly, fundamental physical laws (and constants/quantities) are the same everywhere, and that they were the same in the past and will be the same in the future. Those are just working assumptions - we have no reason to assume they're universally true, but they've been very helpful in a practical sense.
Philosophy and religion, on the other hand, offer first principles and a system of inductive logic and reasoning based on them (well, those that are coherent - many are not). It's a top-down reasoning system based on (hopefully) just a few axioms, that (hopefully) provides coherent answers on questions of origin, meaning, morality, and destiny -- the sorts of answers the human heart needs to have a sense of context in life.
The problem with non-religion-based worldviews is that you still have to bridge the fact-value gap somehow, which necessarily entails "a huge variety of different axioms that become undebatable and completely mess up our public discourse". In fact it's worse: some religious worldviews are moored to truth claims, but secular-materialist moral philosophy fundamentally can't be.
With those as well, it's important to go for small axioms. For example Peter Singer's avoidance of suffering is a great example, much better for building a shared moral system than giant assertions like "abortion is evil".
Personal experience still counts as evidence. If you get a lot of different people reporting the same stuff then that’s a pretty solid indication that there’s some common basis for it.
From this, we know that the physical world affects consciousness in many ways. You can change its perceptions with alcohol. You can make it hallucinate with LSD. You can stop it entirely by applying force to the brain.
So either consciousness is physical or it is non-physical but somehow connected bidirectionally to the physical world. If it’s the latter, in what way is it actually non-physical? If there is some way in which it’s non-physical, shouldn’t that manifest as something that makes it act differently from physical objects? Some force that doesn’t perturb it or some attribute that remains constant when you’d expect it to change?
Sure it does. Imagine there’s a cave you can’t enter. You can send other people in, though. You send someone in and they tell you there’s a lion in there. You send someone else, they also say there’s a lion. You send a thousand people in and they all say there’s a lion. You send in people you’re certain have never met each other and they say there’s a lion. You send in people from cultures that haven’t contacted each other and they still say there’s a lion. You send in people who have no idea what a lion is and they say there’s a strange animal in there and the description matches a lion.
Put it all together and you have objective evidence that there is, in fact, a lion in that cave.
Without genetic testing you could not objectively know for sure if that was a lion or some other species.
Just because a lot of people agree on something based on shallow observations does not make it objective science. That's more the realm of subjective "soft sciences" like sociology.
> Just because a lot of people agree on something based on shallow observations does not make it objective science. That's more the realm of subjective "soft sciences" like sociology.
What if the people entering the cave make non-shallow observations and report it back to you?
If that's still not enough, then sadly science is not enough either since it relies heavily on cooperation (you can't test everything yourself).
Are you implying that nobody anywhere was ever objectively certain of the presence of a lion before the past few decades?
And how does genetic testing objectively tell you that it's a lion? Genetic testing tells you that its DNA is similar to something else you've previously identified as a lion, but if there's no way to be sure of that identification then you're just moving the problem.
Yes, up until the recent advent of genetic sequencing, we humans have often mistakenly considered two organisms that look the same to the naked eye as being the same.
You didn’t address my second point: how does genetic testing give you an objective measure of lionness when it’s still ultimately based on observations and subjective assessments?
Genetic testing is far more scientifically revealing than just eyeballing something because it's based on actual objective tests, data, and math.
Just like radio astronomy is far more scientifically revealing than just looking up at the night sky and declaring there's nothing more to the universe than meets the eye.
Typically a genetic variance of >2% indicates a different species
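The ">2% variance" heuristic above can be sketched as a simple per-site comparison of two aligned sequences. This is an illustrative toy, not real bioinformatics: the sequences, the alignment, and the exact 2% cutoff are all assumptions for the example.

```python
def genetic_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned sites that differ between two sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return mismatches / len(seq_a)

# Illustrative cutoff from the comment above: >2% divergence suggests
# a different species (real species delimitation is far messier).
SPECIES_THRESHOLD = 0.02

lion_a   = "ATGGCACTTACCGGA"
lion_b   = "ATGGCACTTACCGGA"  # identical sequence: distance 0.0
leopard  = "ATGGTACTAACCGAA"  # 3 of 15 sites differ: distance 0.2

print(genetic_distance(lion_a, lion_b))                       # 0.0
print(genetic_distance(lion_a, leopard) > SPECIES_THRESHOLD)  # True
```

In practice one would use alignment-aware distances and much longer marker sequences, but the point stands: the comparison yields a number you can test against a threshold, rather than an eyeball judgment.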
One reason to prefer the physical-causes-consciousness route is that it gives a fairly clear way to try to answer the question: investigate the physical. We'll know if we have useful answers when we can manipulate it / create a consciousness.
We don't currently have a clear direction to go to get answers. That's normal, and it doesn't at all imply that there is no direction. And while we investigate the physical, we get more side benefits, like better and better medications / treatments / tools / etc.
If consciousness is "something else", how do we make progress towards understanding it? What can we do with that information?
> it's not a question for which an answer can be deduced from observation
Another assumption. Throughout history, again and again, science has discovered ways to answer questions, and when science finally arrives, it brings the "final" answer, since no other method can compete. Haven't we learned this lesson yet? Why would consciousness be the final bastion?
Of course, philosophizing on issues like the OP article is incredibly valuable. After all, science is just philosophy with extra steps.
I challenge the assertion that something immaterial (other than information) is at play here.
Consciousness as we experience it is exactly what would appear if you were to instruct an intelligent system into thinking it had a subjective "I".
My theory is that consciousness is what appears when your theories of mind (your models of other people's minds) become elaborate enough that you need to embed a model of your own mind into your model of theirs (aka "how do they see me?").
Once you reach that point, you have the ability to hold a model of your own mind superimposed on your own perceptions. You also have your animal brain shouting at you that survival is essential and that you are unique (obviously beneficial traits to have), and it lets your rational mind rationalize as best it can.
Consciousness is just the (limited) ability to introspect your thoughts and model an "I". There is nothing happening there that can't be explained through information processing.
This article is talking about the question of consciousness as "experience" itself, not necessarily as the state of having some sort of intelligent subject (the "I").
You're very likely right that the functional characteristics of introspection and self-identification can be solved without anything "immaterial", but that still leaves open the question of how experience itself arises.
Well, I am arguing that no, it is not. And I don't see what makes qualia mysterious and different from raw sensor values.
"An organism has conscious mental states if and only if there is something that it is like to be that organism". I would argue that something that has a memory that can contain representations of its own internal states exhibits conscious mental states.
> Science simply hasn't FOUND [...] I personally believe in time we will
Let's call that Scientism. Beliefs about what may be discovered later are just beliefs, something that should be considered anathema to science itself. (Of course, these beliefs may or may not come true, and curiosity about them may indeed provide the insight that allows their eventual discovery.)
But in the meantime unfounded beliefs may indeed hamper further scientific insight and discoveries. Ironic, given the history of science & faith, that science might be stymied by limitations of its own faith.
> All questions that find an answer will have come through a science by definition of what science is.
No, that's a definition of scientism, which is very much pseudoscience. Your idea is so wrong that it excludes all answers to mathematical questions, which are, in fact, unscientific.
> No, Science simply hasn't FOUND the way to talk about it in physical terms which I personally believe in time we will.
Sure, but that's a largely useless belief.
People don't put enough value on the usefulness of a belief. They seem to think it is all about what is true from a data perspective, but truth is only valuable as both thought AND action, with emphasis on action over thought. Example:
- If you think gravity exists but don't behave as if it exists, you're gonna have a bad time because you will probably fall off a cliff, even though you're right about gravity in theory.
- If you don't think that gravity exists, but still behave as if gravity exists, you'll be alright because you won't fall off a cliff, even though you're wrong about gravity in theory.
If dualist thinking helps someone achieve the goals of a psychologist or therapist, then dualist thinking is more valuable than just waiting around for science to fill a gap it may never fill. And just because someone adopts a dualist perspective today doesn't mean they can't reject it tomorrow. There's no rule saying you have to believe the same thing your entire life. So if science comes up with a better explanation, the dualists can adapt then.
Science hasn't found, or even begun to find, a way to talk about this.
It's not like a physicist looking at an amoeba. That's a very complicated physical system, but it seems obvious that each of the parts can be progressively broken down into the fundamental physical forces and particles.
Consciousness... we can't even begin to talk about that in a fundamental physical way. The operation of the brain? Sure. But as far as we know, you could have an otherwise identical brain/computer firing circuits, producing actions, etc., without consciousness. Which gets to the heart of the issue: we have no way to observe consciousness outside our own.
> But as far as we know, you could have an otherwise identical brain/computer firing circuits, producing actions, etc without consciousness.
We don't know that at all. Can you describe how that would be like? Would a person with no consciousness not talk about their own consciousness? But talking about the self is an action. It seems pretty clear that the reason brains talk about their consciousness is their consciousness.
Nothing of the sort is clear. If you train a GPT-2 instance on a corpus of the philosophy of consciousness which has been rewritten into the first-person singular, and you then ask it a question which it answers with a discourse on "my consciousness", is it conscious? Your argument here says that it is.
You're assuming without evidence that consciousness is, and is only, an epiphenomenon of a sufficiently complex neural net, which you are of course welcome to do. But attempting to cloak that assumption in positive language, as you do here and elsewhere, is a bit of linguistic chicanery that doesn't deserve to pass entirely without comment.
Ok, so is there anything at all that a brain "with consciousness" can do that a brain "without consciousness" can't? If not, your definition of consciousness is entirely meaningless.
And, yes, no one really knows what that means. This is by way of being the whole point: that the question of what consciousness is and means is a matter for philosophy rather than science, as is everything else that can’t, or can’t yet, be precisely enough formulated to be susceptible to scientific inquiry.
(Personally, I take no position on the question, other than that as far as I’m concerned I have consciousness and that’s good enough for me, and I graciously extend the same privilege to all human and many nonhuman animals - dolphins, for instance, out of professional respect for a fellow species of highly successful bastards. I just think it’s fun to poke at unwarranted senses of certainty every now and again.)
I think you are very mistaken in assuming that GPT-2 cannot experience qualia.
I am not even sure it would need any additional training.
Or I misunderstand what you mean by qualia. But the only other interpretation of the word I can think of is identical to "being a specific physical system", which does not involve consciousness at all.
“Qualia” is a philosophical term of art referring to the subjective experiences that are considered to uniquely define consciousness. A creature without qualia perceives a noxious stimulus; a creature with them feels pain. If, as other commenters would have it, there is something of the “god of the gaps” about consciousness, then qualia are what fill those gaps.
This is miserably unsatisfying, but like I stated previously, consciousness cannot be externally observed with any known method.
> is there anything at all
The "what can it do" refers entirely to an internal characteristic known only to itself (or at least not known to me).
Is that then a meaningless concept? Perhaps. But in that case, I would suggest there simply isn't a "meaningful" definition of consciousness. It's just the Turing test.
Consciousness has been observed externally many times using a very well known method: a vast number of analogous reactions to a large variety of stimuli. This is the very reason we aren't all solipsists and believe in human consciousness in others.
And that's all that any agreed upon measurements and observations have ever been. Just because you can't (yet) quantify it or model it formally doesn't mean you can't observe it.
Creatures which we’d all agree do not possess consciousness also display “a vast number of analogous reactions to a large variety of stimuli”. What do you imagine it proves about consciousness that we do, too?
I was being succinct. It is not the quantity of reactions that is important, but the type and nature. The quantity is simply what makes it a reliable measure.
The type and nature don’t seem like mattering, either. How does external observation of behavior shed any light on a phenomenon only perceptible through internal experience?
Can you cite a source for this argument? I’d be interested in better understanding it, but your presentation of it has thus far not aided this goal.
There are no phenomena that are only perceptible through internal experience. (A term that you've basically introduced in a No True Scotsman fashion.)
Consider a radio antenna. A radio operates by responding to electrons sloshing around in a wire. The radio can 'experience external reality' iff that sloshing behaves analogously to something 'external': another antenna. This kind of analogy is what observation is, and it basically defines what an experience is. (Though in general, it doesn't need to be external: you can easily observe internal state or have a feedback loop.) Two antennas can be said to pick up the same signal only if their responses strongly correlate.
A brain doesn't work any differently, it's just a far more complex antenna that integrates more complex signals from more diverse sources. The only thing you have to do is establish a strong enough correlation (depending on the accuracy you care to assert) and to do that, you need to pick a number of aspects of the system, measure them, and ensure they correlate sufficiently nicely.
Again, this has nothing to do with the brain per se: all scientific measurements operate on the same idea. A large enough number of correlated observations is sufficient to establish whether two phenomena are the same. Humans are so good at doing this implicitly that we don't even question whether other people have emotions, thoughts, ideas, or experiences that differ significantly from our own. In fact, human development involves a great deal of social mimicry and exploration, which serve to constrain people's behavior to those things which communicate shared experiences particularly strongly. (Conversely, human behavior is not so unpredictable that we can't build an understanding of it.)
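The "two antennas pick up the same signal iff their responses strongly correlate" idea can be sketched numerically. This is a toy illustration with made-up signals and noise levels, not a claim about real radio hardware or brains:

```python
import math
import random

random.seed(0)

# Two "antennas" sample the same underlying sine-wave signal, each with
# independent noise; a third response is pure noise, unrelated to either.
n = 500
signal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
antenna_a = [s + random.gauss(0, 0.1) for s in signal]
antenna_b = [s + random.gauss(0, 0.1) for s in signal]
unrelated = [random.gauss(0, 1) for _ in range(n)]

def correlation(x, y):
    """Pearson correlation coefficient between two response series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(correlation(antenna_a, antenna_b))  # close to 1: "same signal"
print(correlation(antenna_a, unrelated))  # near 0: no shared signal
```

The point of the analogy is only that "same phenomenon" is operationalized as "responses that correlate strongly enough", which is how any two instruments are calibrated against each other.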
"Humans are so good at doing this implicitly that we don't even question if other people have emotions, thoughts, ideas, or experiences that differ significantly from our own."
And yet other people often do, to the extent that the concept of "neurotypicality", and its converse, are necessary components of (the closest thing we yet have to) a complete theory of mind.
It's interesting to me that you accuse me of the No True Scotsman fallacy while introducing the concept of some sort of external signals for which a human brain functions exclusively as a receiver. That reads like an attempt to introduce a mind-body duality, but given your prior commentary I doubt that is the case. The closest I can come to making sense of it is that you seem to argue that consciousness consists entirely in experience of, and response to, outside stimuli from other humans and from the environment, with no de novo contribution arising from within the person who experiences a given instance of consciousness.
Considering that this appears to be a sneaky attempt to define the concept of consciousness out of existence altogether, I have to assume I've misunderstood you somehow, because I can't imagine anyone would engage in such chicanery under the color of forthright and intellectually honest discussion. But we're talking so much past one another at this point that I do doubt the use of continuing any further.
There is no such thing as "internal experience" IMHO. You draw the boundary arbitrarily. Me + Wikipedia have a different idea of myself, than me without one.
The building blocks of this universe are "things" vibrating. That is all I know.
Consciousness is a very tricky problem. I often question what happens when a man loses his "mind". Is the being now just a machine with stored memory which responds to stimuli?
What happens when a person loses his memory? What role does consciousness play in this scenario?
How do we reconcile split personality disorder with consciousness?
Also, I look around and see the geometry of flowers and seeds. Geometry that emanates from the universe. Everything that looks chaotic at one level becomes extremely beautiful and organized at another.
Also, I see that everything is terribly interconnected. If we think deeply enough we can easily see that a stone lying outside and us are all the same as far as building blocks are concerned. The stone is not an unnecessary object, but our existence and the stone's existence are inextricable.
The universe, whatever is visible to me, is absolutely too grand and too well engineered to not have some sort of intelligence working behind it.
Not much intelligent about gravity coalescing matter.
But that had the side effect of releasing atomic energy in the form of stars. The rubble attracted by these stars orbited until it coalesced into planets.
Those planets are bombarded with atomic energy by stars. Chemical reactions happened that broke down this energy. Life, a.k.a. chemical reactions that are able to persist, started happening as a side effect.
The more robust and more intelligent reactions were able to persist through fluctuations in the environment, i.e., ice ages, meteors, etc.
We are nothing more than a persistent, stubborn reaction. A fungus on a hot rock. Maybe someday we can send some spores to another hot rock, and continue our fungal infestation. Provided that we don't consume all the resources here before that happens and fizzle out. In any case, I'm sure there is a fungus, perhaps a more evolved one, somewhere else in the universe that will.
The brain is connected to the gut via the vagus nerve. It's possible that humans are just a vehicle for bacteria to interact with their environment more quickly and substantially, in a similar way to how humans use cars as vehicles. I feel like I am an individual with my own consciousness, but it's possible it is all one consciousness shared by a super-network of bacteria.
> The universe, whatever is visible to me, is absolutely too grand and too well engineered to not have some sort of intelligence working behind it.
I think everyone can appreciate this viewpoint, but it's an emotional, visceral reaction to size- and timescales that are literally incomprehensible to the human mind. The complexities of nature engage one's sense of awe and strongly suggest the existence of an intelligent agent in some people, but this belief says nothing about whether that's actually true.
I understand the point the author wants to make, but I think they fail to make it.
As an example, the idea that "there could be a mind that eats food but doesn't taste it" is silly. We were always going to evolve a way to "scan" food for its properties. It just makes evolutionary sense: the more information, the better. Not to mention the reward aspect (there is some reward for doing everything that contributes to survival). Of course food tastes good.
Another example the author uses "red looks red" is equally unconsidered. It's a mental representation of light. There are evolutionary reasons for being able to distinguish colors, and they have to be represented mentally somehow. Why doesn't it look like blue? Who cares? All that matters is that it has a distinct representation.
Also in the article, the "why do rotten eggs smell bad" example... Because sulfurous compounds are the result of the metabolic processes of various bacteria. Because those bacteria are present in rotting things, which can cause illness, we have evolved to find them repellent.
Why are my experiences different from others? Because that's just how biological organisms beyond a certain complexity work. No two are alike.
A similarly obvious explanation exists for every example in the article. I see no compelling case that experience cannot be described through biological processes or that consciousness didn't arise from complexity.
I'm not saying there aren't interesting mysteries where consciousness is concerned, just that this article seems to completely fail to explore them.
I’m not sure of the point you’re making. The point is it’s entirely possible to conceive of a complex biological agent that can take actions on the basis of sensory input data without invoking the need for a subjective experience. That would be the ‘philosophical zombie’ described by David Chalmers.
However we have a subjective experience of what it ‘feels like’ to see red. Why is that needed?
Any agent which has the ability to perceive red must have some mechanism which corresponds to that percept. The percept of red has to be different to other percepts so that it is not mistaken for something that is not red. It is subjective because the agent has no mechanism for objective experience.
I think to conceive of a philosophical zombie, you have to say that consciousness is something uniquely special in that something possessing all its describing qualities is not it.
What reason is there to believe that subjective experience doesn't arise from the complex web of perceptions, sensations, neurochemical interactions and cognition that we call "I"?
It seems to me that the properties of consciousness would naturally follow from any generally intelligent system. An intelligent agent must be aware of phenomena in its environment, it must be able to distinguish phenomena (qualia), its experience is subjective to the extent of the limitations of its connectivity.
> The problem is that there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience.
I think before this can be said to be a problem, it should be explained how such a brain (with human-like intelligence) can exist without mechanisms corresponding to the properties of consciousness.
To me it seems obvious that any living thing with some level of intelligence would be conscious... as in, it takes in sensory data, is aware of its environment, and can make decisions based on that.
But I would even argue that most living things above a certain neuron count are conscious. I think it's really flawed to assume that only we as humans have an awareness of self and are "conscious".
I don't see the distinction between my awareness of myself and my environment, and that of my dog, for example. He is aware of himself, has ideas and acts on them, and interacts with his world consciously. It's as if humans are grasping for some sort of uniqueness in nature. If you were a robot with sensors and cameras fed into a generally intelligent neural network, you wouldn't see a display of the data on a HUD; you would be consciously immersed in the data. You would be the neural network. You would have an awareness of your environment, and you would be conscious of your existence in it.
I think consciousness is evolutionary. It allows living things to want to survive and preserve what they are. I think without consciousness a creature wouldn't have the strong drive for survival. In my opinion, it's what makes you long to continue your existence.
This is almost exactly what I would have written, so thanks. It took me a while to combat my childhood bias and arrive at this point. And now I feel like a lot of people similarly have to overcome their own internal bias and realize that consciousness isn't all that special. We search for intelligent life out in space, but intelligent, conscious life surrounds us.
I would actually argue the opposite using your same argument. It would take more assumptions to assume that we, a single species on the tree of life, experience life differently from all other animals.
What evidence do you have that other animals don't experience life in the same way we do? Why would we be any different from them?
Whoa, slow down :) I have no idea if anyone other than me is conscious. So no assumptions about humans v. animals.
It’s like playing a video game. Some of the other characters insist the game is multiplayer, but how can i know they’re not just bots pretending to be players?
I think the particular shape of normal human consciousness is a product of evolution, but I don't think consciousness itself is. People can experience altered states of consciousness which can be detrimental to their survival such as psychosis, disassociation, alexithymia (inability to perceive emotions of oneself and others), and aphantasia (inability to create mental imagery). Additionally some psychologists theorise that consciousness / intelligence is in fact a liability to survival because it allows us to ideate suicide as a solution to negative feelings, and that we have had to evolve mitigations to prevent this.
The problem I have with this is that you can then claim that anything and everything is conscious.
Create a Turing machine out of marbles and levers, and with the right configuration it's suddenly "conscious". Do you really believe that, given enough space, a bunch of marbles running along tracks and bouncing off levers can become aware that it is a giant marble machine?
The atoms in one pocket of the sun's chaotic fusion reaction might randomly and momentarily behave like an intelligent quantum computer - does that mean the sun is momentarily conscious from time to time?
Your comment got me thinking, so I'm going to ramble a bit. The sun being conscious makes sense to me. Not as we are, but then again nothing is as we are. Cats communicate with each other and cleverly explore and learn about their environment, but they aren't conscious like us.
Growing up, my vocabulary advanced waaay faster than my experience. I learned the word "nostalgia" well before I first felt nostalgic. In fact, I remember feeling it a few times about summers with friends who had moved before connecting the feeling with the word. It was a slap-on-the-forehead moment. I concluded that nostalgia was an inbuilt "thing"; everyone else probably experienced it in the same way. It's easy for me to consider nostalgia as just an inbuilt reaction to a certain kind of signal. (Something periodic that makes you feel good, then it stops. Recalling the period creates a bittersweet feeling.)
The space between consciousness and inanimate intuitively feels to me like a gradient. Various levels of brain damage might yield someone unresponsive to speech but responsive to pain. Then there are people who feel no pain, but otherwise are completely normal.
Therefore, I'd put "reacting to changes" on the lower end of the consciousness scale: the more changes something reacts to, and the more varied its reactions, the more conscious it is. We're talking about things between the sun and single-celled organisms. Single cells don't seem to do much rumination, but they get hungry.
Advanced consciousness seems to require heritable lessons and skills. A feral human that somehow survived alone on an island from birth wouldn't be conscious like the rest of us are, but I bet it would still feel nostalgia if its favorite berry went extinct.
I'm comfortable ascribing feelings to things with full knowledge they aren't feeling it like we are. I bet red giant stars feel fat and old.
> Create a Turing machine out of marbles and levers, and it's suddenly "conscious" with the right configuration. You really believe that given enough space, a bunch of marbles running along tracks bouncing off levers can become aware that it is a giant marble machine?
This is just defamiliarization. It's a widely held view that a computer with the right inputs, outputs, and software could realize that it is itself a computer program. The same software on a marble machine would be a lot harder to hook up to useful sensors, and would be far too large to be practical, but it's the same thing.
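The substrate-independence point can be made concrete with a toy example (my illustration, not from the comment above): a Turing machine is defined purely by its rule table, so the same rules could in principle be "run" by marbles and levers just as well as by the Python dict used here.

```python
# A tiny Turing machine that adds 1 to a binary number. The "hardware"
# is a Python dict, but nothing about the rules cares: any physical
# system that can hold symbols and follow the table computes the same thing.

def run_turing_machine(tape, rules, state="start", pos=0):
    """Run the machine until it enters the 'halt' state; return the tape."""
    tape = dict(enumerate(tape))  # sparse tape; "_" means a blank cell
    while state != "halt":
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table for binary increment; the head starts at the rightmost bit.
rules = {
    ("start", "1"): ("0", "L", "start"),   # 1 + carry = 0, carry left
    ("start", "0"): ("1", "R", "halt"),    # absorb the carry and stop
    ("start", "_"): ("1", "R", "halt"),    # overflow: write a new leading 1
}

print(run_turing_machine("011", rules, pos=2))  # prints "100" (3 + 1 = 4)
```

The rule table, not the dict, is the machine; swapping the dict for tracks and marbles changes nothing about what is computed.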
Since you happened to use marbles in the analogy, you may find "I Am a Strange Loop"[1] and the concept of simmballs and the careenium[2] interesting.
> The problem I have with this is that you can then claim that anything and everything is conscious.
The thing would have to have correlates of consciousness: mechanisms that implement the functions which constitute conscious thought. I see no reason why such a mechanism could not be built from machinery as general as a marble run, albeit a very large one. The sun, however, is a chaotic ball of plasma, so I can't see how it could play host to an arbitrarily complex mechanism.
Consciousness is something separate from environmental awareness. Consciousness is what lets you observe yourself carrying out your actions while thinking yourself to be the one running the show, even though it's entirely possible your behavior is not really "yours" to control but the product of the processes of your body and mind. In other words, the 'thing' inside you that's along for the ride of one quite immersive movie is what consciousness is.
When you write a program to generate a pseudo-random number, I doubt any person would seriously entertain the possibility that in that moment some entity puffs into existence, imagines itself picking a number, and then puffs back out of existence. But if that's true, any path toward artificial consciousness requires some rather extensive hand-waving and speculation that is not logically justifiable based on what we currently know.
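To make the point concrete, here is a minimal sketch (my illustration, not from the comment) of such a program: a linear congruential generator, one common way to compute pseudo-random numbers. Every output is a fixed arithmetic function of the previous state; nothing in it "picks" anything.

```python
# A linear congruential generator: each "random" number is just
# (a * state + c) mod m applied to the previous state. The same seed
# always yields the same sequence -- there is no choosing anywhere.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudo-random 32-bit integers."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen = lcg(seed=42)
first = next(gen)
# Rerunning with the same seed reproduces the exact same "choices".
assert first == next(lcg(seed=42))
```

The multiplier and increment here are conventional LCG constants; the point is only that the whole process is plain, replayable arithmetic.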
Hold up, is this thread being inundated by a religious group or something?
Where is the rational thought behind this consciousness discussion? If all of you "critical thinkers" are really responding with "You just don't want to accept that consciousness can't be explained with science" then I have lost my last remaining bit of hope in humanity's intelligence.
Claiming that science cannot explain consciousness is not "religious" or "irrational"; it simply goes back to what science is, namely that "ideas are tested by experiment". Science as we understand it today cannot really explain consciousness, the same way it cannot really explain what it feels like to be a bat. It can explain how bats use their senses to perceive the environment via the physical principles of sound transmission, but it can't convey what bats feel when they perceive it. Similarly, I cannot convey my internal conscious experiences to you. I can only try to describe them so that you interpret them in terms of your own conscious experiences, but I can never make you feel what I feel; or if I can, there's no way to tell.
Of course, in the future we might understand things like consciousness and qualia, and it's likely that the people who do so will be scientists. It won't be "science" as we understand it today, though; it will necessarily have to be something bigger. Let's call it "science+".
And, of course, just because science can't explain consciousness doesn't mean that religion can. It can certainly claim to be able to do it, but will be as convincing as the religion is at explaining everything else.
Science can't explain a lot of things, but it has a habit of not introducing needless entities in order to explain them away. There is a certain sense in which those ‘belief-based’ entities are qualitatively similar to religious mythology.
Define consciousness in a way such that it can, even theoretically, be objectively measured.
If it cannot be so defined, then by definition it cannot be explained within an empirical framework, or "science" if you insist. Empiricism requires objective measurement. The kind of consciousness people are talking about, the experience of being, cannot be objectively measured because it is a purely subjective concept.
If you want to talk purely about ideas that exist within an empirical framework, then you're talking about things like minds and brains, complexity of interactions, etc. All objectively measurable and perfectly scientific things.
But that isn't what people are talking about, because it isn't interesting. When you get right down to it, what people really want to know is how badly we're allowed to treat things. If something isn't "conscious", then we need not concern ourselves with its apparent suffering, need we?
Something is up. I haven’t seen this much scientific woo and pseudo-philosophical nonsense on HN at one time before, unless this is a part of the community I’ve never noticed. I think I’ve read at least three posts that end in “because quantum physics”.
There are two broad categories of comments that you've conflated together into an "irrational" label. The first is grounded in mystical woo-woo, and the second is grounded in philosophy of science (meta-science) and philosophy of mind.
I do think that consciousness likely can't be explained by science, if you take "science" to mean the scientific method and the models it produces, and not just "rational thought", which I would argue is not the same thing.
Science at its core is a series of predictive models; they don't tell you how things "are" (whatever that means), but rather how things behave within some error bound. I can see how it's dangerous to cast criticism on scientific results and theories, especially well-established ones, but that doesn't mean they are "facts" or universal, only that they've withstood repeated criticism quite well.
The problem that this article is talking about is that you can't observe experience (consciousness) like you can observe "physical" phenomena, so you can't use the same sort of tools like you would use for conducting other sorts of science.
You obviously approach any intelligent discussion of consciousness with rational thought, and the people you can take seriously in this field do just that.
If you're interested in reading more about this, I can point you to resources, which should probably start with Chalmers's "hard problem of consciousness".
I'm also a bit confused here. It seems that a lot of the commenters either have not read the posted article at all, or are intentionally trying to manipulate the HN comments so as to confound any future ML algorithms with what amounts to gibberish and noise. There seems to be an inordinate number of people talking past each other and calling each other names (or something similarly in bad faith). I may just be tricking myself, though, and seeing things that are not out of place in any other thread.
Philosophers of mind still insist on metaphysical descriptions of consciousness, even though there has been so much progress in neuroscience that it's hard to take them seriously anymore. AFAIK, most young neuroscientists abstain from these endlessly goal-shifting discussions. It's a pity, too, because with recent progress in deep learning there couldn't be a more exciting time to do philosophy of mind.
(You should have realized that HN has been infiltrated a long time ago)
Me too. I think since around 2013-15 there have been so many management people and corporate drones that this place should no longer be called Hacker News. Hackers are not conformists. Just today the groupthink was calling on an employee to delete his comment and stop speaking his mind. Old accounts are getting older, but they're not being replaced.
You're forgetting that Silicon Valley is saturated with new age beliefs and other woo. Just because someone's smart and is good with computers doesn't make them any less susceptible.
Right, it's the classic "God of the gaps". People invested in some kind of faith latch onto areas not yet explained by science. This relies on the false belief that if something isn't explained by science, then there's an opportunity for faith to provide value (aka faith is needed to provide an explanation).
But the many-worlds interpretation is also a sort of gap-plugging, unfalsifiable argument to me. With it, there's no need to actually explain why the universe is one way or another: all the possibilities exist in different worlds.
I think people delude themselves that science, at least as currently practised, is always objective.
Sure, maybe from a philosophical point of view consciousness trumps all, and physical existence and objective external reality should be viewed through that lens. But every single particle has a worldline tracing back to the Big Bang, when no conscious beings were present yet. Only once the universe cooled and became less dense did consciousness become possible. So can consciousness really claim supremacy over external reality? To do so would require retroactivity or bootstrapping. Unless, that is, you accept the idea of a timeless universe where no point on a worldline (or collection of worldlines) is privileged, and "now" is more a statistical reality than anything, in that there are more collections of worldlines containing consciousnesses when the universe is relatively evenly made up of dark energy and ordinary matter and energy (the highest number of observers, compared to times much closer to or much further from the Big Bang).
Conscious beings not being present does not mean that consciousness is not. This article basically asserts that your cognition of yourself as "conscious of the world" (as you imply here) may be inverted. The big bang may actually be the birth of consciousness, which we, as human beings, are gradually becoming more conscious of.
>In the same way, if the universe is to actually exist, its properties can’t be exclusively relational/dispositional
People who believe the mathematical universe hypothesis would take issue here (I'm not educated enough to have a full opinion, but I know they would oppose this quote). They would say the universe is entirely relational: it's all one set of mathematical relations. Max Tegmark is one such person; he explains consciousness and self-awareness as arising out of evolution (being aware and not a slave to sensory inputs can be beneficial), and subjective reality and the flow of time as subjective, conscious illusions. He doesn't give a mechanism for consciousness to form in the brain, but no one has yet.
There is absolutely no evidence to support any claims at all regarding consciousness, in any direction.
We know literally zero about the natural basis for consciousness -- and I mean zero -- and so that claim (better known as "panpsychism" if you want to research it) is pretty much as plausible as any other at this point. And it's been taken pretty seriously by a great number of scientists and philosophers over the centuries.
If I understand this article correctly, what you and I consider to be the 'Big Bang' is itself an event filtered through the lens of our consciousness.
The Big Bang, the time that has passed since then, even the concept of time, are not in fact reality but rather our consciousness's interpretation of reality.
There is no argument that there is a fundamental reality that predates consciousness, but our problem is that everything that we view as 'reality' is just the subset of that true reality that is filtered through our consciousness.
What we perceive as reality starts with consciousness as its fundamental building block, because everything we have discovered has been built upon that foundation.
> The big bang, the time that has passed since them, even the concept of time, are not in fact reality but rather the interpretation of reality made by our consciousness.
I think that is very reasonable/likely
>There is no argument that there is a fundamental reality that predates consciousness, but our problem is that everything that we view as 'reality' is just the subset of that true reality that is filtered through our consciousness.
See, to me the article was saying that a fundamental, consciousnessless reality isn't possible, when it stated that everything can't be only relational (the spreadsheet example): there must be some initial value/setting, i.e. consciousness.
> tracing back to the Big Bang, where there were no conscious beings present yet. Only until the universe cooled and became less dense did consciousness become possible.
This view neglects that matter responds to magnetic fields, which surely existed prior to the Big Bang and could have been vibrating until a resonance or similar event triggered the matter to expand. There is an idea that expansion and contraction are an oscillation of their own; it seems logical, but I can’t prove it.
Thanks for the response. I'm not a physicist, but we usually can't speak of "before" the Big Bang. You are right, though, that fields could have existed before neutrons and protons formed. As far as we know, consciousness requires protons, neutrons, and electrons, among other objects, and those were not present, or not in the right arrangements, right at the Big Bang, so I think my argument still holds.
This is an ontological problem only. The goal here is to solve the problem that we can't find a rational basis for knowing. If the physical world has primacy, then you have to figure out how to solve the irreconcilable perception problem. It makes the philosophy easier to say consciousness comes first. I don't understand why we need an absolute basis for knowledge as long as our approximations of knowledge are so astoundingly useful.
This is sad. We have good functional accounts of consciousness (Global Workspace theory [0], Attention Schema theory [1], recent robotics work on self-attention [2]). These explain much of the specific phenomenology of conscious experience. However, Rawlette is clearly unfamiliar with this extensive and deeply empirical literature.
"Armchair philosophy" like this still gets published far too often and is given far too much respect. Rawlette, typical of this genre, believes that "conceivability", thought experiments independent of empirical facts, and verbal theorizing can justify beliefs more strongly than actual research.
For good philosophy in this domain, read people like Andy Clark [3].
I looked into the article you linked regarding self-attention. I quote:
> Yes, we have a different definition that we use that is very concrete. It’s mathematical, you can measure it, you can quantify it, you can compute the error to what degree. Philosophers might say, “Well, that’s not how we see self-awareness.” Then the discussion usually becomes very vague. You can argue that our definition is not really self-awareness. But we have something that’s very grounded and easy to quantify, because we have a benchmark.
So he's redefining something in a way that merely satisfies his worldview. That is not in any way consciousness. Consciousness inherently has the "knowing" quality of your experience: you know you are reading this. You can build the smartest, brightest AI robot, with the best machine-learning algorithms, that functions the same as a human being, but in the end it doesn't know it's doing that, because it isn't conscious of it.
Hell, you don't even need a robot for that. A person who has recently died has all the components you need for a functional human being. The only thing missing is that they aren't conscious anymore. You can fix or replace the heart, do whatever you need physically, but changing the physical building blocks, as per the materialist view, won't bring the consciousness back.
Not sure about that, the example was more like, you CAN restore the shattered glass 100%, but there will still be something missing from it even if you do.
Well, we know that heart transplants, for example, can give someone many more years to live. But if you transplant a good heart to a person who died 10 minutes ago in the same way you would to a living person, the person won't come back.
That's exactly how heart transplants used to work: you stop the heart (now the patient is dead), cut it out, and put the new one in. If the body is cold enough, 10 minutes might be OK.
The reason it does not work past 10 minutes without cooling is that brain cells start dying en masse, including ones running vital functions.
Our lack of understanding of the phenomenon of consciousness and subjective experience is so total that it's hard to even imagine what an "explanation" could sound like. The sensation of existence is primordial and weird. I'm curious what you think "research" in these conditions can consist of, other than trying to find ways to frame the problem that permit for some kind of forward progress.
How much of the empirical cognitive science and neuroscience research have you read? Graziano (discussed in my reference [1]) SPECIFICALLY addresses how and why our brains generate the primordial and weird experiences you mention.
We know lots of ways our brains generate weird experiences, some built in (like dreams) and some produced by special context (any number of illusions). Why should consciousness have some special status?
There is no neuroscience paper you can cite that can make anyone in this thread say "oh, so THAT'S why 'red' looks red!"
In this case, you're begging the question by saying "our brains." Why is there a 'we' at all? Why is there subjective experience? I don't expect to find the answer in a Hacker News thread, but I think waving off the question the way you do is ridiculous. We've had some of our best people on it for millennia for a reason!
And yet, you are waving off potential answers because you are convinced there are none.
I agree with you; the cited papers probably don't solve the hard problem of consciousness, but assuming that qualia are just intrinsically unexplainable won't help solve anything either.
I can imagine that consciousness will turn out to be either something completely groundbreaking (i.e. the universe consists of consciousness) or completely mundane (i.e. it's an illusion that all sufficiently complex information-processing systems develop), but ignoring research is not the way to find out.
The red looks "red" because cortex neurons representing red-related filters are better wired to the vocal-cord-controlling neurons responsible for producing the "r" "e" "d" sounds than to those for "b" "l" "u" "e".
It evolved in animals because it is advantageous. It is known that even the simplest worms can distinguish between a part of their own body and the body of another worm, or the rest of the environment: they will eat another worm but not their own tail. Worms that eat their own bodies typically don't have an advantage (though local optimums of sacrificing a part of the body can of course still develop). So the "subjective experience" of distinguishing between "me" and "anything else" allows living things to protect themselves from the rest of the environment.
Then if you ask "why are there living things," the answer is again that they evolved exactly because these units have the property of "preserving" themselves and duplicating that behavior; entities without such properties remain lifeless and don't reproduce.
In short, "subjective experience" is still a "simple" emergent property of sufficiently complex living organisms.
I intentionally put "simple" in quotes: for us it is of course not simple, even if it can be reduced to such a simple explanation. The reason for that is also known today: the existence of life as we know it is the result of 14 billion years of development of the environment, during which effectively every moment and every detail happening just a bit differently could have resulted in our environment not existing in the very form we know today, and therefore "we" as we understand ourselves wouldn't exist either.
Even starting from the basic building blocks: today we know that for the atoms in our bodies to exist, they had to be produced in some exploding star; this holds for all elements except hydrogen, which was produced shortly after the Big Bang (so only the hydrogen atoms in our bodies are as old as the Big Bang; it's so amazing to be able to know that!). The stars from which we are made had to happen to develop "just right" for us to be able to exist. In the areas of the universe where they developed a little differently, we know we can't search for life like ours. So we are in a "just right" part of the universe, even if we know there are many other parts that could also be "just right." But those other "just right" places will still develop something not exactly the same as our place here, even if there would be a lot of common properties. They aren't "we." "We" is the result of everything that has happened, since the Big Bang, in the part of the universe relevant to us.
In the end, all "why" questions make sense only in a predefined context where both the one asking and the one answering agree that the answer is satisfying. If the one asking knows that no answer will be satisfying, or no shared context exists, there's seldom a point in asking:
"Q: If you get hold of two magnets and you push them, you can feel this pushing between them. Turn them the other way and they slam together. Now what is it, the feeling between those two magnets?

RF: What do you mean, what's the feeling?

Q: Well, there's something there, isn't it? I mean, that's the sensation, that there's something there when you push these two magnets together.

RF: Listen to my question: what is the meaning when you say that there is a feeling? Of course you feel it. Now what do you want to know?

Q: What I want to know is what's going on between these two bits of metal.

RF: They repel each other.

Q: Well then, what does that mean, or why are they doing that, or how are they doing it?

RF: You ask...

Q: I'm not saying... I think that's a perfectly reasonable question.

RF: Of course it's reasonable, it's an excellent question. The problem is what you're asking: you see, when you ask why something happens, how does a person answer why something happens? For example: "Aunt Minnie is in the hospital." "Why?" "Because she went out, she slipped on the ice and broke her hip." That satisfies people. It satisfies them, but it wouldn't satisfy someone who came from another planet and knew nothing about it. First-graders understand why: when you break your hip you go to the hospital. "How did she get to the hospital with that broken hip?" "Well, her husband, seeing that she had broken her hip, called the hospital up and sent somebody to get her." All that is understood by people, and when you explain a "why" you have to be in some framework where you allow something to be true; otherwise you're perpetually asking why. "Why did the husband call up the hospital?" "Because the husband is interested in his wife's welfare." Not always; some husbands aren't interested in their wives' welfare, when they're drunk and they're angry... So you begin to get a very interesting understanding of the world and all its complications. If you try to follow anything up, you go deeper and deeper in various directions. For example, you'd go: "Why did she slip on the ice?" Well, ice is slippery, everybody knows that, no problem. But then I ask: why is ice slippery?..."

(extended quote ends)
So if you'd say "but the subjective experience that a worm has is not my subjective experience" the answer is "of course it isn't."
If you'd claim that something would have to be able to tell you about its subjective experience for you to accept that it is like yours, then you'd exclude those humans who can't talk, etc. Language is of course an emergent property; most people can agree with that. In the end, those who try to claim one specific emergent property of living organisms as something "very special" ("the feeling of me being different from the rest") somehow want to set that property apart from all the rest, and they have some special motivation to do so. But it's still just an emergent property.
The most interesting view of consciousness that really changed my thoughts on it was from Wegner's "The Illusion of Conscious Will". He basically argues this:
We have an agent-based model to understand the behavior of certain things in the world. When we see a cat chase a mouse, we model the cat as an agent with a goal, which is to catch the mouse. That is, we imagine the intentions of the cat (and the mouse) to better predict what will happen next. This is just a mental model, but it's different from the causal model we use to predict where a baseball will land when you throw it.
We apply this model to all sorts of things because it is useful for predicting behavior. That is, we imagine conscious intention as a tool to understand things in our world.
The catch is that when we observe our own minds at work... we apply this same model. This is a weird moment where we imagine that we have conscious intentions, but since it is our own selves we are watching, this creates the illusion of conscious will.
Whether or not Wegner really nails it, I've become increasingly suspicious that consciousness is far less special and much more of a trick than we believe. But because that illusion is tied to who we "are", we have a very hard time letting go (of course this idea goes back to the Buddha and earlier).
Maybe I'm just dumb, but I have a problem taking anything meaningful from that article. There's an ontological problem in defining consciousness in terms of observations: this is circular because we don't have a meaningful starting point. So we should throw it all away and... then what?
> The problem is that there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience. That is, there could be brains that react as though sad but that don’t feel sadness, brains that can discriminate between wavelengths of light but that don’t see red or yellow or blue or any other color, brains that direct their bodies to eat certain foods but that don’t taste them. So why is there nevertheless something that it’s like to be us?
I don't think so. What even is this "conscious experience"? I hypothesize that it's an illusion. A sufficiently complex robot would indeed have the same "conscious experience". Qualia are nothing more than complex arrangements of molecules in our brains; they're an abstraction, not something fundamental to the universe. Maybe stars and planets too have some sort of rudimentary "conscious experience".
I can't prove this, but you can't prove that you have "conscious experience" either.
> that no physical property or set of properties can explain what it’s like to be conscious.
I think that it can be explained but we just don't have enough knowledge of the internal workings of the brain yet.
For some reason I get really worked up when people such as the author disagree with me on this topic, of all topics. I don't know what it is; maybe it makes me angry that people don't realize it. I know that sounds really arrogant (especially when the author has a PhD in philosophy), and I might be wrong and look like an idiot, but I can't control this feeling. I feel the same way a teenage atheist feels when he hears a religious person speak about God (I know this because I was that teenage atheist).
If a brain performs the same sensory and decision-making functions, it is also going to claim that it feels emotions and experiences colors in particular ways. For all practical purpose, such a brain is conscious.
> "In philosophy, idealism is the group of metaphysical philosophies that assert that reality, or reality as humans can know it, is fundamentally mental, mentally constructed, or otherwise immaterial. Epistemologically, idealism manifests as a skepticism about the possibility of knowing any mind-independent thing. In contrast to materialism, idealism asserts the primacy of consciousness as the origin and prerequisite of material phenomena. According to this view, consciousness exists before and is the pre-condition of material existence. Consciousness creates and determines the material and not vice versa."
One assumption people often take for granted about consciousness is that everyone is conscious. I agree we should operate under that assumption for the purposes of making ethical decisions, but I think we should challenge it for the purpose of trying to understand consciousness better. What if philosophical zombies aren't just a hypothetical thought experiment? What if some people are conscious and others only pretend to be? Is there some particular event that triggers consciousness?
Lacan thought that consciousness is triggered by looking in a mirror (or something equivalent to a mirror). If someone was carefully raised without the ability to look in a mirror, see their own shadow, hear their own voice, etc., would they never become conscious? How could you tell?
What if consciousness is triggered by something totally unexpected, like: circumcision; submersion baptism; chicken pox; or some particular bacteria in my gut? I can find someone who never had chicken pox and ask them if they're conscious, but how do I know if they're answering truthfully?
Everyone has a big incentive to profess consciousness, because anyone who professed non-consciousness would be in danger of losing the privileges and protections which society grants to conscious people.
In my opinion the only value in this idea is that it highlights the absurdity of trying to apply an empirical model to an un-empirical concept. Since "consciousness" cannot be measured, and indeed is difficult to even define in words, it is excluded by any model which insists upon an objective reality.
To your point: how do we know that all people possess consciousness? We don't. We make that assumption because other people are like us. The less like us something is, the less likely we are to assume it has consciousness. For most of human history animals were not afforded this assumption, and that is only now starting to change because, as it turns out, animals are a lot more like us than we like to admit.
In other words, it's speciesism. The whole discussion about what does and doesn't have consciousness is a desperate attempt to justify human exceptionalism.
>We make that assumption because other people are like us.
Yes, and I think this is a mistake, because it shuts down any hope of isolating what it is about us that makes us conscious. Perhaps this will change as human-lookalike-robot technology gets better, breaking down the "looks like me, must be conscious like me" argument.
People are starting to grant animals the rights of consciousness, but let me ask, what about sperm? I myself was sperm once, and over time I became a full-grown human being. If consciousness is a boolean, then at what exact moment did I become conscious? Was it when I reached the egg? When the first neuron in my brain formed? When my umbilical cord was severed? When I first recognized myself in a mirror [Lacan]?
> If consciousness is a boolean, then at what exact moment did I become conscious?
There's a simpler answer: there was never a state in which you were not conscious. And yes, that would apply to literally everything in the universe, and in every grouping of such things imaginable, and in fact 'you' are neither a single entity, nor a gestalt of several, nor merely a component of another, 'you' are all these things at once.
But the real point I'm trying to make here is that these questions are literally meaningless if you insist upon empiricism because they are untestable.
We don't and can't. Because we can't even come up with a universally accepted definition, there can be no bright-line test.
Coupled with our innate arrogance, where we allow ourselves to "just know that we are", just like we are pretty sure that we get to exert "free will", you end up with a lot of sloppy thinking. I'm not claiming to have any answers (I'm more of an intentionally extreme skeptic of the answers I come across), but I don't think you can deny that there is a lot of sloppy thinking (esp. on a layman's board like this) around "consciousness", "intelligence" and "free will".
I think a lot of this stems from a trick of sorts. Nothing about what we are is "magically" conscious. If we had a machine and emulated every aspect of a "mind", would it be conscious? If you're answering no, then why? What's missing? I think the answer is: the only thing that's missing is that you couldn't possibly believe that that thing could be conscious. And this is simply because you believe that you are "conscious" to an extent which you really are not.
In other words, we think we're this thing called "conscious", but that thing isn't what we think it is!
When you look at a bunch of different things (the left brain/right brain separation, the prefrontal cortex), you realize that you're not even just one brain, but many. Which one is you? And I seem to recall an article recently that challenged the existence of an unconscious. How does that relate to this?
Consciousness is not an outward product but an inward one. We are, in large part, deterministic. Our genetics and other factors can determine an increasingly large amount about us long before we're even self aware. Yet, for whatever reason, we all (though I suppose as per the GP comment - that is an assumption) have this inner dialog and observer who is not only watching every single thing as time gradually elapses, but also feels as though it is the one that is running the show.
Imagine you write a program to determine a pseudorandom number. You obviously don't imagine some entity puffs into existence, imagines itself picking the random number which our RNG already independently chose, and then puffs out of existence afterwards. Yet why would this somehow suddenly become true as the program became more complex? It requires extensive handwaving and speculation. Even if you somehow wrote similar pointless inner dialog mechanics into it, would something puff into existence and perceive itself then running those mechanics? I don't see any way you can answer yes to this question without, again, resorting to handwaving and speculation.
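The thought experiment above can be made concrete with a small sketch. This is a hypothetical illustration only: a linear congruential generator with a cosmetic "inner dialog" layer bolted on. The narration strings change nothing about what is computed, which is exactly the point being made.

```python
# Hypothetical illustration: a minimal linear congruential PRNG,
# plus an "inner dialog" layer that merely narrates its own steps.
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Return the next pseudorandom state; fully deterministic."""
    return (a * seed + c) % m

def narrated_pick(seed):
    """Pick a number 0-99, with pointless self-narration attached.

    The narration describes the computation but plays no role in it --
    removing it leaves the chosen number unchanged.
    """
    log = []
    log.append("Hmm, let me think of a number...")
    state = lcg(seed)
    number = state % 100
    log.append(f"I feel drawn to {number}.")
    return number, log

number, dialog = narrated_pick(42)
```

However much "inner dialog" we append, nothing extra puffs into existence to perceive itself running these lines; the question is why sheer complexity would ever change that.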
The p-zombie concept always struck me as stupid solipsism. If you can't tell the difference doesn't that mean there is no meaningful difference?
Likewise, if they fear the loss of privilege, that implies some degree of consciousness /somewhere/. If some absurd set of state and a pseudorandom number generator is capable of passing every metric of consciousness in response to inputs, then it is a consciousness, even if it is made of a bizarre set of equations and state.
Anyway, for the consciousness somewhere: take a hypothetical hyperintelligence or supercomputer capable of simulating a human brain completely, calculation by calculation, through various events, like say being flayed alive. It isn't torturing anybody, because the actions are simply calculations that it itself is running. The victim may not be real, but there is a real intelligence somewhere behind it, and it may or may not care about the simulated suffering. Where it is "run" is material, like the difference between acting out a murder and actually murdering someone.
> The p-zombie concept always struck me as stupid solipsism. If you can't tell the difference doesn't that mean there is no meaningful difference?
That's a very strange argument. If you hold your nose and you taste a piece of potato and then a piece of apple, you can't tell the difference. So now suddenly the difference is meaningless because the sense required to tell the difference is removed from the equation? The apple is still (in a, to you, non-observable reality) an apple, regardless of how your perception changed.
I still think the difference is there even if we can’t know it. It’s the whole tree-woods-nobody thing, isn’t it? Of course the tree makes a sound. If all intelligent life in the universe was wiped, a falling rock would still make a sound.
But in the case of the potato and apple, one has the ability to unplug the nose and tell the difference. If no one had any senses that could distinguish them under any circumstances, then it would be different.
This comes down to something akin to Einstein's problem with quantum physics, that it didn't make aesthetic sense to have something be fundamentally random. That it is the same, and philosophically preferable, to say that something is fixed, but cannot be measured, than to say it is actually random.
Now, I hear that with quantum stuff they somehow have proved it to be fundamentally random, but the point is that if you really can't tell in any way, the difference doesn't matter at all. At least to a reductionist viewpoint. I feel that philosophies about "what if reality is a simulation or a dream?" are dumb for the same reason. Unless there's a way to wake up, who cares?
> Now, I hear that with quantum stuff they somehow have proved it to be fundamentally random, but the point is that if you really can't tell in any way, the difference doesn't matter at all.
Someone said that quantum should not be brought up in discussions about consciousness because it’s way too easy to misconstrue the quantum maths about probabilities and observations as somehow relevant on macro scale.
But we don't know that we can't ever possibly test consciousness, we just don't know how to test it yet.
Before Archimedes, the problem of determining the purity of a golden crown was intractable. Archimedes' solution was not arrived at by brute force concentrating on the problem, but rather by an epiphany (leading to the famous story of him shouting "Eureka" and running out of the house naked). https://en.wikipedia.org/wiki/Eureka_(word)#Archimedes
It's possible that someday someone like Archimedes will realize a way to test consciousness, and it'll be something ridiculously simple (like Archimedes' submerging the crown in water and seeing how much water it displaces), and we'll all kick ourselves for not thinking of it first :)
We are all not-conscious for large swaths of the day. We think we are conscious all of the time, because it is only when we are conscious that we think to think about it. So we subconsciously maintain a kind of linked list of conscious periods, and so the illusion of permanent/static consciousness persists. But there are gaps between every conscious period, which are scarcely noticeable unless you have figured out how to look for them. (It is a paradox of the mind that there can be awareness during non-conscious periods.)
+1. Here's a trick you can use to trip yourself out (also works as a party trick to trip other people out). Look in a mirror. Look at your left eye. Look at your right eye. Look at your left eye. Look at your right eye. You won't see your eyes moving; it is as if your eyes are holding still. But to someone watching you, your eyes make very visible movements whenever you switch which eye you're looking at.
It isn't quite this. We have memories for only part of the day, and no memories for other parts. We don't know whether we are conscious at those times. It isn't that we know we are unconscious when we are asleep, except inasmuch as the term "unconscious" is sloppy: it applies both to whatever we are when we are asleep and to the hypothetical state of existing without consciousness. Those are two things we don't have direct experience with, and which we assume are the same.
Usually I code in phases. Figuring out the solution requires a lot of thinking and experimenting and trial and error, but once I find what I think is a promising solution I might end up writing lots of code; in this phase I sort of feel that the code comes out of me without a clear conscious effort. This phase might last a few hours or a few days, and I'm often limited by my typing speed and the responsiveness of my editor; in a way, the code on the screen becomes part of my mental process (similarly, sometimes I have to scribble on a piece of paper just to get my mental processes running). After that, I snap out of it and start compiling the code (I program in C++, so tens of thousands of lines of errors are routine), clean it up, and usually end up deleting large chunks of code. Note that I'm fully aware of the code I have written but have little recollection of the actual act of writing. After this I might switch to writing tests (which is a much more conscious activity) or move back to phase one.
I'm fully aware that's not how most developers work; according to my boss, when I tell him I haven't compiled my code in a few days, I'm just strange.
I think we're more likely to make intellectual progress on consciousness by either redefining it in a more objectively rigorous way, or (more likely IMO) abandoning it as a philosophical construct analogous to "the soul" and focusing research on a subset of phenomena that can be rigorously defined.
It seems to me like discussions of "consciousness" here on HN seem to frequently devolve into arguments over the semantics of that particular word. That feels more philosophical than scientific to me.
Because the philosophical part is the only reason we care what consciousness is. Otherwise we are just talking about, what, perceptive abilities? Reactions to stimuli? That's all well and good, but we can't as easily use that to justify our enslavement and/or slaughter of other beings.
Obviously it's absurd to think not everyone is conscious the same way we are, but I don't think you need to assert that for the hard problem of consciousness to exist. It's enough that I (whoever I am) am conscious, and no one can lead me to doubt that, although they can cast confusion on the terms I use to describe it.
I think there are two different things called consciousness. The first is awareness of your surroundings. Yes, your dog is conscious, unless asleep. And even then it's conscious to some degree, because it can be awakened by external stimulus.
The second kind of consciousness is being aware of your awareness - being able to watch your mind work. To our knowledge, dogs don't have that. Nobody does but humans, so far as we know. The problem is, my definition here is a purely internal, subjective one. I can't prove to you that I am conscious in the second sense; I can't prove to you that anyone else is or is not. All I know is that this is something my own mind can do, and maybe I can describe a bit of what it's like to have my mind do it. That's not much to go on for further investigations.
Demonstrably, you don't actually experience doing so. When a solution to a problem you were thinking aloud about yesterday suddenly pops up in your mind today, you have no idea how your brain came up with it.
Yes, I have had that experience. And how do we know it just "popped up"? Because we can watch our own consciousness, and we can see that it did not originate at the level of conscious thought.
So you mean unless you think an idea in words, you are not doing anything conscious? Does that mean I play a real-time video game entirely unconsciously? Because I don't think "I need to go there and do this" in words, I just do it.
Seen from my perspective, this doesn't make you any different from the Chinese room, as you cannot prove your claim to observe your very own mind while thinking.
The ability to lie seems to me to imply at least some level of consciousness. How could/why would you deceive others if you have no concept of your own existence?
Yes, but the key ingredient for human lying is a theory of mind, and a theory of mind is difficult to formulate without your own consciousness to generalize from. To get to even the motive for lying in the first place, you'd need awareness.
For a philosophical zombie, you'd need this behavior to exist independent of the zombie having a theory of mind and conscious awareness which it can use to reason about the state of another's awareness. That's a lot of leaps of faith to take.
This "theory of mind" can be called "imagination" in a limited form. Allowing our conceptual self to act and predict what will happen is key to "consciousness".
We don't need to die by walking off a cliff, if we can have a conceptual version of ourself walk off, imagine the result of walking off the cliff and choose not to do it.
Yes this expands consciousness to animals, but I doubt it goes much farther than that. I think it fits.
This is the sort of idea that would appeal to xenophobes, racists, and others of their ilk. Not calling you one--not at all--but it's the sort of idea that seems quite dangerous in the wrong hands.
I think you're making a mistake here. The mistake is to think that something/someone could very convincingly _seem_ to be conscious but somehow not actually be conscious. I would argue that, beyond a certain point, there is no difference.
The Turing Test is a good tool to roll out in these sorts of arguments. People often mention the Turing Test, but have you ever stopped to think how good a conversation would need to be to _convincingly_ pass it?
Dennett gives an imaginary example of a Turing Test conversation in his book Consciousness Explained:
Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”
CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.
J: Fair enough but I told you the joke because I want you to explain it to me.
CR: Boring! You should never explain jokes.
J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?
CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?
Dennett goes on to say:
"The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinary supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all."
The joke isn't funny because the Irishman is stupid. The joke is funny because the Irishman holds Guinness as his highest value.
That puppet can't tell you how to behave in life, unless it is embodied in a human body that is relevant to the speaker and has a similar life context (requiring it to eat, sleep, etc.). The physical actions of the mind required to have a conversation are only part of our greater identity and awareness of ourselves in the world.
The puppet's existence in the world would be meaningless, though very intelligent. Why is there a puppet that can talk? What is its purpose? Only a conscious being can answer that. The Irishman's joke is funny because he chose Guinness as his reason to live, his highest value, his god, his philosophy.
Love that story, thanks! Ok, now a serious response. You come administer a Turing Test (in sign language) to a puppet which I'm controlling with some strings which you can't see. Using puppetry, I help the puppet pass the test. Is the puppet conscious?
Is your hand conscious? Are you your hand? Am I reasonable to assume that your hand typed your comment? So how can I be sure that you are conscious and not just your hands and mouth?
I can, because those are merely the mechanisms you use to communicate. If you choose to communicate via puppet, that's still you communicating, and the puppet is not conscious. Now, if you can make a puppet that passes the test, and I mean really passes, like the example above, where there can be questions and meta questions, and no running from some topics, without you having to interfere at all during the tests, then you might have a conscious puppet after all.
Well, maybe it would produce a hurricane on the other side of the globe.
If I told a time traveling Newton how I’m replying to this on my phone, he might conclude the phone is conscious.
I think the only thing we discover with this line of reasoning is that humans tend to ascribe consciousness to complex behavior.
This could be because complex behavior implies consciousness, but that’s generous. Humans have probably just been programmed to ascribe because it’s a good heuristic esp. in the Paleolithic world.
What good is my imagination as a tool for measuring consciousness?
Years ago I’d have had a hard time imagining ResNet.
I see what you’re going for: if I really try to imagine it, my brain starts ascribing consciousness to it. But that’s my point. I ascribe it due to intuition, not reason. Nothing about my intuition gives me a solid argument for why your bot must be conscious; it only evokes an intuitive feel that it would be.
What I'm going for ultimately here is the idea that consciousness is an emergent property when a system is complex enough and has meta-knowledge about itself and so on. I mean, I get it's a stretch. But duality and panpsychism are also a stretch.
Iain M. Banks said it well:
... Certainly there are arguments against the possibility of Artificial Intelligence, but they tend to boil down to one of three assertions: one, that there is some vital field or other presently intangible influence exclusive to biological life - perhaps even carbon-based biological life - which may eventually fall within the remit of scientific understanding but which cannot be emulated in any other form (all of which is neither impossible nor likely); two, that self-awareness resides in a supernatural soul - presumably linked to a broad-based occult system involving gods or a god, reincarnation or whatever - and which one assumes can never be understood scientifically (equally improbable, though I do write as an atheist); and, three, that matter cannot become self-aware (or more precisely that it cannot support any informational formulation which might be said to be self-aware or taken together with its material substrate exhibit the signs of self-awareness). ...I leave all the more than nominally self-aware readers to spot the logical problem with that argument.
I like to think there are a few ways it could be, and try to be comfortable with not knowing which one it is, e.g.:
-emergent property
-solipsism
-panpsychism
It's a huge mystery that's sitting there right in front of and behind our eyes every minute of the day. I'm comfortable with not knowing the answer, but I'm also fascinated by it all and I like to debate, particularly as a displacement activity when I really need to be doing something else.
It's another word for awareness. It depends on information and a drive to map information and navigate it.
I think that there is no hard divide between conscious and unconscious, more like a continuum. All sentient beings are conscious, but maybe not about themselves if the information about themselves doesn't get fed back to them in some way. But they're certainly conscious about their environment to be able to find food.
Consciousness is essential for actions within an environment. Self consciousness isn't essential for learning though, you can learn by doing, like animals. But it's essential for betterment, expanding your options and not relying on the first best thing you've found that worked by doing.
I sort of agree, but human willingness to attribute agency to physical objects (the willingness of the wind to blow, the rains to come, the sun to rise, etc.) makes me doubt even quite strong versions of the Turing test per se. I'd believe that a robot that managed to live a life in society and could make me feel that it was human-like in a conversation was conscious; I think I would want it to have rights and protections.
For those interested in potential viable ontologies other than reductive physicalism, I encourage you to read the works of Bernardo Kastrup. His most recent book, The Idea of the World, is well-argued and very interesting.
The problem is that there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience.
That's really not true. In Neural Correlates of Consciousness research, there are two very important things people can only do for experiences that they're consciously aware of: remember them and communicate about them. Someone with blindsight can pick up an apple in front of them as well as a sighted person can. But give them a blindfold and they won't be able to reach for the spot their eyes once knew the apple was located in. And they can't tell anyone what is in front of them, either.
The examples you give are of brains for which there is conscious experience, at least in some areas. For an existing consciousness-capable brain to have some capabilities that are both unconscious and memory-inaccessible does not disprove the potential existence of fully-functional, non-conscious brains.
Although I'd agree that otherwise functional non-conscious brains would probably be unable to communicate coherently about personal experiences/thoughts/memories/etc.
That's just memory isn't it? A raspberry pi can learn the position of a thing with a camera and then control a mechanical arm to reach that thing with the camera switched off.
> But give them a blindfold and they won't be able to reach for the spot their eyes once knew the apple was located in. And they can't tell anyone what is in front of them either.
There's no reason this functional behaviour requires the subjective quality of conscious experience. The recorded movement of a robotic arm doesn't, and it can replicate this feat, for example.
I don't know if this "problem" can ever truly be "solved" as there's not really a way to prove there to be a difference between something truly "feeling" emotions compared to faking it convincingly. I have to wonder at what point the difference becomes moot, a sort of Chinese room for emotions. I tend to lean towards solipsism when it comes to this kind of stuff, though. Does the difference actually matter?
I could argue that even people who genuinely feel emotions have, unconsciously, learned to feel them through their interactions with other people. Who can prove to me that emotions are something fundamental to humans and not acquired through culture? In a sense, everyone may as well be faking it.
I personally feel that some of the answers regarding consciousness and sentience may be found when we finally destigmatize psychedelics and allow structured, ethical studies and real analysis. Anyone who has had a really profound psychedelic experience knows that there is something more to our minds than we all realize. We barely know how it works, and it can't easily even really be explained to one who hasn't had a psychedelic experience.
Note that I am not advocating reckless exploration of psychedelics. A person needs to do a lot of research and self-reflection to put themselves in the mindset to even consider tripping. But for those who can handle them, there are paths to self-growth and self-repair that are unmatched in modern medicine.
Personally, my belief (unfounded, but informed by personal experience) is that what we know as consciousness or sentience is driven by a specific balance of psychoactive substances that are naturally metabolized in the body and brain, and that many neurological disorders are due to an imbalance in those substances. DMT helped me take great steps towards fighting ADHD.
The only sane and productive model of consciousness I've encountered (and I've been around a bunch through growing up in the Transcendental Meditation movement) has been the one described in Hofstadter's works, such as Gödel, Escher, Bach.
He talks about consciousness as an epiphenomenon, where the pattern itself is what makes something conscious, rather than some magical property that some matter has and other matter doesn't. With mathematical precision, he describes how consciousness relates to the ability to self-reference and how this relates to fundamental paradoxes in various fields, such as the Halting Problem, Gödel's Incompleteness Theorems, Russell's Paradox, and the works of Escher and Bach.
This line of thinking brings up some very interesting moral questions: What is it about life that makes us want to value it? Why do we have the notion of "higher" and "lower" life forms and value some species more highly than others? If we created a sufficiently advanced AI, that gave all the appearances of having feelings, a sense of self preservation, an identity, and desires, would it be immoral to unplug it or control its freedoms? What if it felt and understood even more than a human? Would its needs supersede our own?
Anyway, I highly recommend that book, GEB. It has made most other philosophizing about consciousness seem flat to me.
>What is it about life that makes us want to value it?
That's an excellent question. All the questions about consciousness are probably an attempt to better understand (and avoid?) death.
However I still have a hard time imagining a scenario where we can scientifically understand consciousness. Eventually we will understand all about how the mind works. All the various processes and how they lead to higher functions like thinking consciously in natural language etc. We'll be able to manipulate and alter our conscious experience. But even if there was a neural switch to turn consciousness on and off, we would still fail to convince ourselves of the physical nature of it, as we could never experience a state without consciousness.
My personal belief is that although we are painfully physical, we will never explain why we're actually here, experiencing those calculations, or in fact being them. Being calculations of a meat sack. Why would this happen?
Yeah, I agree with the sense of mystery you talk about. To me the two great, and linked, mysteries are why matter, space, time, etc. exist at all, and why some part of it experiences it in a self-aware way. They're both meta-questions to me, that probably can't be answered from within the universe by observing it, just like how Gödel's Second Incompleteness Theorem says arithmetic cannot prove its own consistency from within. I know that's playing fast and loose with math metaphors, but it's an analogy, not a rigorous proof.
This thread is depressingly chock-full of people who just do not understand the argument being made here. The same tired old counterarguments of "it's just like élan vital was before we understood biology" are being trotted out again and again, with no attempt at understanding how flawed that talking point is (and has been for years).
I think the problem is looked at in a confusing way. On one hand, we use a scientific/empirical analysis of the success of science to explain consciousness; on the other, we are ready to admit science may not be able to explain it, but we don't apply the same rigorous mechanism to test the non-scientific explanations of it. I also think we are not seeing the forest for the trees and are too hung up on some kind of "magic explanation" of consciousness that is defined by having people explain their experience.
I'm a strong proponent of the "what acts like a duck is a duck" principle. If in the future we manage to create machines which are, to a very high statistical confidence, indistinguishable from a human in terms of behavior (that is, machines that make the Turing test seem like a childish joke), is it fair then to say that those are just machines made to emulate our behavior and don't really reflect "real" consciousness? Why is it fair to define consciousness just as something that humans and personal human experience can decide?
Yes, machines may never have human consciousness, but if for all intents and purposes they behave as having one, then they have one, in my opinion.
Also, saying that humans are more than biochemical machines is like saying that my home gaming PC is more than wires and electrons. Yes, the experience enabled by the software running on those wires and electrons goes beyond just the physical substrate for it, but that doesn't mean that at the end of the day it isn't just wires and electrons.
>It’s as though someone created a very elaborate spreadsheet and carefully defined how the values in every cell would be related to the values in all of the other cells. However, if no one enters a definite value for at least one of these cells, then none of the cells will have values.
Does it sound to anyone else like the author would benefit tremendously from learning the Lambda Calculus? It seems to me to be a disproof of the author's contention that a 'definite value' is needed at some point.
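For what it's worth, the point can be sketched concretely with Church numerals, a standard lambda-calculus construction (this Python transcription is just for illustration): every number is defined purely by how it relates to functions, with no "definite value" entered anywhere, yet the whole system is fully determined.

```python
# Church numerals: numbers defined purely by relations between
# functions, with no "definite value" stored in any "cell".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def add(m, n):
    # m applications of f on top of n applications: addition as pure relation
    return lambda f: lambda x: m(f)(n(f)(x))

# Only when we choose a concrete f and x does a familiar value appear.
def to_int(n):
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

The "reading out" step (`to_int`) is optional; the arithmetic structure exists, and is consistent, without it.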
I've also heard it suggested that everything that exists is "potentially conscious", such that if it is connected in such a way that it can experience and actuate, it can become aware of its own essence, and that consciousness is just the first-person experience of the universe itself. That idea is interesting, but it doesn't quite solve anything, since the harder problem is figuring out exactly how the brain awakens it.
I've wondered this myself: what if, instead of the brain growing to create consciousness, it grows to receive consciousness? Like a tree growing to receive light.
>The problem is that there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience. That is, there could be brains that react as though sad but that don’t feel sadness, brains that can discriminate between wavelengths of light but that don’t see red or yellow or blue
That's an assertion I believe is not so obvious that it can be assumed correct with no argument to the contrary. If you can't define consciousness, why do you believe it exists independent of these other systems, and that these other systems exist independent of it? The study of consciousness is absolutely full of non-falsifiable claims like this. We decide apes and dogs and frogs and ants are not sapient, but that is an outside observation. It may be as erroneous as looking at Specimen A and its great-to-the-five-hundredth-generation ancestor and deciding that, because they should count as two separate species, there must have been some momentous leap in the middle to make Specimen A possible. Consciousness could easily be a smooth spectrum from human to insect, and we wouldn't know it because everything next to our level is extinct.
Is there a formal language in philosophy? So that one can define the input formulas (axioms) and with some kind of "philosophy math" calculate things like "what is the meaning of life and everything" and "what is consciousness"?
Otherwise, for me it all looks like word juggling that has continued for many centuries. They really should implement a programming language for philosophy and outsource the hard parts to Ukraine.
There is indeed. It's called Formal Logic, and the broad class of philosophers who have attempted that program you describe are usually referred to as "Analytic Philosophers".
They haven't made a ton of progress (I mean that as a statement of fact, not disrespect), and tend to tackle much more primitive, foundational problems than what the Continental philosophers like to deal in ("what is the meaning of everything" and all that).
Turns out it's a pretty hard problem to even pin down exactly what a word means, or what a name is, much less what the meaning of everything is. The trick is, those questions are the axioms. And so the act of asserting them is a philosophical act in and of itself. The rest is just moving stones around.
Thanks for clarifying things for me; I didn't even know the term "Analytic Philosophers" existed.
> They haven't made a ton of progress
But that progress is provable and repeatable, like real science requires? So probably this kind of progress is the only real kind in the field?
Absolutely, but that's not the interesting part. The problem boils down to the fact that any kind of formal proof system is only "truth-preserving machinery"; that is, you can't get out something "truer" than what you put in. It doesn't introduce new truths into the world, it just permutes existing ones so that different facets of them are clear.
But when you're trying to ask big questions about the nature of truth itself, a proof doesn't get you very far! You're trying to get at the thing that has to be assumed as a prior or axiom in order for the proof machinery to do what it does. Given our current understanding of the universe, a proof in any formal system can never tell you "why" the thing that it proved was true. Just how it got there.
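As a toy illustration of "truth-preserving machinery" (the rule format here is my own simplification, not any standard prover): a forward-chaining engine can only rearrange what the axioms already entail, and says nothing about why the axioms hold.

```python
# A toy proof engine: forward chaining with modus ponens over atomic facts.
# It only permutes the truths you feed in as axioms; nothing "truer"
# than the inputs can ever come out.
def derive(axioms, rules):
    """axioms: set of atomic facts; rules: list of (premise, conclusion) pairs."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

rules = [("socrates_is_a_man", "socrates_is_mortal")]
print(derive({"socrates_is_a_man"}, rules))
# With no axioms, nothing is derivable: the engine never explains
# why an axiom holds, only what follows from asserting it.
print(derive(set(), rules))
```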
After reading this somewhat innocuous article and then going through this thread... I think the reason there is so much heated discussion here is that the simple suggestion to invert one's assumptions regarding physical reality and consciousness, also implies an inversion of responsibility.
If it really is the case that consciousness is the basis for reality, then it must also mean that only you, the reader, can find this out for yourself. That means you cannot fall back on preachers or scientific publications. It's up to you to do the work. From my experience, even just mentioning this idea of goal-driven contemplative practice often meets a lot of resistance if you don't approach it carefully.
This article reminds me of the difficulty ascribing meaning to data. The meaning of a series of bits - ASCII or Unicode, data or executable - is dependent on consciousness to give it meaning. Much like all of the physical observations of science.
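To make that concrete, here's a small Python sketch (the byte string is just an illustrative example): the very same four bytes become text, an integer, or a float depending entirely on the interpretation brought to them.

```python
import struct

# The same four bytes, given meaning only by the interpretation we choose.
data = b'\x42\x4f\x4f\x4b'

print(data.decode('ascii'))          # 'BOOK'  - interpreted as ASCII text
print(struct.unpack('<I', data)[0])  # interpreted as a little-endian unsigned int
print(struct.unpack('<f', data)[0])  # interpreted as a 32-bit IEEE 754 float
```

Nothing in the bytes themselves selects one reading over another; the observer supplies the interpretation.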
It doesn't exactly. I think this quote really clarifies what the article is trying to discuss:
> While physical properties cannot explain consciousness, consciousness is needed to explain physical properties.
If consciousness is (and, yes, qualia are) how we perceive reality, how can we define consciousness based on observation? It would be like trying to use a telescope to capture an image of itself.
If "consciousness comes first", that means that consciousness contains reality, not vice versa.
> It would be like trying to use a telescope to capture an image of itself.
Which you can do with the equivalent of a mirror. So assuming your analogy holds, then we just need to discover the equivalent of a consciousness mirror.
As for the article itself:
> While physical properties cannot explain consciousness, consciousness is needed to explain physical properties.
This is conjecture. There is literally no proof that physical properties cannot explain consciousness, nor is there proof that consciousness is needed to explain physical properties.
Why are we conscious? I think the answer, at a very high level, must be that evolution followed the path of least resistance: consciousness has high value per unit of energy expended on it and is easy to achieve.
I think consciousness is one of those things that is not defined well enough for us to understand it.
Until there is a breakthrough in the understanding of consciousness itself there won't be any real conscious ai.
It's like when we learned to fly: we didn't need to understand how birds' wings work.
We had to understand the principles of aerodynamics, that is, what flying itself is.
That's why I think imitating the brain (deep learning, etc.) won't work, just like the early attempts to fly didn't, even if we knew the function of every single neuron.
> "...brains that can discriminate between wavelengths of light but that don’t see red or yellow or blue or any other color..."
Well. I operate under the idea that we can only perceive three colors. Everything else is based on the brain's interpretation of that input. That is, for example, we don't see purple, the brain invents purple.
That aside, simply put, reality is a consensus. And it exists collectively for those who buy into that agreement.
If so, this would be backed by a great many ancient sources on this topic.
If so, it also creates a great need to ponder how consciousnesses are made to respond to their effects on others. If consciousness IS as the wisdom traditions state, then what is needed is not empiricism, but a plain commitment to respecting each other.
After all, if you say "okay, if I exist regardless, what gives?", others exist too, and there might also be some reason or actual thing which makes this possible; to honor that possibility is actually to behave more safely, have more pleasant dealings, and ultimately concern ourselves with best practices for getting the most of what we get in addition to our bare consciousness.
If it can exist without any particular structure, it can be placed in other, less desirable circumstances.
I.e. every cultural tradition regarding virtue and karma, and G-d.
I see evidence that the scientifically minded would be thrilled to know the makings of this, even if it would be an n=1 revelation. I feel like the traditions given to us are sensibly concerned with preserving awareness of these lately-arriving observations: upper-dimensional, physics-derived approximations of how our selves are maintained in this projection by structures (previously unseen, or allegorically hinted at in tradition) which are indeed real.
yadda yadda.. the wisest people always say this, and they themselves often got there the hard way, and yet report good and thankful circumstances from choosing respectful behavior.
So it's somewhat able to be "controlled for." The least empirical and most curious, the agenda-less, all say: do right, there is a supporting element that deserves respect, and since we do exist "hereafter" by some model in many different ways, it's always worth not EXCLUDING this.
If we ARE, then what we currently are is only a portion. That creates curiosity and investigation. I do it; I discover what I didn't expect. Now I hope others do, and do better at getting this idea to those who refuse its possibility.
If so - if we are - that is to say, if we exist regardless - it also means there's much to be concerned about regarding not combating factors that would otherwise never preserve us. Instead, preserving mutual respect matters most beneath whatever preserves all, because there would be no way free of the "other" if anything were forever allowed for any result.
Does consciousness itself suffer from intractability similar to Gödel's incompleteness theorems? Is there a consistent set of axioms that can even include all the truths about what we mean when we say "consciousness"? If so, can we go further and use consciousness to actually fully enumerate all the interactions necessary for it or all the implications of it from within a conscious system?
This article highlights a massive issue with the field of philosophy: there's a disconnect between advancements in neuroimaging analysis and metaphysics. The author claims that physical properties define "nothing at all". Yet what they tell us is extremely valuable for emulating these systems, and for what they can teach us in the exploding fields of ML and AI. In the last decade big improvements have been made in the field of neuroimaging. Putting the pieces together, from neuroscience to emulation to lessons learnt, is hard, but possible.
There's a worrying trend that's becoming more apparent in regards to the hard problem of consciousness. New neuroimaging analysis techniques indicate that when you take away the individual functional elements of the brain, we're left with not much at all. At what point does a series of embedded components become a computer? At what point does a series of neural networks become conscious?
This is an issue we're going to have to address now. Where is the line of consciousness for an individual to make a legal choice? Think of someone wanting assisted suicide, who is often in a debilitated state. A call is going to need to be made as to whether they're sentient enough; a tough call, given the massive burden it carries. If we continue improving ML and AI, we're going to have to make that call from the other end as well.
I've recently watched this talk titled "Your brain hallucinates your conscious reality" [1] by Anil Seth, seems relevant to the topic. If you are intrigued by the question of the original article you might like the TED talk.
I wonder why we believe that the evolution that has equipped us to be conscious and to wonder about consciousness (which I think is probably part of the deal of consciousness) has provided us with minds and languages that can understand or discuss it. In fact, it seems quite likely that being able to understand consciousness has no evolutionary value at all!
From my (very) personal point of view, I think consciousness has nothing to do with sensory or motor functions... we know so little about what we (as humans) and "conscious" beings really are that we're always in the realm of conjecture. Self-awareness is such a complicated subject to define anyway.
> there could conceivably be brains that perform all the same sensory and decision-making functions as ours but in which there is no conscious experience.
That makes no sense to me. What is consciousness but a mechanism for making decisions? A brain that has all of our decision making functions would be conscious by definition.
>the question of why we have conscious experience at all.
Maybe because being conscious, rather than, say, knocked unconscious, is an advantage for survival and reproduction?
While that is semi-joking, consciousness quite likely arose as a functional thing (being aware of what's going on), with qualia (what it feels like) as a kind of side effect.
Bernardo Kastrup has written some compelling books arguing that idealism is more rational and skeptical than materialism. The best introduction is probably Why Materialism Is Baloney. A more academic version, consisting mostly of peer-reviewed papers, is The Idea of the World.
I could start by dissecting the article and detailing the sloppiness in the reasoning, the strawman of ignoring emergence, the value and inevitability of referential semantics, etc., but ultimately the deeper question remains:
> For this reason, the “what it’s like” to be a conscious mind can’t be described in the purely relational, dispositional terms accessible to science. There’s just no way to get there from here.
Chalmers can keep saying this until he’s blue in the face, but it doesn’t make it true.
Just because you don't like the reality that consciousness is subjective rather than objective does not make consciousness objective.
Many people much smarter than you and I have been exploring this problem since forever, with both ancient thought experiments and modern science. They've not found what you assume is there.
To the extent that "the what it's like to be conscious" is part of physics - that is, to the extent that it enters into causal relations with the rest of the universe - it can of course be described as such. However, the way by which such causal relations occur is itself of interest, since it opens up other means of description that are not dependent on the inherent limits of an "outside" observer, acting through known physical mechanisms to make her observations.
When people talk about their consciousness, they communicate using things like their mouths and hands. Mouths and hands are physical.
Or do you belong to the school of "People talk about their consciousness not because of their consciousness, but from an entirely different chain of events that just happens to be accidentally correct"?
Within the structure of that argument, how is consciousness different from a pink elephant? I use a physical mouth to talk about a pink elephant; does the pink elephant therefore have classical physical effects that can be measured?
Not the OP, but consider the p-zombie world thought experiment: Chalmers would have us believe that we would still be having precisely this conversation about qualia, even if qualia were impossible in such a world, and no human had ever had a true subjective experience. The concept of qualia would still somehow have been invented and triggered precisely the same centuries of disagreement that we have seen.
This is just as inconceivable as p-zombies are conceivable (to some); therefore I can only conclude that there is some fatal flaw in the p-zombie argument, and so I dismiss the conceivability of p-zombies. Thus, the presence of consciousness must somehow be behaviourally distinguishable.
> The burden of proof rests on those asserting that something is true.
That's nonsense. I'm sure you can re-write most if not all assertions that "something is false" as equivalent assertions that "something else is true," therefore the burden of proof rests equally on everyone asserting anything.
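That re-writing claim can even be checked mechanically, at least for classical two-valued logic (a toy sketch; the helper names are made up for illustration): asserting "P is false" picks out exactly the same truth assignments as asserting "not-P is true".

```python
from itertools import product

# Two claims are equivalent if they agree under every truth assignment.
def claims_equivalent(f, g, n_vars):
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_vars))

# "P is false" vs. "(not P) is true": the same claim in different clothes,
# so neither phrasing shifts the burden of proof.
says_P_is_false = lambda p: p is False
says_notP_is_true = lambda p: (not p) is True

print(claims_equivalent(says_P_is_false, says_notP_is_true, 1))  # True
```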
The thing that astonishes me is that we really haven't made a dent in the problem.
It's surely something we've wondered about since prehistory, and since then we've made scientific discoveries about things we never even suspected existed. But we still don't know much more about consciousness, except for some additions to the chemical and physical events that can disrupt it.
Dennett's point, or at least the point he was trying to make 20 years ago, is that one of the reasons consciousness confuses us is that it isn't actually quite what we imagine it to be. Our visual experience seems continuous, but we know from eye saccade movements that it cannot be. The way we feel the passage of time seems very straightforward, but the 'Cartesian theatre' fallacy shows that our brain by necessity can't be processing things in a specific order.
I still don't understand what people mean when they say something like
"Rather than trying to reduce consciousness to fit into the box of relational/dispositional properties, it is time that we begin to explore it for what it is—and for the answers that studying it on its own terms, in its full splendor and variety, stands to provide."
What are you supposed to do with that? What does recognizing the ontological primacy of consciousness actually entail?
A lot of this sounds like the kind of "center of what's known" fallacies that plagued human thought in the pre-scientific era. The sun isn't central to anything, the Earth isn't central to anything, human consciousness is probably not central to anything either. Why wouldn't that pattern hold here? Why wouldn't we just be wrong about how unique consciousness is?
I went through the whole indoctrinated-religious, "devout" atheist, "scientific" agnostic, and now I guess I'm a theist.
Why? Because I read a book on consciousness that came like a bolt out of the blue and I understood that we do not understand.
I don't buy this "emergent property of the mind" thing because while I understand evolutionary theory I am not sure how you bridge that gap to subjective experience.
I have absolutely no clue what this "god" is that I accepted, but I know I'm not going to be punished for leaving this comment.
EDIT: just read the comments. I've been to BNL and CERN.
Unless I've misunderstood you, it sounds like you're advocating the "god of the gaps" argument. Don't understand something? Hide it behind the ol' God Band-Aid and call it a day. But that's just ignoring the problem with extra steps.
As a side-note, there are some radical, fascinating attacks on the hard problem of consciousness, such as dual-aspect monism.[1] I'd love to hear y'alls thoughts.
> I don't buy this "emergent property of the mind" thing because while I understand evolutionary theory I am not sure how you bridge that gap to subjective experience.
So you don't know, or aren't aware of any existing scientific theory, therefore it's magic?
I think the biggest question from all of this, including the fascinating comments on this thread, is why Elon Musk and OpenAI are talking about the singularity at this moment in time. We aren't there, and we don't have any way of knowing when, or if, we'll ever be there. One thing that is real: AI is buzzing with excitement, but nobody knows why.
And your claim is that consciousness is responsible for science, computers, existence...? I think you need to reread those books on atheism. Pay particular attention to the god of the gaps argument.
There are many things we don't understand, and as time passes we gradually understand more and more of them.
What does this have to do with the existence or not of God? Do you go back and forth between atheist/theist every time a new answer is discovered and a new question is asked?
Christian mystical theology has never had a problem with the god of the gaps argument. It has always postulated that God is something so utterly alien from us (prior to time itself!) that we can know him only through revelation. The nature of God lies in the realm of the unknown, for our finite minds cannot grasp the infinite. Knowledge of God is given only through experience and his grace. What has been revealed is logic-defying and mind-bending, although it has a certain patternicity of its own.
Whether the God of the Gaps points you towards atheism or theism depends on your concept of the unknown. If you think of it as a fixed sized bucket of things gradually trending towards zero due to the effort of science, then you are an atheist. If you think of the unknown as practically limitless in size and our process of discovery as barely scratching the surface, then you are pointed towards theism. Generally, materialists trend towards the former and phenomenologists towards the latter.
As the Wikipedia article states, the idea was first proposed by Christians rather than skeptics, arguing that it's a weak form of faith, so you're definitely right on that point.
> Whether the God of the Gaps points you towards atheism or theism depends on your concept of the unknown
I really like this assessment. Looking at the sibling comments here, most seem to think all questions will be answered eventually, but I suppose that's impossible to know, since we don't even know what all the questions are yet.
>There are many things we don't understand, and as time pass we gradually understand more and more of them.
On the contrary: when it comes to the fundamental nature of the universe, the more we've come to understand, the more we've realized how much more we do not understand.
This is not the contrary.
We realize better how much we do not understand, but we do understand more and more.
More things are moving from 'unknown unknowns' to 'known unknowns'; this is progress in understanding, not the other way around.
When Copernicus mathematically determined that the sun was at the center of the universe rather than the earth, was that "actual comprehension", or "perceived comprehension"?
> There are many things we don't understand, and as time pass we gradually understand more and more of them. What does this have to do with the existence or not of God?
You’ve committed a mental shortcut. It’s too hard, therefore God, or magic, or whatever.
I’m convinced the ability to form short and long-term memories is crucial to our consciousness. Take that away and a human being is little more than a fruit fly, mentally.
It really depends on his theology. I'd agree if he succumbed to a prescribed framework of dogma. But for those freethinkers seeking inner truth there are no shortcuts.
I think it's invalid to say he casts it in the "too hard basket" because he chooses theology. On the contrary, I think you cast it into the too hard basket because you're so steadfast on believing in the rational.
It is, after all, the unifying question all humans have pondered for possibly hundreds of thousands of years now. We still have no answers; maybe that's why we have religion? In any case, humans can't answer this fundamental question no matter how good our science has gotten, so in my book he's okay to give up on that. Life is too short. If theism works for someone, especially after such a great effort to find answers, he sounds like an ideal human to me. Probably a really level-headed individual, but I'm just guessing like everybody else.
On the contrary, it’s a problem that I would like to work on but lack the bandwidth to do so at the time being. Just because I don’t have the bandwidth to work on it doesn’t make it unsolvable.
Theology lacks proof. I’m happy to believe in God if there’s proof. It should be easy to prove the existence of a singular being with unlimited power, no?
"The wind blows where it wishes. You hear its sound, but you do not know where it comes from or where it is going. So it is with everyone born of the Spirit." (John 3:8)
Why believe that a god is reponsible for the gaps rather than believe that we simply don't have the answers yet and that eventually science will give us them (as it has in the past, countless times)?
I would take this further: even if we NEVER get those answers (because of some inherent physical limit on our observation and understanding of physical phenomena, similar to how Heisenberg's uncertainty principle means it's impossible to accurately observe both momentum and position), it still doesn't mean there needs to be a "god" introduced into the equation to explain that which is unexplainable.
Once you accept that not everything is knowable, not everything is observable, not everything is explainable and not everything has to have a purpose, and all that is perfectly fine without the existence of a divine being, then why do we need to add a divine being in the mix?
I think the main reason is that if we reach that limit of knowledge, we will always be left with the question "How did something come from nothing?" I don't know if humans will ever be able to answer that question, as it's not something we can experience. That naturally lends itself to the conclusion that there is something greater than us in the universe that can manipulate the physical world we observe in ways we do not understand. Whether that is the colliding of multiverses or divine intervention, we wouldn't and couldn't ever know.
Not at all. It’s perfectly okay in science to say we don’t know how that happens yet. Deciding that we’ll never know is an irrational leap. The “never” is a strong statement and requires proof. Can you prove never? I don’t think so. Then why assume it?
Well, there's never a way for me to know whether solipsism is true, or to even calculate a (meaningful) probability of it being true. In fact, this applies to probably infinitely many strange metaphysicses. All I know is that I'm conscious (or, more precisely, that "consciousness is"). I can say that "assuming the standard scientific metaphysics is correct, science might solve it," but that assumption is enormous and untestable.
That's assuming that "arising" happens as part of some logical plan. The universe doesn't have to be logical; physical phenomena don't have to follow human observation and logic. It's crazy hubris to assume the opposite, IMO.
Just wanted to say: I like the way you think. In fact, I'm writing up a piece about how to arrive at (what is presumably) a similar realization. In brief: (1) notice consciousness, (2) notice that it's impossible (in a well-defined sense) to know what is causing it. If done right, there can be an epiphany.
What part of subjective experience exactly do you think that evolutionary theory fails to explain? I study evolution quite a bit and, while there are many details about behavior and learning that are still missing, AFAICT it seems to explain consciousness and subjectivity pretty well.
> Consciousness. We could and should be robots. There is nothing to reproducing that requires this, which is the survival of the fittest.
Humans are social animals. To interact with other people, it is probably evolutionarily advantageous to have a mental model that can predict the actions of other people. Consciousness might be simply an offshoot of that mental model, in which a person predicts their own actions, too.
I'm not saying that this theory is true, although I do think I heard it somewhere. (Maybe it's in I am a Strange Loop?) But it's just meant to be an example of how consciousness could arise as a side effect of something with an actual reproductive advantage.
Darwin pointed this out as a possible explanation for the emergence of altruism and morality. He didn't like the idea of group selection, so he suggested that humans might have started to reason that if they helped others, others might be inclined to help them in the future. As I mentioned in my other comment this probably emerged as a side-effect of the increased memory/reasoning-buffer required for language processing.
You're not the first to consider this possibility, as you know. The novelist Peter Watts, in Blindsight, has a remarkably interesting, if also remarkably dark, take on the same idea. I won't spoil it here, since the book is an enjoyable read and freely available from the author's site: https://www.rifters.com/real/Blindsight.htm
("Philosophical zombie" would be a more precise term than "robot", btw, and might make soi-disant rationalists take you slightly more seriously because it's closer to the sort of language they like to use. I don't know why that would be a desideratum for anyone, but if it is for you, this may be worth considering.)
Given infinite time and infinite space, anything can happen. Time is a weird thing, but it seems more of it is created every day, increasing the possibilities.
What do you mean with robots? Describe how our actions would be different if we were robots, compared with our reality where we are not.
Brain teaser answer: Well, that's more of a question of physics, and the origins of the big bang. We are talking about brains here. They came about some billions of years later.
Explain to me how a somewhat intelligent creature would experience its life without consciousness. What's the alternative? How would you, as a living creature, be able to sense your environment and fight to preserve your survival if you didn't have a sense of self?
All I know is that "experiences seem to be happening." This "sheer fact of seeming" (aka experience) is all I can ever know, and to label it an "illusion" isn't really meaningful. "Illusions" are specific experiences that don't correspond to some truth (that I have derived from my metaphysical models, which I in turn derived from my experiences).
I'm not denying that the experience of life or consciousness exists. I just think it's not as significant as some make it out to be.
The question still stands. How would a creature experience life without consciousness?
Or let's even say you created a generally intelligent neural net with sufficient sensors, cameras, and actuators for it to live in our physical world, all piped into the neural network. It learns to use its sensors and actuators, much like a growing child would. What is its experience? How does this neural network experience life as compared to our brains?
To give you a hint, it wouldn't have a heads-up display with a battery indicator. It would feel tired or hungry. It would experience life immersed in the world to the extent that its sensors allow. Its sensors would be naturally fused in its neural net in whatever way was most optimal for its environment, much in the same way that we use sight, sensation in our feet and muscles, and the liquid in our ears to maintain our balance without ever thinking about the source or distinction of the data our brains are receiving.
I'll expand my own take on this thought, borrowing a mathematical analogy I read recently.
Take the real and complex numbers. The real numbers have the nice property that they are easy to order, since they form a line: given any two, you can tell which is larger. When you go up to the complex numbers you lose this obvious ordering; you can define a new ordering, but it is no longer trivial, since you are comparing points on a plane.
Now, some people see complex numbers as an extension of the real numbers, but it would be more accurate to call them a generalization of the real numbers. The real numbers are a subset of the complex numbers that satisfy the property of having this obvious ordering.
The same pattern repeats between the complex numbers and the quaternions. A quaternion is a point in a 4D space whose multiplication does not necessarily satisfy commutativity, i.e. ab != ba. The subset of quaternions that do satisfy commutativity is the complex numbers.
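The loss of commutativity is easy to check directly. A minimal sketch (the tuple representation and the `hamilton` helper are mine, not from the comment above):

```python
# Quaternions represented as (w, x, y, z) tuples; Hamilton product.
def hamilton(q1, q2):
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(hamilton(i, j))  # (0, 0, 0, 1), i.e.  k
print(hamilton(j, i))  # (0, 0, 0, -1), i.e. -k: ij != ji

# Complex numbers embed as (w, x, 0, 0); restricted to that subset
# the product commutes again, recovering the "lower level" structure.
a = (1, 2, 0, 0)
b = (3, -1, 0, 0)
assert hamilton(a, b) == hamilton(b, a)
```

Dropping down to the commutative subset is exactly the "losing an axiom in reverse" the analogy describes.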
Every time you go up a level you lose an axiom, which is an assumption on how things work for that system. In a sense an axiom is a useful limitation that gives a certain structure to things. So what if there are 0 axioms? This would mean everything is possible, no limitations. 2 + 2 = 5. Obviously everything being possible is not useful.
Now how does this apply to physics? Physical laws are our axioms. But what if there are actually no physical laws, we are just witnessing the subset of "everything" that appears to have structure? If we are only capable of understanding things that are rational, we would be inherently unable to process events that are irrational. We would project this irrational observation down to the rational subset that we can make sense of. Let me finish with an example.
Take a uniform quantum superposition. It has an equal chance of being measured 0 or 1. There is no hidden information determining this; it is truly random. This is irrational because it feels like there should be a reason for one final outcome being measured over the other, given our familiarity with cause and effect, but the outcome ultimately appears to have no cause. We project this phenomenon down to the rational and explain it the best we can with quantum states and probability.
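A small sketch of that "projection down to the rational", under the standard Born rule: the amplitudes fully determine the statistics, but nothing in the state picks the individual outcome. The variable names are mine:

```python
import random

# A uniform superposition: equal complex amplitudes for |0> and |1>.
amplitudes = [complex(1, 0) / 2**0.5, complex(1, 0) / 2**0.5]

# Born rule: probability = |amplitude|^2. This is the entire "rational"
# content of the state; it says nothing about which outcome occurs.
probs = [abs(a)**2 for a in amplitudes]

def measure():
    # The single outcome has no cause in the model; only the
    # long-run frequencies are constrained.
    return random.choices([0, 1], weights=probs)[0]

freq = sum(measure() for _ in range(10_000)) / 10_000
print(probs)  # each probability is 0.5 (up to float rounding)
```

The empirical frequency `freq` hovers near 0.5, which is all the formalism can promise.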
I think evolution explains that fine. It's the same story as for physical development: the extrapolation and refinement of living organisms via natural selection over the course of hundreds of millions of years.
Consciousness is hard to define, but if we examine animal intelligence it unsurprisingly seems to exist at various levels of complexity. So even worms have intelligence (and consciousness IMO) that isn't quite robotic: they react in predictable ways to light, but the reaction is modulated by other factors like temperature, moisture and what they are doing at the time. Their behavior can be described by a simple neural net, which is in fact what they have. (see Darwin 1881)
Larger animals evolved the ability to evaluate multiple such signals simultaneously, and to do so it's necessary that lower level consciousness (like what the worm has) can send signals to higher levels of consciousness for evaluation. So if my foot is cramping a lower level system sends an appropriate pain signal to a higher level system, which can then aggregate that signal with other signals and cross reference it against current active behavioral plans. If I am meditating, I might ignore the pain, but if I'm working I might get up and stretch. This is similar to how the reaction of a worm is modulated, but just filtered up through higher levels of consciousness.
Why would we develop higher levels of consciousness? Because any change to animal brains that allowed them to be aware of sensory input and evaluate it with reference to behavioral goals (instead of just reacting like a robot) would increase that animal's fitness, and any marginal increase in fitness will result in those traits being passed on.
The most significant one for human consciousness in particular is that the appearance of language pushed evolution in the direction of favoring a large working memory. If you can tell me "the bananas are down the hill, to the right, across the stream, next to the big stone" and I can remember that information and go find the bananas, then my fitness is improved. Darwin also pointed out that this increased working memory allows people to reason that if I help someone, that other person might help me back, allowing altruism to emerge from the cognitive space allotted to planning, which evolved from the pressures related to holding large chunks of linguistic data in the mind.
So there are two things: first is subjective experience, which to me seems to be easily explainable as being like "system signals" from lower systems to higher systems. It's important to remember here that processing information and responding intelligently to the environment are among the defining features of all life, even the most basic (Adolf Heschel 2002). The smell of rotten eggs (example from the article) is simply a chemical signal that we have, understandably, evolved an aversion to. My reaction to ice-cream is similarly a chemical signal filtered through a lower biological system which interprets it as something REALLY GOOD and sends the appropriate signals to my higher-level cognitive systems. The fact that it's a subjective signal and not a "robotic" reaction means that I can respond to it differently depending on other factors, which is clearly advantageous from the POV of natural selection. If I have diabetes I can resist the temptation of "deliciousness" to my great advantage.
The second thing is that humans have a great memory, so we can hold lots of these signals in our minds alongside memories, plans, ideas etc. Since that ability was associated with increased fitness over millions of years, it has increased to the level we see today.
I largely agree, but it feels a bit like side-stepping the real question. We can make decisions that are completely orthogonal (and often opposite) to whatever would increase our "fitness". This agency is what's peculiar about consciousness, in my opinion. Personally I think it is some sort of parallel, rogue system that has branched off from the higher-level controller mechanism that you describe.
That's the amazing thing about evolution: the products of evolution don't have to be well engineered, they don't have to be perfect, they don't even really have to make sense. They only have to confer a marginal benefit to individuals with respect to their reproduction.
There are plenty of examples of bad design in evolution, human consciousness included (anxiety, depression etc). It's only necessary that the benefits outweigh the drawbacks for the feature to stick around.
It is amazing indeed! At some point our memes, language and culture became more powerful than any natural mutations. Perhaps this was helped by these irrational/rogue features of our consciousness.
This still doesn't explain what it is, so it seems almost pointless as an answer. I'm not sure how whether it came first, second, or third matters to explaining what it really is and its relationship to reality.
>The issue is that physical properties are by their nature relational, dispositional properties. That is, they describe the way that something is related to other things
Author neglects to mention that this may apply to everything except the universe itself.
>Something in the universe has to have some kind of quality in and of itself to give all the other relational/dispositional properties any meaning. Something has to get the ball rolling.
The "something" may be the universe, i.e. its total wave function. Occam's Razor suggests looking for simple explanations rather than assuming the existence of things not detectable.
Could the universe itself be conscious? Making such a claim would seem to make the author's argument a tautology.
That's an interesting idea though it's hard to see how it would work in practice. In practice, our scientific concepts are ultimately grounded in observational data, data accessible to a conscious observer. I don't see how scientific concepts could be restated in terms of the universe as a whole, especially when one considers we only have very indefinite information about the ultimate characteristics of it.
I don't agree that consciousness came first. Before I was born, there was a long stretch of time when I and my consciousness did not exist. Yet the objective world as we know it existed then.
Commence with the 'there is no free will' crowd followed by argumentation that assumes free will (in order to change people's minds... which they have no control over).
I may get pooh-poohed here, but to me, this discussion is one of a spiritual nature. My belief is that God created all things spiritually first, before he created them physically. And, God is what gives all life its sentience. Understanding the brain is only part of the equation. The soul of man (and woman) is both the physical body (including the brain) and the spirit. Until we can fully grasp the spiritual, we will never be able to make the "leap" between paper and consciousness. Anyone else feel this way?
If the spirit interacts with the brain, then the spirit is a physical thing that can be measured and is in the realm of science. If the spirit does not interact with the brain, then its existence does not have any effect and it is irrelevant to understanding consciousness.
The hard problem is leaking dangerously into the real world though. Chalmers' original paper posited consciousness as a physical quantity, which presumably could be empirically studied.
I don't think so, because to Chalmers it's an epiphenomenon, and so has no influence on the outside world. Only our minds experience consciousness, but as an epiphenomenon it doesn't influence our behaviour one bit, so to Chalmers, we might actually not even have consciousness and are simply deluded in thinking we do.
Exclamations of "but I know I'm conscious!" would mirror exactly those of a p-zombie who wasn't conscious.
Chalmers' consciousness is not an epiphenomenon; it's an entity of its own, but not part of the physical world.
> I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time. If we take experience as fundamental, then we can go about the business of constructing a theory of experience.
The problem that most physicalists see is that this introduces an entity seemingly needlessly, which breaks Occam's razor.
So me saying "I am conscious"/"I exist" is not influenced by consciousness? In his view, this subjective experience just happens to be bound to and is observing a person - "me" - that says this?
You can study consciousness, but not scientifically. The "trick" is that, though you can't make a scientific instrument to detect or measure consciousness, you do have one "instrument" to use: your own consciousness. For example, two people can "merge" and experience themselves as one.
That's unsubstantiated. Arguably, a person's ability to play any specific multiplayer game at a specific time can reasonably be thought of as a measure of consciousness. Then a simple ranking can be used to assign a number.
It is a good theory, as it clearly works with drugs, sleepiness, etc.
I wonder if she knows this is essentially the Kalam cosmological argument for God's existence.
The heavens declare the glory of God, and the sky above proclaims his handiwork. Day to day pours out speech, and night to night reveals knowledge. (Psalm 19:1-2, ESV)
I can answer a little bit of the question. Why do rotten eggs smell like rotten eggs and not roses? The answer is: it's actually arbitrary.
In my 20s I broke my leg fairly badly. It damaged the nerve, but luckily left the sheath of the nerve undamaged. The end result was that half my foot was paralysed. What I never realised is that if the nerve sheath is undamaged, the nerve will regrow! So slowly over time I started to get feeling back in my foot. I don't know exactly what goes on, but I'd get this sharp shooting pain, like a needle in my foot and slowly after each time, I'd regain a little bit of feeling (I don't know... is that just the nerve endings "reattaching"???)
Eventually, I got pretty much all the feeling back... except it wasn't mapped properly. The space between my toes felt like the sole of my foot and various other strangeness. Slowly I got used to it. A few years later, my brain had completely translated everything and the space between my toes felt like the space between my toes. Even though I know it's mapped differently I can't consciously distinguish between the feelings I have now and the feelings before I broke my leg.
So a rose smells like a rose because the receptors that are activated when you smell a rose are different from the receptors that are activated when you smell a rotten egg. It's just like the nerve endings between my toes are different than the nerve endings on the sole of my foot. But it's just input for your brain, nothing more. Of course we have an aversion to the smell of rotten eggs, but that's just hard wired -- more input for the brain. There are lots of people who have aversion to smells that other people like.
In terms of "consciousness", I think it's likely an illusion of sorts. We experience a kind of continuum of consciousness. In reality, though, there is only an instant. Our awareness is an artefact of our memory.
Behaviour couldn't exist without a feedback loop. For those of us who are programmers, this is pretty obvious. In order to have a state machine, the new output needs to not only look at new input, but also its current state. Every instant we exist we are processing new input and also our current state (which seems to prioritise recent inputs). We exist instantaneously, but because we are processing previous data along with new data, it creates an illusion of having existed over a continuum. Additionally, the only reason we have an ego is simply because our data networks are isolated. If I could access the data of another brain in the same way I can access data in my own brain, there would be no way to distinguish between the "two of us". There would be no way to distinguish between "my thoughts" and "somebody else's thoughts". "I" would not exist... or rather "I" would be the "two of us". It's just an artificial distinction based on a lack of ability to access the data.
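The feedback-loop point can be sketched as a tiny state machine. The exponential blend that "prioritises recent inputs" is my own illustrative choice, not anything from the comment:

```python
def step(state, new_input, recency=0.7):
    """Compute the next state from the current state plus new input.
    The output depends on carried history, not just the present instant."""
    return recency * new_input + (1 - recency) * state

# Two "instants" receiving the identical input respond differently,
# because each carries a different past inside its state.
s_a, s_b = 0.0, 10.0
s_a = step(s_a, 5.0)
s_b = step(s_b, 5.0)
print(s_a, s_b)  # 3.5 vs 6.5: same input, different response
```

That carried-over state is the "processing previous data along with new data" that, on this view, creates the feeling of a continuum.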
Or at least that's the way I look at it. There is no way to know for sure. It might just be the FSM manipulating me like a puppet.
I wouldn't call the mapping necessarily arbitrary. The causal/information processes that your nerves connect to may have some mathematical structure that distinguishes them.
There's the symmetry theory of valence which proposes that symmetric (in some strict mathematical sense) processes feel good, and vice versa with bad qualia.
We will be able to explain and understand consciousness in objective terms 1 second after the first person achieves flight by pulling themselves into the air with the bucket they are standing in. About 5 seconds after a computer can run a 100% simulation of itself running a 100% simulation of itself. A whole minute after someone writes a program that can tell if/when any other programs will stop running.
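That last one is Turing's halting problem, and its impossibility isn't just pessimism; the diagonal argument can be sketched directly. The "oracle answer consistency check" framing below is my own way of making the contradiction executable:

```python
# Suppose some halts(prog, arg) oracle existed. Build the usual diagonal
# program: paradox(p) loops forever iff the oracle says p(p) halts.
# Then ask whether paradox(paradox) halts.
def paradox_halts(oracle_answer):
    # If the oracle says "halts", paradox loops forever (doesn't halt);
    # if the oracle says "loops", paradox returns immediately (halts).
    return not oracle_answer

# Whatever the oracle answers about paradox(paradox), it is wrong:
for answer in (True, False):
    assert paradox_halts(answer) != answer

print("no consistent oracle answer exists")
```

Both possible answers contradict themselves, so no such oracle can exist, which is exactly why the comment ranks it alongside bucket-assisted flight.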
As a question that's dogged us for thousands of years, maybe it's time to accept it's just a shitty question.
Nonsense. Consciousness is a matter of information processing. We're just biological computers running a program. The so called hard problem of consciousness is really just folklore by now.
Neuroscience will eventually provide a complete explanation of consciousness. I'm frankly surprised we still think in magical terms about consciousness, in this time and age, knowing all we know about how the universe works, knowing how the brain works.
Pragmatic philosophers, such as Thomas Metzinger, have already accepted the failure of philosophy in this regard, and support the neuroscience approach.
I can refer you to Thomas Metzinger's book, "The Ego Tunnel: The Science of the Mind and the Myth of the Self". He went through great efforts to make it accessible to laypersons, such as myself. It's well worth the time.
Side note, I got to Thomas Metzinger from Peter Watts' Firefall. He says something along the lines "Metzinger is THE man" (will try to find the exact text and come back with an edit at some point).
Sorry to be so direct, but I have to say it like this otherwise it doesn't get through.
Consciousness is that which makes us go take something to eat in the morning so we don't die of hunger, keeps us from running into cars or falling off a high place, allows us to work and be intelligent in our actions so that we cover our needs, guides us to form relations and make babies (thus replicate consciousness further).
I think we are agents with the purpose of survival and reproduction. That is only possible by adaptation to the environment, including society and nature. While we're busy at keeping ourselves alive, we have to act intelligently and learn from our mistakes. There is nothing outside the realm of physics and nature, just plain old agent+environment+learning+self-replication.
It feels like something to see blue (or to be a bat) because it is linked to survival, because we have senses and brains to create models of the world, because we have positive and negative signals (rewards) that guide our present and future actions and the value we attach to all life situations, and ultimately because life depends on it and self-replication would eliminate inefficient agents and leave those who are fit.
In short, consciousness is an adaptation mechanism in service of self replication in an environment with limited resources and competition.
I think all this dualist incredulity towards science is bullshit. Instead, the evolutionary process and the reinforcement learning process are sufficient to explain it. If we want to learn about consciousness we should not look only at the brain but at the environment and its limitations. It is the environment that shaped consciousness into existence.
I know it's not as poetic as souls and hard-problems, but it is the simplest explanation that fits.
Many people would reject your definition of consciousness. You can easily imagine a universe where agents perform all the activities you listed, but without an experiential flavour to them. It would all be an "empty" process, much like a simulation in software except with more complex rules.
Yet our universe seems nothing like this. There is an experiential flavour to it which everyone has access to from a first person perspective.
I am not convinced that a successful agent in a competition-driven environment could be an empty process. It would have values based on the utility of its actions. Sensations plus values equals experiential flavor.
That's very well said. Consciousness isn't a "thing" to be discovered, it's a process. It's only visible when a variety of "things" are functioning in harmony. It's like asking which guitar string plays the G chord. It's _all_ of them and only when in tune.
A thermostat doesn't need to compete for resources to keep itself in good shape and make little thermostats. There is no evolutionary pressure, no learning from the environment.
This seems so intuitively wrong to me; like postulating that Windows is necessary for assembly to exist.
Almost none of the preliminary claims in this article feel compelling to me: "no physical property or set of properties can explain what it’s like to be conscious". Where is the proof of this? All the author does is re-state this sentence in different ways.
I imagine people have had similar conversations throughout history: "no one can really predict weather", "no-one understands economics", "no-one understands what makes people fall in love"; but would you claim that (for example) love is the trigger, or boundary condition which causes reality to coalesce?
The claims which do feel believable are:
1) Many aspects of physical reality are relational
2) Some boundary conditions are necessary in order to make everything well-defined
But there is a real gap from the points above, to the conclusion that consciousness is the missing boundary-condition. Maybe we just haven't discovered the missing boundary-condition yet. Or maybe there is none, and reality is ill-defined.
Everyone who's going on about "we don't know enough about the brain to even START this" feels like a Sunday-school teacher telling me not to question the Greater Truths. It's a thought-killing sentiment. It's a subconscious defense mechanism for people who can't admit that they're a mostly-deterministic machine.
I think Elon's Neuralink will shed more light on the shroud of mystery that is consciousness.
Maybe not, but I think there are people who are saying "damn the naysayers" and just trying shit that they think can give some insight into how everything functions up there.
Consciousness is not an impossible problem. It only becomes absurd if it's tinted of mysticism and dipped into a half-digested understanding of the current scientific consensus.
First, let's demystify consciousness. Scientifically, there's no soul - it's a religious concept that has no meaning outside of religion. What's left of consciousness is its shell, its interface.
If we can model something that behaves like consciousness, well, we created a genuine consciousness.
There are two sides to the shell of consciousness. One is the outer shell: the problem is to build something that appears to be conscious. This is a reasonably hard CS or neurobiology problem, but by no means impossible. We are getting reasonably close to generating seemingly conscious automata. Surely it's possible to see specific parts of the brain connected to this function: for example, the speech center, and so on. So: hard to do, but doable.
The other part is the inner shell, or "self-consciousness." This has to do with perception, mostly, and abstraction. We have already created expert systems that can perceive and give meaning to a lot of inputs.
The trick is creating a system that can perceive itself, its state like it sees the world. This will require a general AI or strong AI, which is currently believed by many experts to be possible although super hard to do. Again though, this is a CS problem, it's addressable by science, and I have no doubt it's a matter of time until we can settle it.
A machine with both characteristics would be just as alive as you or me. It would perceive the world semantically. It would understand to be conscious, and we would recognize it to be conscious too.
At that point, the question of consciousness will become once more only attractive to philosophers and priests. The rest of us will have less problem accepting one of these machines as "alive and conscious."
[1] https://hearingbrain.org/docs/letvin_ieee_1959.pdf