The real answer is that we don't yet know enough about how the brain works to work effectively on this problem. We don't know what questions to ask or how to break down the problem into smaller problems.
We may get there. Read something about how vision works from a century ago, when nobody had a clue. The first real progress came from "What the Frog's Eye Tells the Frog's Brain" (1959).[1] That was the beginning of understanding visual perception, and the very early days of neural network technology. Now we have lots of systems doing visual perception moderately well. There's been real progress.
(I went through Stanford CS at the peak of the 1980s expert system boom. Back then, people there were way too much into asking questions like this. "Does a rock have intentions?" was an exam question. The "AI winter" followed. AI finally got unstuck 20 years later when the machine learning people and their "shut up and calculate" approach started working.)
I wholeheartedly agree with your first point. I was a philosophy major, and it was frustrating how much of the philosophy-of-mind field was attempting to "run" off with its ideas before it could "stand".
I realize this could be true for a lot of other schools of thought, but it seemed especially prominent when arguments about what makes a person seemed to rely on a lower-level assumption about how the brain works.
The way I see it, that's basically the definition of philosophy. When some sub-discipline of philosophy becomes clear enough to define its questions, it gets some other name (cf. linguistics, economics, "natural philosophy").
What's left as "philosophy" is always the stuff where we don't even really know what questions to ask. So we kick them around for a few centuries, or millennia, in the hopes that something will eventually take on a shape that can be pursued in a better-defined fashion.
> So we kick them around for a few centuries, or millennia, in the hopes that something will eventually take on a shape that can be pursued in a better-defined fashion.
This process has a name: The Great Conversation [0]. It seems a bit presumptuous to me to think Plato would have any clue as to what we're saying, but it is a good name for the thing.
An interesting observation, but there's also an analytical side to philosophy: bringing in information from other fields, figuring out the implications, possibly drawing conclusions, and pointing out new directions to try.
I don't know that learning more about the brain's operation will satisfy people who resist the notion that their consciousness is a property of a physical system. Since it is an emergent property of a complex system, even if we understood the functioning of each independent piece, we would still be left with the question of why all of those pieces in concert have the property of consciousness. While we gather more and more data on the brain, we barely even have a beginning notion of how emergent properties actually work. If we had to take everything we know about atoms and their interactions and explain where phase transitions and states of matter come from, we would be stymied. We know from experimentation that very large groups of atoms do exhibit distinct phase transitions. We know every way individual atoms can interact. But bridging that gap from components to being able to predict large-scale complex nonlinear dynamics... we have more proofs of our inability to do such things (with conventional tools) than hints of how we might tackle it.
We are very much in the infancy of our understanding of the brain. New tools like optogenetics, CRISPR-Cas9, and CLARITY should assist greatly in understanding the brain, and the body in general. Still, we are only just starting to learn how much we don't know about the brain.
To me "consciousness" feels mostly like early chemists talking about phlogiston. Or like early biologists discussing élan vital. Or even current physics' "dark matter". These are words that don't really point to single externalities with sharp borders; instead, they are terms we apply to disparate but seemingly (somehow) related phenomena.
> The real answer is that we don't yet know enough about how the brain works...
I don't think the "problem of consciousness" is one of missing empirical evidence so much as simply a fuzzy (ill-posed?) question. Though enough evidence might be sufficient to forcefully dissolve an irreal question. On the flipside, a lot of the current ML research does a good job at addressing consciousness by breaking it into concrete, communicable actions.
Instead of asking, "What is consciousness?" try asking, "What actions could XYZ take that would convince me it's conscious?" A related question is "What actions (in minute detail) are involved as I believe/think/feel I'm conscious?" Those two questions are similar, but tend to evoke quite different sets of "external" experiences and actions.
It might turn out that there is some mathematically invariant property of things that are capable of acting as convincing conscious agents, a la the physically precise definition of heat turning out to behave kind of like phlogiston. In such a case, we might in fact find a "thing" that deserves the label "consciousness"---such as Hofstadter's idea of a strange loop---but for now I think the term is pretty much used in a way that's synonymous with "magic".
>A related question is "What actions (in minute detail) are involved as I believe/think/feel I'm conscious?"
I think we can define a generalized version of what we call a "conscious thought" as a thought that can be "consciously reflected" upon. By "conscious reflection" I mean using some representation of the thought as an input for another thought, in such a way that the new thought, including its usage of the original thought, can be used as an input for yet another thought in the same way. The representation doesn't have to correspond particularly closely to the execution. (We remember thinking in natural language, but maybe the thought process is just converted into natural language after the fact, for the purpose of saving and reflecting.)
Computer programs can also be conscious according to this generalized definition if the process of executing it is saved in some way and can, depending on the circumstances, be used in some way, which is saved in the same way.
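To make that generalized definition concrete, here's a minimal sketch (my own toy construction, not anything from the thread; the names `think` and `trace` are purely illustrative): each "thought" saves a representation of itself, and a later "thought" can take those saved representations as input, while itself being saved in the same reflectable form.

```python
trace = []  # saved representations of past "thoughts"

def think(label, inputs):
    """Run a 'thought' and save a representation of it to the trace.

    The saved string stands in for the thought; it need not mirror the
    actual execution, matching the point about natural language above.
    """
    result = f"{label}({', '.join(inputs)})"
    trace.append(result)
    return result

# A first-order thought:
t1 = think("perceive", ["red patch"])

# A second-order thought that uses the *representation* of t1 as input,
# and is itself saved in the same reflectable way -- so a third-order
# thought could reflect on it in turn:
t2 = think("reflect", [trace[0]])

print(trace)  # both thoughts are now available for further reflection
```

Under this reading, the program is "conscious" in the generalized sense exactly because its trace entries can feed back into further traceable steps, which is the criterion the parent comment proposes.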
That different "ways" of using a thought as an input for another thought are possible means that there can be different consciousnesses in our brain. One of them (if there are multiple) is responsible for our output, at least at a higher level, or at least significantly informs it; that one is what we call "my consciousness". The reason we don't know whether "my consciousness" is actually responsible for the output, despite the appearance that it is, is that it may be mostly a "post-hoc rationalization engine" for decisions made on another level, possibly for other reasons. But it does at least inform the output, since past thoughts of "my consciousness" inform our later output. For example, if we are asked about our past thoughts, we talk about the ones that are remembered in "my consciousness". This is what puts it in such a special position compared to other hypothetical consciousnesses, which don't inform our output in this way and are therefore invisible to other people, and obviously also to "my consciousness".
Think about it like this: What would have to change to call a conscious thought unconscious, or an unconscious thought conscious? The ability of "my consciousness" to reflect on it.
> On the flipside, a lot of the current ML research does a good job at addressing consciousness by breaking it into concrete, communicable actions.
I wasn't aware ML had anything to do with consciousness.
> In such a case, we might in fact find a "thing" that deserves the label "consciousness"---such as Hofstadter's idea of a strange loop---but for now I think the term is used pretty much used in a way that's synonymous with "magic".
So you consider your experiences of color, taste, pleasure, etc. to be akin to "magic"? Because those sensations are what make up our conscious experiences.
> "Does a rock have intentions?" was an exam question.
What does a good answer to this question look like in this context? Genuinely curious what they were looking for.
Imo the real question is whether humans have intentions. It seems like if you look at it rationally, we're just collections of chemicals reacting with each other. Set the initial conditions and then the whole thing is deterministic. It's pretty uncomfortable to think this though, so I think it's best if we avoid the subject.
I would encourage you to think on this some more until the discomfort diminishes. Just because things are deterministic (at a level of complexity that is difficult or even impossible to imagine, let alone predict with our current understanding), doesn't mean your experience is any less real or important for you.
Imagine you are on a rollercoaster: you know your course is pre-determined, but you can't see too far ahead, and it sure is a fun and surprising ride along the way.
I think that's unlikely to happen. Even if it all is deterministic, the amount of variables at play will make it very difficult to determine the outcome beforehand (if not even impossible, eg Conway's Game of Life).
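The parenthetical about Conway's Game of Life can be made concrete. A minimal sketch (my own toy code, not from the thread): the update rule is completely deterministic, so the same seed always produces the same history, yet in general there is no shortcut to the outcome — whether an arbitrary pattern ever dies out is undecidable, so you have to run it to find out.

```python
from collections import Counter

def step(live):
    """One deterministic Game of Life update; `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# Determinism: running the same seed twice gives identical histories.
a, b = glider, glider
for _ in range(4):
    a, b = step(a), step(b)
assert a == b

# After 4 steps the glider reappears translated by (1, 1) -- known here
# only because someone once simulated it, not because it can be read off
# the rules.
assert a == {(x + 1, y + 1) for (x, y) in glider}
```

So determinism and practical predictability come apart even in a system far simpler than a brain.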
I think the question one has to answer first is what "intention" is in itself; at least some informal definition to work with is necessary. How you define it will shape the answer to the rock question.
> Set the initial conditions and then the whole thing is deterministic
If quantum physics theories are correct, then there's always some amount of pure randomness in the game, making it impossible to create a perfectly deterministic and repeatable system of any significant complexity.
Randomness is not an objection to determinism. The problem is not that decisions cannot be predicted; the problem is that some people think decisions don't exist. For that view, it is irrelevant whether they happen by chance or by rules.
The real answer is that decisions exist in a different frame of reference. Some philosophers are stuck in a model in which decisions were taken by the soul, an immaterial entity. So if decisions are generated by physical processes, they must be an illusion.
But that's as idiotic as it can be. Our brain is a material system, and of course decisions are generated by physical processes inside it. The interesting question is how much of us is malleable, and how we can use our decisions to change ourselves and our own decision-making process.
Where does the randomness come from? If we could rewind the universe back to the same starting conditions, would it be any different the second time through and if so, where did that difference come from?
If you suppose an infinite multiverse where every possible thing happens in parallel, a typical observer will find themself in a universe with events that seem random. There are a lot more random-looking numbers than orderly-looking numbers.
Videogames are deterministic too, but that doesn't mean all of the things that a player's character does are predetermined inside the videogame. I like to think that there is a similar analogy for our universe, where a soul is controlling our body the same way we might control a game character.
Well... from a game designer's perspective, a player has very little free will... Yes, they can make choices, but in the grand scheme all the choices are predetermined.
Good level design is basically when a player traverses a predetermined path while not feeling that they are on rails...
An example of good level design is the Half-Life games: the player traverses the world as if they could go anywhere, but they always pick the right way.
Another example is Dark Souls, which keeps you in loops so you never hit a "dead end".
Thanks, I like to see the other side when I believe something.
I once read Dennett's critiques, and I remember I didn't find them very convincing. But right now I don't remember his arguments, and I can't analyze them again at the moment well enough to comment on this topic. If you could mention Dennett's main argument against what Harris says about free will, I'd be thankful.
Marvin Minsky said we are 'meat machines'. What does Dennett say against that? (Considering machines as deterministic.)
Dennett's overall claim is that incompatibilists are making a category error when they say that determinism at the physical level precludes free will at the level of subjective experience. He argues that free will, and really any kind of freedom at all, emerges from the layers upon layers upon layers that make up the existence of what we call a living creature. (And that there are layers, or degrees, of freedom.)
I'm afraid I don't remember the exact contents of the Harris/Dennett debate; I should probably reread it myself. :)
> any kind of freedom at all, emerges from the layers upon layers upon layers that make up the existence of what we call a living creature.
I don't see why layers upon layers would imply freedom. A complex Java web framework may have layers upon layers of abstractions, and that may make its operation hard to understand fully. But that doesn't mean it isn't deterministic.
Spinoza wrote: Men are mistaken in thinking themselves free; their opinion is made up of consciousness of their own actions, and ignorance of the causes by which they are determined.
This was a wonderful book. As I recall, it catalogs a lot of convincing evidence that things come into conscious awareness basically upon a certain level of global activation in the brain -- when enough parts of the brain are "talking" about it, typically when different parts of the brain are having conflicting activation patterns. It likens this to the "workspace model" of awareness. And it's clear why the brain would need to resolve such a conflict, why it would need something like focus or attention to do so, and how this would relate to all sorts of information processing needs of organisms that behave in ways that keep them alive.
But there's just nothing that I recall in that book that suggests or even hints at any reason for this to result in a subjective experience. And I don't believe it rules out an electron having a nano-unit of subjective experience, for example.
The book suggests that something is a subjective experience if an organism can report it as one, and goes to great detail about what's going on in the brain when an organism is able to report a subjective experience, and makes the very reasonable suggestion that something is probably a subjective experience in a brained organism that can't report it as one so long as it exhibits all the same patterns in that organism's brain (infants, other animals).
But I don't think it does any work at all to show that there is no subjective experience in plants, or rocks, or even nano-units of subjective experience in individual electrons. It can't do this, because, as far as I recall, there is simply no progress on the problem of why this global activation in the brain would produce a subjective experience.
> The book suggests that something is a subjective experience if an organism can report it as one
Also, just because someone reports having a subjective experience, doesn't mean that they actually have one. I do have subjective experience, but I have no way of telling whether someone else has one, even if they claim so.
Eh, this still doesn't go beyond the fundamental "I think therefore I am." I can prove to myself that I have consciousness, but for everyone else, ¯\_(ツ)_/¯
You can't prove even that.
If you examine it closely, the argument only proves that "it" thinks.
"It" is not necessarily an "I", for what restricts the thinking to an I (i.e. a part of reality)? It could be the whole of reality that does the thinking, as far as the argument goes.
It proves itself if you define "I" as the experiencer of the thinking, but "I" (like many other complex words) is much more overloaded.
Unfortunately all our word definitions seem shaky if we want to describe something that is the base requirement of those very definitions.
Maybe the best we can do is to deconstruct the above using more simple or base terms, but the meaning of those terms maybe also depends on the content of experience not the mere fact of experience:
I experience thought -> experience of thoughts exists -> experience exists -> something exists
So upon experiencing thought you may conclude that "something exists", or "there IS something"...
Of course you can. You know it to be true that you posses consciousness because you experience it directly. What is impossible (empirically) is knowing that about anyone or anything else.
We know there is thinking. There is no reason to believe that subject-object duality has any basis in reality, or that any individual, including our "self", has a sufficient delineation to consider it an independent entity.
More fundamental than thinking is experience itself.
Regardless of whether there is a "you", or if it's some amalgamation of state that is loosely bound together and "fooled" into thinking it is a unity, something is there experiencing. At least in my frame there is.
This isn't something you can prove, because it comes before any sort of structure capable of doing proving. It's just something that's a given, and you start from there.
Descartes' "Meditations on First Philosophy" is the originator of this idea. While it is dated, the form of its principal argument hasn't changed.
With regards to conscious unity, there is at least a weak form of it in the sense that you can't experience others' experiences. While it is possible that your own experience may not be fully unified, it is (very likely) disjoint from others' experiences.
You can experience another person's experience when you see them smile or cry; we call it empathy in modern parlance. The Hogan twins, joined at the head, have an even more direct connection to each other's experiences.
I've never been very moved by ideas that I can't know or share other people's feelings, or other fanciful ideas like their blue being my green. It's more reasonable to assume they are like me because we share similar hardware (DNA) and software (culture). Others' hands look like mine, more or less. Others' legs are like mine, more or less. And so others' perception of green is like mine, more or less.
Telling me that my experience is prime, or fundamental doesn't tell me much. Similarly, saying I think therefore I am doesn't tell me much. What then am I and what is existence? I think therefore I am only as much as I think I am. And sometimes I forget myself.
>The real answer is that we don't yet know enough about how the brain works to work effectively on this problem.
If we imagine there's a God, and I could ask any one question about the brain and get an answer, all I would need to ask is:
"Hey God: w.r.t. the human brain, are there any special shenanigans, like being connected to a soul that's responsible for consciousness, or is it WYSIWYG, just a bunch of cells and that's it?"
Luckily for you, there is no God and I can answer definitively: no, there are no shenanigans. It's just some cells and that's it. I am telling you this definitively. There are no metaphysical shenanigans going on in the human brain.
Note: you might wonder why, for this answer, I decided to phrase it in terms of asking God. The answer is in order to activate the natural scientist's reaction "that's silly, God can't tell you there's no God like that". Well, brain metaphysics is exactly and equally silly. It's just some cells, that's all.
It's true that it isn't something we can say definitively. It's more than a philosophical belief, however.
There isn't any evidence that there is anything happening that's not biological despite 1000's of years of searching for that evidence.
And there is all kinds of evidence that biological processes can explain our experience. More all the time.
It's not definitive in the sense that we're anywhere near completely understanding our biology.
But it's the most likely explanation, given what we do know, by such a huge margin that there is no real alternative explanation outside of mythology.
>There isn't any evidence that there is anything happening that's not biological despite 1000's of years of searching for that evidence.
Do you consider software in a computer to be mechanical? You could record the 1010101 in an electricity stream and conclude there is no meaning to that electricity beyond getting from point A to point B in a chip.
> There isn't any evidence that there is anything happening that's not biological despite 1000's of years of searching for that evidence.
If we're looking for something supernatural like a soul, then we shouldn't expect to see natural evidence. Thus the lack of that evidence does not imply its nonexistence. Relevant xkcd: https://www.xkcd.com/638/
It's also the kettle calling itself black. The kettle, the calling, the self: that's all God... Nature... Universe, whatever you wanna call it. You cannot be sure you are anything in this universe. Even a sentence like "I think therefore I am" requires short-term memory; the universe could have just created you at this very moment. It's all subject to corruption, subterfuge, and undecidability. Big whoop, you find out that the brain is connected to your thinking; that still doesn't explain why you, why the universe, why the brain, and why the thinking.
You simply don't know; that is my point. By continually asserting it you are demonstrating a lack of actual scientific thinking. It is convenient to trust one's memory and the physics of the universe as consistent and indubitably pure. But you have been around less than 100 years in a universe of billions of years, so claiming any kind of self-assuredness about the universe's or consciousness's patterns is total foolishness. A glimpse doesn't warrant the authority to describe the entire landscape.
Hey, you're right, I forgot to mention that MAYBE the brain might have a long cord going off into soul land, and that's where the real stuff happens. You know, in the cloud. It's just that maybe we haven't found that cord yet. It's super, super transparent! Hard to see. Wireless, even. That's why you can't think in a Faraday cage: no access to the cloud.
See how silly that sounds?
So on one level, sure, you're right; on the other hand, it's obvious: no, there's obviously no super-transparent cord going off to soul land. It's just a bunch of self-contained cells. It's not wireless. It's not even networked. It's limited to within your body. You can literally plunge yourself in water and still think.
You have no cord going off into the ether. You are not a networked component. You are just a bunch of cells in one little package, and that's it. That's you. Your output is what your body does robotically (move, make sounds), your input is sensory, and your consciousness is whatever your brain cells are doing.
You forgot to mention that your theory requires that _nothing_ gave birth to _everything_ via _undefined behaviours_ with intelligence and consciousness being mere side effects. That's at least as silly as the other explanation.
> there's obviously no super transparent cord going off to soul land.
You make it sound extra silly by using this type of language. Remember when hardcore scientists were saying "my dear, you must be mad to think the earth is round", or "yeah, OBVIOUSLY the stars are pinned to crystal spheres; they can't just fly, that would be silly"?
Anyway, the only thing we can be sure of is that we just don't know. Everything else is simply stories we tell ourselves to make us feel better. We play little games, but not all games offer the same experiences.
I make it silly because it's silly: either there's a wireless soul organ connecting us to the cloud, where the REAL thinking happens, or it's just some meat, exactly what it looks like.
What do you think the chances are of finding a small metaphysical soul organ in the brain that connects us to the ether where all the real consciousness happens?
Your whole logic is based on materialistic principles. You seem to be looking for a physical "cord" linking us to a physical "place" where consciousness "happens". You can't consider what I'm hinting if you stay in that paradigm.
What if your base assumptions are flawed? The current materialistic approach is based on flawed 15th-century intuitions, just like the previous assumptions were based on other flawed intuitions (gold can be made from any element, stars are on crystal spheres, earth is the center of the universe, &c.).
What makes you think our current assumptions are the absolute truth?
An example of how it could happen is:
- the material universe is a simulation/mechanism that is taking place in a higher level universe with unknown properties.
- conscious processes are associated with that higher level.
- the "wetware" aspects of consciousness as neural software in the brain are not fundamental and/or are maintained as part of the simulation.
I can tell you definitively that even if there is a simulation, it doesn't treat the brain specially, differently from any other matter in our universe. The brain has no special properties. It is not a radio into a higher-level universe. It's just stuff, the same as all the other stuff. No shenanigans, I guarantee it.
I don't believe it's possible to be definitive about such matters. The way I interpret your "definitive" is "I very much believe this to be so." That is well and good, but I do not.
Consider the fact that brain damage can change a person's personality. That strikes me as a powerful indication that there is not a higher level on which the consciousness lives with an interface to the wetware.
Hey, God here. Just a note that I did put a small radio in everyone's brain for communicating with higher dimensions that mere physics can never touch. When this radio gets damaged, it falls back to 802.11n, and this slower connectivity is what causes the changes in behavior and personality. It's basically a connectivity issue. No, the processing isn't going on in the brain, but you still need a good, fast connection to the soul realm, and at the moment the only way to do that is with the consciousness organ I designed. The brain is basically a thin client and the soul organ is the network card. Hope this helps.
--
Okay, so now what are the chances that I'm God and really just said that? If you said anything over 0.00000000%, you're totally wrong. There is no chance of that, because it's stupid. The above paragraph is obviously satire, because it's stupid.
Your opinions here are very arrogant. You are presuming that lack of evidence is proof of a negative. Pretty sure that's so far outside the scientific method that it's as much quackery as homeopathic "medicine".
You are presuming that the physical is all there is to existence. You fail to consider the possibility that there are portions of reality that we don't have the physical capability to perceive or the mental capability to truly understand.
There's a difference between saying "there is no evidence of X" versus "there is no evidence of X, so X is impossible"
what do you put the chances at that there's a soul organ in the brain that acts like a radio into a soul dimension, where consciousness occurs? (rather than as an emergent property of the matter, with the brain being no different than any other matter in our Universe.)
I estimate that there is a non-zero chance that consciousness itself is something we will never truly understand. Would you ever expect a piece of software to be able to truly understand the things that drive its actual consciousness, should we ever figure out how to create truly sentient AI? I don't honestly think one could, without speaking directly to their creator. And since the existence of a creator of human consciousness is purely a thing of speculation, I don't see us ever being able to do such a thing (should they exist) until we pass through what we know as death. At that point, I feel that there's a non-zero chance that our consciousness does indeed continue on in some form of existence. What that form is, where it resides, or if it even has physical properties, I don't know, and I don't think we'll ever know, until we cross the threshold of death as individuals.
I feel that consciousness itself is something non-physical. Whether it be a specific cocktail of neurotransmitters working in concert to give us the characteristics that we attribute to sentience, or a "core" form of existence that exists outside of our physical existence, I don't know, and I don't presume to know. I also don't presume I should be going around and acting like I can say with complete authority and accuracy "X doesn't exist in any way, shape, or form, because there is no evidence". I mean, what of the many other "scientific facts" humans have revised and subsequently rejected over a few millennia?
I don't object to what you've just written. I expect that even if we had conscious robots that we programmed with AI software, and which are connected to sensors and aware of themselves (similar to Boston Dynamics' humanoid and dog-like robots, if we also add in a large neural software brain), having them be conscious by obvious virtue of running software we developed/coded/ran genetic algorithms on wouldn't mean we understand that consciousness.
I'd say we would have the chance to have a much better understanding of that type of consciousness than we would our own, unless such an AI were to come about spontaneously from a long string of machine learning such that we don't have any clue about the inner machinations.
Interesting. Three points: one from science, one from spirituality, and one from literature.
1) This article talks about "causal entropic force" which can give "intentions" to inanimate object. https://www.wired.com/2017/02/life-death-spring-disorder/
2) I read somewhere that instead of figuring out whether the "self" exists or not, Buddha suggested observing and realizing how the concept of "self" arises.
3) In the book "Of Human Bondage" the author writes that the concept of self arises from pain.
> I went through Stanford CS at the peak of the 1980s expert system boom. Back then, people there were way too much into asking questions like this. "Does a rock have intentions?" was an exam question. The "AI winter" followed. AI finally got unstuck 20 years later when the machine learning people and their "shut up and calculate" approach started working.
Isn't it the other way round, though?
They were twiddling their thumbs back then because they had no other option. There was no way to do machine learning back then. I've played with perceptrons on '90s hardware and it was basically just a toy.
And then Moore's law opened the flood gates some decades later.
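For a sense of scale, here's the kind of thing that was a toy back then: a single perceptron, written here as a short present-day sketch (my own code; the learning rate and epoch count are arbitrary choices). It learns a linearly separable function like AND, but, as Minsky and Papert famously pointed out, no single perceptron can represent XOR.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: w += lr * error * x, plus a bias term."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# AND is linearly separable, so the perceptron converges on it:
f = train_perceptron(AND)
assert [f(x1, x2) for (x1, x2), _ in AND] == [0, 0, 0, 1]

# XOR is not linearly separable, so no single perceptron can fit it,
# no matter how long it trains:
g = train_perceptron(XOR)
assert [g(x1, x2) for (x1, x2), _ in XOR] != [0, 1, 1, 0]
```

The gap between this and anything useful wasn't closed by ideas alone; it took multi-layer networks plus the compute that Moore's law eventually delivered.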
I used a Cray Y-MP back then but you're still right... My iPhone is faster than that hardware now, which is pretty neat.
It wasn't so much "thumb twiddling" though, there was a lot of work being done on systems which were more focused on knowledge representation (like Cyc [1], which still exists). Also a lot of work was being done from a more Psychological direction (mental models, scripts etc) and from a physical/neuro-science direction (brainz!).
These were all happening simultaneously and it wasn't clear (partly because of the MIPS issue you mention) that ML was the winning pony (for now) and I still appreciate the broad spectrum of knowledge covered in my particular Cognitive Science program.
It seems obvious now, but back then it wasn't obvious that "AI" required a learning system at all. Knowledge Engineering was a popular approach, and rules based systems running over knowledge bases were supposed to be the path to AI.
And don't forget Minsky's decimation of neural network research at the start of the 1970s [1], which led to major research centers like MIT ignoring them completely.
Personally, I think the brain as a data-processing unit is a model that seriously limits the way many philosophers and neuroscientists think about this problem. You may be able to correlate an individual's reported experience of periwinkle down to the exact number of potassium ions crossing the cell membrane of every relevant neuron upon their acknowledgement of the color, but you will still know nothing about why it feels like something to see periwinkle.
Maybe the focus on the brain is part of the problem. Sure, neuroscience has yielded some interesting results, but consciousness is a social and behavioral phenomenon, and the brain evolved to satisfy such social and behavioral constraints. The acceptance per se of the Turing test suggests the brain may be irrelevant here.
We do. We know enough to state with absolute certainty that it's an emergent property. Nothing in your head is conscious. Nothing. Not even the whole of the human mind. It's in the "software"; it's something you learned (and therefore did not have when you were born, nor until quite a bit after that).
It's equally clear that most of what we associate with consciousness, such as thinking, awareness of the body and the moment and time and decision making and ... doesn't exist either. Because time and time again studies prove that when a decision is made (this is well studied in traffic for instance) there are no conscious reasons. Reasons only happen afterwards.
Is it therefore such a stretch to say that consciousness simply doesn't exist until long after the fact, and that it is only once we ask one of these bags of mostly water to explain themselves (or... well, when we ask them something) that any trace of consciousness, at least the way humans understand it, is actually forthcoming?
Consciousness is a trick. A learned trick. Human minds are not conscious by default, and it is most definitely not a certainty that, even when born fully formed and healthy, they will become conscious (read the reports on children raised by animals: they are old, sometimes 20 years old, and they most definitely aren't conscious, not even on the level that a primate is conscious. The 12-year-old boy found in the wild in France never learned to speak, only to articulate two words).
This is weird, because it is not most humans' experience: everyone around them has always had consciousness. But let's compare. Everybody who has kids realizes that memory, firstly, isn't actually memory. We are very much not storing events in our brain when they happen. We learn a trick, because our parents keep referring to our past and "what we've done": we learn to calculate back from our current state of mind to what happened before.
That, and of course philosophers have a millennium or three of history of getting consciousness wrong. Consciousness has at various times been equated with being religious, being able to rhyme, composing music, being able to talk and explain ourselves, being able to love, convincing a professor (via chat) that you are conscious, solving problems (of all kinds), walking around, playing chess... all of these are now of course considered wrong. Why? Mostly because things that definitely aren't conscious, from little dumb tricks, even mechanical contraptions in some cases, to rule-based engines, to deep learning and now reinforcement learning algorithms, can do these things.
So can we please just conclude that whatever this article claims is... wrong? Just wrong. Nothing of value, other than perhaps interesting a few people as a story with enough alcohol present. The current consensus seems to be that more details will be forthcoming the first time a reinforcement learning algorithm gets far enough to explain its actions. So you want to know more? Start there.
You assume that there is a physical reality "out there", because you perceive and you experience it (or you don't, in which case you are a p-zombie). The theory you are proposing is probably the most widely accepted theory in scientific circles, which is that consciousness is an emergent property of the brain, but that ultimately everything that causes this consciousness are just impersonal, physical events devoid of an inherent quality of "experiencing". When you die, that consciousness ceases to exist. Fair enough.
However, I think the article is simply suggesting to invert this assumption about physical reality. It proposes that for something to be "out there", you first need an "in here" (rather than afterwards), i.e. an experiencing of forms. This would be consciousness. At this point we are not even talking about decision making, thinking, memory, intellectual pursuits... Just subjective experience. So everything you're taking into the discussion regarding how thinking and decision making and memory happens is really a bridge further; not immediately relevant to the point the author is making.
I understand if this line of reasoning feels uncomfortable. You were literally pleading with people to conclude that this is wrong. I think that is a mistake. There is value in challenging your assumptions, even if the ramifications are only philosophical or ethical/moral.
> It's equally clear that most of what we associate with consciousness, such as thinking, awareness of the body and the moment and time and decision making and ... doesn't exist either. Because time and time again studies prove that when a decision is made (this is well studied in traffic for instance) there are no conscious reasons. Reasons only happen afterwards.
Since a person can tell what they experience and what they do not, the distinction between conscious experience and unconscious processing must have a base in physical reality (brain activity). With sufficiently advanced technology, one could analyze the brain processes and see which processes are associated with the reported conscious experience.
The fact that not all brain activity is associated with conscious experience in no way implies that conscious experience does not exist.
No, the fact that people have grown up with very limited or even arguably nonexistent consciousness, but still perfectly functional, capable and alive, means that. Functional enough to survive 7+ years by himself in a French forest. Add that these cases show that perfectly healthy and normal human minds might never achieve consciousness, or at least nothing exceeding whatever consciousness a cat or dog achieves. There was no medical problem preventing consciousness; we don't even really know what the problem was, or perhaps I should say which of the many, many problems this child faced caused it. Lack of human contact? Upbringing by wolves (assuming that actually happened)? Was it surviving by himself? Was it the water? Perhaps a forest is just a uniquely bad environment for kids? Perhaps even that specific forest? Perhaps there was a human or animal or even something else in that forest that somehow further traumatized this kid?
You'd have to give definitions of consciousness that don't include human contact, don't include language, symbols, any human other than yourself at all, or any thoughts at all not related to short-term survival, don't involve realizing you (as a human) are obviously not a wolf, ...
It also means that there is a period where you can be taught consciousness, and clearly if it doesn't happen before 7 years of age, you will never learn it.
I agree with most of your points, but I see no reason to assume that someone who cannot use language is not conscious. After all, use of language is just one of the brain’s functions. It’s not an emergent result of logical reasoning or something like that. On the contrary, it’s a task the brain evolved to perform, and has “hardware acceleration” for – that is, regions of its genetically encoded floor plan which are dedicated to it. If that accelerator is disabled due to not having been initialized properly… that’s no reason to conclude that the rest of the system is also defective.
It does seem clear that language assists consciousness in most people – e.g. most people report experiencing an internal narrative. But some people don’t. And even if everyone did, I don’t think that would be strong enough evidence to conclude that language is required for consciousness.
Given that most definitions of consciousness I've seen essentially express that you are capable of symbolic/abstract thinking, I'd say:
1) An (extreme) autistic person who doesn't speak, but can, and arguably thinks too abstractly rather than not enough: yes, conscious. Probably more conscious in some sense than "normal" people, whose consciousness is more of a group thing, or at least less independent.
(also: not speaking is a pretty extreme form of autism, certainly not something you'd see in your average school)
2) A person who grew up without ever having any reason to learn symbolic or abstract thinking? No, not conscious.
But it's going to be a sliding-scale thing. By some measures a cat and a dog are conscious because, well, because they are certainly capable of making humans think they are suffering (and therefore they both think and feel, which is the direction consciousness definitions are heading now; fish, for instance, are not). This seems to me a really bad way to define it, but it's certainly widely used.
Didn't all organic life arise from inorganic molecular structures like rocks?
As the theory of the big bang would have it, molecular structures have progressively evolved in complexity, eventually becoming so complex that the boundaries of physics and chemistry are transcended into biology, life, and consciousness.
This suggests that "rocks" -- inorganic molecular structures -- do indeed have "intentions", to the extent that they are primordial building blocks of consciousness.
Please don't say things like this, the theory of the big bang says no such thing at all. What it says is how the universe expanded from a very high-density and high-temperature state.
There are entirely separate theories to explain how that matter, after it arose, interacts with itself to give rise to chemistry. Then there is the origin of life, which is another problem. And then we get to evolution, which is how the initial life modified itself to become the species we have today. And then we have a bunch of other theories that explain how the brain operates.
Any one of these theories could be wrong, but that wouldn't invalidate any of the other ones. Some of them we have much more data and certainty on than others. But the only people who talk as if they were all the same thing are creationists, not scientists.
It is all probably interconnected, but it would be a darn shame if all of our data on how suns are created got "invalidated" by pop-sci articles whenever we find data that changes our theory of the origins of the universe.
... and prescient life. Don't forget the next stage on from sentience. The line between sentience and prescience also seems blurred, given how many humans nowadays report having flashes of the future.
There's no reason to think that the laws of physics are "perfectly" tuned for consciousness. This could be the universe where there's a trillion-to-one shot of its evolving, and we beat the odds.
The Anthropic Principle rightly points out that the question of "why does the universe support life" is fundamentally circular.
Who is suggesting that consciousness is completely dependent on having "the laws of physics perfectly tuned"? Obviously there are other conditions outside the scope of physics alone that must be met to foster life.
[1] https://hearingbrain.org/docs/letvin_ieee_1959.pdf