
>Thought is certainly associated with change of state, but can't be reduced to it.

You can effectively reduce continuously dynamic systems to discrete steps. Sure, you can always say that the "magic" exists between the arbitrarily small steps, but from a practical POV there is no difference.

A transistor has a binary state: on or off. A neuron might have ~infinite~ levels of activation.

But in reality the ~infinite~ activation level can be modeled to arbitrary precision (for all intents and purposes), and computers have been doing this for decades now (maybe not with neurons, but with equivalent systems). It might seem like an obvious answer, that there is special magic in analog systems that binary machines cannot access, but that is wholly untrue. Science and engineering have been extremely successful at interfacing with the analog reality we live in, precisely because the digital/analog barrier isn't too big of a deal. Digital systems can do math, and math is capable of modeling analog systems, no problem.
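To make that concrete, here is a minimal sketch of the idea (the parameters are made up and nothing here is neuron-specific): a "continuous" activation approximated with ordinary floats and small discrete time steps, where shrinking the step changes nothing that matters in practice.

    # Discrete-time approximation of the continuous dynamics dv/dt = (input - v) / tau
    dt = 0.001    # step size in seconds; make it smaller and the answer barely moves
    tau = 0.02    # time constant of the "analog" unit
    v = 0.0       # activation level, a plain float standing in for an analog quantity

    for _ in range(1000):            # one simulated second of constant drive = 1.0
        v += dt * (1.0 - v) / tau    # Euler step

    print(v)      # ~1.0, the steady state the continuous equation predicts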




It's not a question of discrete vs continuous, or digital vs analog. Everything I've said could also apply if a transistor could have infinite states.

Rather, the point is that the state of our brain is not the same as the content of our thoughts. They are associated with one another, but they're not the same. And the correctness of a thought can be judged only by reference to its content, not to its associated state. 2+2=4 is correct, and 2+2=5 is wrong; but we know this through looking at the content of these thoughts, not through looking at the neurological state.

But the state of the transistors (and other components) is all a computer has. There are no thoughts, no content, associated with these states.


It seems that the only barrier between brain state and thought contents is a proper measurement tool and decoder, no?

We can already do this at an extremely basic level, mapping brain states to thoughts: the paraplegic patient using their thoughts to move a mouse cursor, or the neuroscientist mapping stress to brain patterns.

If I am understanding your position correctly, the differentiation between thoughts and brain states is a practical problem, not a fundamental one. Ironically, LLMs have a very similar problem: it is very difficult to correlate model states with model outputs. [1]

[1] https://www.anthropic.com/research/mapping-mind-language-mod...


There is undoubtedly correlation between neurological state and thought content. But they are not the same thing. Even if, theoretically, one could map them perfectly (which I doubt is possible but it doesn't affect my point), they would remain entirely different things.

The thought that "2+2=4", or the thought "tiger", are not the same thing as the brain states that make them up. A tiger, or the thought of a tiger, is different from the neurological state of a brain that is thinking about a tiger. And as stated before, we can't say that "2+2=4" is correct by referring to the brain state associated with it. We need to refer to the thought itself to do this. It is not a practical problem of mapping; it is that brain states and thoughts are two entirely different things, however much they may correlate, and whatever causal links may exist between them.

This is not the case for LLMs. Whatever problems we may have in recording the state of the CPUs/GPUs are entirely practical. There is no 'thought' in an LLM, just a state (or plurality of states). An LLM can't think about a tiger. It can only switch on LEDs on a screen in such a way that we associate the image/word with a tiger.


> The thought that "2+2=4", or the thought "tiger", are not the same thing as the brain states that make them up.

Asserted without evidence. Yes, this does represent a long and occasionally distinguished line of thinking in cognitive science/philosophy of mind, but it is certainly not the only one, and some of the others categorically refute this.


Is it your contention that a tiger may be the same thing as a brain state?

It would seem to me that any coherent philosophy of mind must accept their being different as a datum; or conversely, any that implied their not being different would have to be false.

EDIT: my position has been held -- even taken as axiomatic -- by the vast majority of philosophers, from the pre-Socratics onwards, and into the 20th century. So it's not some idiosyncratic minority position.


Clearly there is a thing in the world that is a tiger independently of any brain state anywhere.

But the thought of a tiger may in fact be identical to a brain state (or it might not; at this point we do not know).


Given that a tiger is different from a brain state:

If I am thinking about a tiger, then what I am thinking about is not my brain state. So that which I am thinking about is different from (as in, cannot be identified with) my brain state.


> What I am thinking about is not my brain state

Obviously the thing you are thinking about is not the same as your thinking about it, nor the same as your brain state when thinking about it. Thinking about a thing is necessarily and definitionally distinct from the thing.

The question, however, is whether there is anything to "thinking about a thing" other than the brain state you have when doing so. This is unknown at this time.


Earlier upthread, I said

>> the thought "tiger" [is] not the same thing as the brain state that makes [it] up.

To which you said

> Asserted without evidence.

This was in the context of my saying

>> There is undoubtedly correlation between neurological state and thought content. But they are not the same thing.

Now you say

> the thing you are thinking about is not the same as your thinking about it, nor the same as your brain state when thinking about it.

Are we at least agreed that the content of the thought "tiger" is not the same thing as the brain state that makes it up?

> The question, however, is whether there is anything to "thinking about a thing" other than the brain state you have when doing so. This is unknown at this time.

If a tiger is distinct from a brain state, which I think we agree on, and if our thoughts are about real things such as tigers, which I assume we agree on, then how can there not be more to thought than the associated brain state?


> Are we at least agreed that the content of the thought "tiger" is not the same thing as the brain state that makes it up?

No. I don't agree that "the content of [a] thought" is something we can usefully talk about in this context.

Thoughts are subjective experiences, more or less identical to qualia. Thinking about a tiger is actually having the experience of thinking about a tiger, and this is purely subjective, like all qualia. The only question I can see worth asking about it is whether the experience of thinking about a tiger has some component to it that is not part of a fully described brain state.

> If a tiger is distinct from a brain state, which I think we agree on, and if our thoughts are about real things such as tigers,

We also have thoughts about unreal things. I don't see why such thoughts should be any different than the ones we have about real things.


>> If a tiger is distinct from a brain state, which I think we agree on, and if our thoughts are about real things such as tigers, which I assume we agree on, then how can there not be more to thought than the associated brain state?

> We also have thoughts about unreal things. I don't see why such thoughts should be any different than the ones we have about real things.

Let me rephrase then:

If a tiger is distinct from a brain state, which I think we agree on, and if our thoughts can be about real things such as tigers, which I assume we agree on, then how can there not be more to thought than the associated brain state?

A brain state does not refer to a tiger.


I realize I'm butting in on an old debate, but thinking about this caused me to come to conclusions which were interesting enough that I had to write them down somewhere.

I'd argue that rather than thoughts containing extra content which doesn't exist in brain states, it's more the case that brain states contain extra content which doesn't exist in thoughts. Specifically, I think that "thoughts" are a lossy abstraction that we use to reason about brain states and their resulting behaviors, since we can't directly observe brain states and reasoning about them would be very computationally intensive.

As far as I've seen, you have argued that thoughts "refer" to real things, and that thoughts can be "correct" or "incorrect" in some objective sense. I'll argue against the existence of a singular coherent concept of "referring", and also that thoughts can be useful without needing to be "correct" in some sense which brain states cannot participate in. I'll be assuming that something only exists if we can (at least in theory if not in practice) tie it back to observable behavior.

First, I'll argue that the "refers" relation is a pretty incoherent concept which sometimes happens to work. Let us think of a particular person who has a thought/brain state about a particular tiger in mind/brain. If the person has accurate enough information about the tiger, then they will recognize the tiger on sight, and may behave differently around that tiger than other tigers. I would say in this case that the person's thoughts refer to the tiger. This is the happy case where the "refers" relation is a useful aid to predicting other people's behavior.

Now let us say that the person believes that the tiger ate their mother, and that the tiger has distinctive red stripes. However, let it be the case that the person's mother was eaten by a tiger, but that tiger did not have red stripes. Separately, there does exist a singular tiger in the world which does have red stripes. Which tiger does the thought "a tiger with red stripes ate my mother" refer to?

I think it's obvious that this thought doesn't coherently refer to any tiger. However, that doesn't prevent the thought from affecting the person's behavior. Perhaps the person's next thought is to "take revenge on the tiger that killed my mother". The person then hunts down and kills the tiger with the red stripes. We might be tempted to believe that this thought refers to the mother-killing tiger, but the person has acted as though it referred to the red-striped tiger. However, it would be difficult to say that the thought refers to the red-striped tiger either, since the person might not kill the red-striped tiger if they happen to learn said tiger has an alibi. Hopefully this is sufficient to show that the "refers" relationship isn't particularly connected to observable behavior in many cases where it seems like it should be. The connection would exist if everyone had accurate and complete information about everything, but that is certainly not the world we live in.

I can't prove that the world is fully mechanical, but if we assume that it is, then all of the above behavior could in theory be predicted by just knowing the state of the world (including brain states but not thoughts) and stepping a simulation forward. Thus the concept of a brain state is more helpful for predicting the person's behavior than thoughts with a singular concept of "refers". We might be able to split the concept of "referring" up into other concepts for greater predictive accuracy, but I don't see how this accuracy could ever be greater than just knowing the brain state. Thus if we could directly observe brain states and had unlimited computational power, we probably wouldn't bother with the concept of a "thought".

Now then, on to the subject of correctness. I'd argue that thoughts can be useful without needing a central concept of correctness. The mechanism is the very category-theory-like concept of considering all things only in terms of how they relate to other things, and then finding other (possibly abstract) objects which have the same set of relationships.

For concreteness, let us say that we have piles of apples and are trying to figure out how many people we can feed. Let us say that today we have two piles each consisting of two apples. Yesterday we had a pile of four apples and could feed two people. The field of appleology is quite new, so we might want to find some abstract objects in the field of math which have the same relationship. Cutting-edge appleology research shows that as far as hungry people are concerned, apple piles can be represented with natural numbers, and taking two apple piles and combining them results in a pile equivalent to adding the natural numbers associated with the piles being combined. We are short on time, so rather than combining the piles, we just think about the associated natural numbers (2 and 2), and add them (4) to figure out that we can feed two people today. Thus the equation (2+2=4) was useful because pile 1 combined with pile 2 is related to yesterday's pile in the same way that 2 + 2 relates to 4.
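A toy sketch of that correspondence (the names and numbers are purely illustrative): combining piles in the world relates to adding their associated natural numbers, so we can reason about the numbers instead of physically merging the piles.

    # The "appleology" mapping: a pile corresponds to a natural number,
    # and combining piles corresponds to adding those numbers.
    def size(pile):
        return len(pile)          # the map from piles to numbers

    pile_a = ["apple", "apple"]
    pile_b = ["apple", "apple"]

    combined = pile_a + pile_b    # do it in the world...
    assert size(combined) == size(pile_a) + size(pile_b)    # ...or do it in math

    apples_per_person = 2         # yesterday a pile of four apples fed two people
    print(size(combined) // apples_per_person)               # -> 2 people fed today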

Math is "correct" only in so far as it is consistent. That is, if you can arrive at a result using two different methods, you should find that the result is the same regardless of the method chosen. Similarly, reality is always consistent, because assuming that your behavior hasn't affected the situation, (and what is considered the situation doesn't include your brain state) it doesn't matter how or even if you reason about the situation, the situation just is what it is. So the reason math is useful is because you can find abstract objects (like numbers) which relate to each other in the same way as parts of reality (like piles of apples). By choosing a conventional math, we save ourselves the trouble of having to reason about some set of relationships all over again every time that set of relationships occurs. Instead we simply map the objects to objects in the conventional math which are related in the same manner. However, there is no singular "correct" math, as can be shown by the fact that mathematics can be defined in terms of set theory + first order logic, type theory, or category theory. Even an inconsistent math such as set theory before Russell's Paradox can still often produce useful results as long one's line of reasoning doesn't happen to trip on the inconsistency. However, tripping on an inconsistency will produce a set of relationships which cannot exist in the real world, which gives us a reason to think of consistent maths as being "correct". Consistent maths certainly are more useful.

Brain states can also participate in this model of correctness though. Brain states are related to each other, and if these relationships are the same as the relationships between external objects, then the relationships can be used to predict events occurring in the world. One can think of math and logic as mechanisms to form brain states with the consistent relationships needed to accurately model the world. As with math though, even inconsistent relationships can be fine as long as those inconsistencies aren't involved in reasoning about a thing, or predicting a thing isn't the point (take scapegoating for instance).

Sorry for the ramble. I'll summarize:

TL;DR: Thoughts don't contain "refers" and "correctness" relationships in any sense that brain states can't. The concept of "refers" is only usable to predict behavior if people have accurate and complete information about the things they are thinking about. However, brain states predict behavior regardless of how accurate or complete the person's information is. The concept of "correctness" in math/logic really just means that the relationship between mathematical objects is consistent. We want this because the relationships between parts of reality seem to be consistent, and so if we desire the ability to predict things using abstract objects, the relationships between abstract objects must be consistent as well. However, brain states can also have consistent patterns of relationships, and so can be correct in the same sense.


Thanks for the response. I don't know if I'll have time to respond, I may, but in any case it's always good to write one's thoughts down.


Does a picture of a tiger or a tiger (to follow your sleight of hand) on a hard drive then count as a thought?


No. One is paint on canvas, and the other is part of a causal chain that makes LEDs light up in a certain way. Neither the painting nor the computer has thoughts about a tiger in the way we do. It is the human mind that makes the link between picture and real tiger (whether on canvas or on a screen).


>Rather, the point is that the state of our brain is not the same as the content of our thoughts.

Based on what, exactly? This is just an assertion, one that doesn't seem to have much in the way of evidence. 'It's not the same, trust me bro' is the thesis of your argument. Not very compelling.


It's not difficult. When you think about a tiger, you are not thinking about the brain state associated with said thought. A tiger is different from a brain state.

We can safely generalize, and say the content of a thought is different from its associated brain state.

Also, as I said

>> The correctness of a thought can be judged only by reference to its content, not to its associated state. 2+2=4 is correct, and 2+2=5 is wrong; but we know this through looking at the content of these thoughts, not through looking at the neurological state.

This implies that state != content.


>It's not difficult. When you think about a tiger, you are not thinking about the brain state associated with said thought. A tiger is different from a brain state. We can safely generalize, and say the content of a thought is different from its associated brain state.

Just because you are not thinking about a brain state when you think about a tiger does not mean that your thought is not a brain state.

Just because the experience of thinking about X doesn't feel like the experience of thinking about Y (or doesn't feel like the physical process Z), it doesn't logically follow that the mental event of thinking about X isn't identical to or constituted by the physical process Z. For example, seeing the color red doesn't feel like processing photons of a specific wavelength with cone cells and neural pathways, but that doesn't mean the latter isn't the physical basis of the former.

>> The correctness of a thought can be judged only by reference to its content, not to its associated state. 2+2=4 is correct, and 2+2=5 is wrong; but we know this through looking at the content of these thoughts, not through looking at the neurological state. This implies that state != content.

Just because our current method of verification focuses on content doesn't logically prove that the content isn't ultimately realized by or identical to a physical state. It only proves that analyzing the state is not our current practical method for judging mathematical correctness.

We judge if a computer program produced the correct output by looking at the output on the screen (content), not usually by analyzing the exact pattern of voltages in the transistors (state). This doesn't mean the output isn't ultimately produced by, and dependent upon, those physical states. Our method of verification doesn't negate the underlying physical reality.

When you evaluate "2+2=4", your brain is undergoing a sequence of states that correspond to accessing the representations of "2", "+", "=", applying the learned rule (also represented physically), and arriving at the representation of "4". The process of evaluation operates on the represented content, but the entire process, including the representation of content and rules, is a physical neural process (a sequence of brain states).


> Just because you are not thinking about a brain state when you think about a tiger does not mean that your thought is not a brain state.

> It doesn't logically follow that the mental event of thinking about X isn't identical to or constituted by the physical process Z.

That's logically sound insofar as it goes. But firstly, the existence of a brain state for a given thought is, obviously, not proof that a thought is a brain state. Secondly, if you say that a thought about a tiger is a brain state, and nothing more than a brain state, then you have the problem of explaining how it is that your thought is about a tiger at all. It is the content of a thought that makes it be about reality; it is the content of a thought about a tiger that makes it be about a tiger. If you declare that a thought is its state, then it can't be about a tiger.

You can't equate content with state, nor can you make content reducible to state, without absurdity. The first implies that a tiger is the same as a brain state; the second implies that you're not really thinking about a tiger at all.

Similarly for arithmetic. It is only the content of a thought about arithmetic that makes it be right or wrong. It is our ideas of "2", "+", and so on, that make the sum right or wrong. The brain states have nothing to do with it. If you want to declare that content is state, and nothing more than state, then you have no way of saying the one sum is right, and the other is wrong.





