
> But abstract things can supervene on the physical. Information, for example, is abstract, but it supervenes on some physical stuff. Granted, information is not identical to any particular instantiation, but the abstract pattern can be manifested by a particular physical instantiation. You're welcome to call information immaterial if you like, but it presents no metaphysical difficulties for physicalism.

Example? The word "information" is often used in a magical way. Patterns need not be immaterial, and I never argued that they are, but I really don't know what you mean by "information". (FWIW, "supervene" is another one of those terms.)

> Why are non-materialists so fucking angry? Incivility doesn't help your cause. If your arguments were good they would stand on their own without embellishment.

Pot to kettle? Look, there's a history here that maybe you're not privy to. Eliminativists and other materialists have consistently refused to address these fundamental problems while simultaneously ridiculing and dismissing anyone who doesn't agree with them. So you'll have to forgive me for being "uncivil". After a while, it's hard not to conclude that we're dealing with willful ignorance or intellectual dishonesty.



Information is the state or configuration of one system that tells you something about another system. The pixels on your screen contain information about the state of my brain because the particular pattern of lights communicates the thoughts in my head. Information is abstract because it is independent of the medium: the pixels on the screen, pressure waves in the air, marks on paper, etc., can all be used to carry the same information.

Supervene means something is constituted by the configuration of some substance. Or the more common definition: A supervenes on B if there can be no change in A without a corresponding change in B.

>Eliminativists and other materialists have consistently refused to address these fundamental problems

I admit that people have reason to be frustrated with certain materialists, Dennett chief among them. I have my share of frustrations with him as well. But there's a trend I see among non-materialists (both online and among professionals) of showing active disdain for materialism/physicalism, and it is entirely unhelpful. Ultimately we're all just trying to solve one of the hardest problems of all. Genuine efforts to move the conversation forward should be welcomed. Intractable disagreement just points towards the need for better arguments.


Okay, so intentionality is essential to information. Let's take your example of the pixels on your screen.

There is nothing intrinsic to those pixels or that arrangement of pixels that points to the state of your brain. That doesn't mean there isn't a causal history whose effect is those physical pixel states. But how those pixels are arranged by the designers, and how they must be interpreted, is entirely a matter of convention. You must bring with you the hermeneutic baggage, so to speak, that allows you to interpret those pixels in the manner the designers intended. Those same pixels would signify something else in a different context, and it is the observer who needs the contextual information to interpret them in conformity with the designers' intentions. Furthermore, the designers of the program could have chosen to light up different pixels to convey the same information. They could instead have caused those pixels to resemble, in aggregate, English sentences that, when interpreted, describe the state of your brain.

But there is nothing about those pixels qua pixels that can tell you anything about your brain state. The meaning of each pixel is just that it is a pixel in a particular state, and the meaning of the aggregate of pixels is that they are an aggregate of pixels, each in a particular state. You can call that supervenience, in that the meaning of the aggregate follows from the meanings of the individual constituting pixels, but none of that changes the fact that the pixel states as such, whether individually or in aggregate, do not intrinsically mean your brain state.

This is analogous to written text. A human actor with some meaning in mind causes blobs of ink to be arranged in some way on paper in accordance with some convention. Those blobs of ink are just blobs of ink, no matter how many there are or how they're arranged. The reader, which is to say the interpreter, must bring with him a mental dictionary of conventions (a grammar) that relates symbols and arrangements of symbols to meanings in order to reconstruct the meaning the author intended. The meaning (or information) is in no way in the text, even if the text influences what meaning the interpreter attaches to it.

As Feser notes[0], Searle calls this derived intentionality which is different from intrinsic intentionality (thoughts are one example of the latter). So I do not agree that anything abstract is happening in your panel of flashing lightbulbs.

[0] https://edwardfeser.blogspot.com/2010/08/fodors-trinity.html


>Searle calls this derived intentionality which is different from intrinsic intentionality

But what makes derived intentionality not abstract? What definition of abstract are you using that excludes derived intentionality while including intrinsic intentionality?

But let's look more closely at the differences between derived and intrinsic intentionality. Derived intentionality is some relation that picks out a target in a specified context. E.g., a binary bit in my phone picks out heads/tails or day/night depending on the context set up by the programmer. Essentially, the laws of physics are exploited to create a system where some symbol, in the right context, stands in a certain relation to the intended entities. We can boil this process down to a ball rolling down a hill: taking one track rather than another picks between two objects at the bottom of the hill.
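As a toy sketch of that point (the names and contexts here are hypothetical, not from any real system): the same physical bit only "picks out" anything relative to a decoding convention supplied from outside.

```python
# Hypothetical sketch: the same physical bit "means" different things
# depending on a context (a decoding convention) supplied externally.

def interpret(bit, context):
    """Map a raw bit to a referent using an externally supplied convention."""
    return context[bit]

coin_context = {0: "tails", 1: "heads"}
time_context = {0: "night", 1: "day"}

bit = 1  # the physical state itself is just a 1
print(interpret(bit, coin_context))  # "heads" under one convention
print(interpret(bit, time_context))  # "day" under another
```

Nothing in the bit itself favors one reading over the other; the reference is fixed entirely by which context dictionary the programmer wires in.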

How does intrinsic intentionality fare? Presumably the idea is that such a system picks out the intended object without any external context needed to establish the reference. But is such a system categorically different than the derived sort? It doesn't seem so. The brain relies on the laws of physics to establish the context that allows signals to propagate along specific circuits. The brain also stands in specific relation to external objects such that the necessary causal chains can be established for concepts to be extracted from experience. Without this experience there would be no reference and no intentionality. So intrinsic intentionality of this sort has an essential dependence on an externally specified context.

But what about sensory concepts and internal states? Surely my experience of pain intrinsically references damaging bodily states, as seen by my unlearned but competent behavior in the presence of pain, e.g. avoidance behaviors. But this reference didn't form in a vacuum. We represent a billion years of computation, in the form of evolution, crafting specific organizing principles in our bodies and brains that entail competent behavior in response to sensory stimuli. If there is a distinction between intrinsic and derived intentionality, it is not categorical. It is simply due to the right computational processes having created the right organizing principles to allow for it.


An essential feature of abstract things is that they do not exist independently and in their own right. For example, this chair or that man (whose name is John) are concrete objects, but the concepts "chair" and "man" are abstract; they do not exist in themselves as such. The same can be said for something like "brown", an attribute that, let's say, is instantiated both by the chair and by John in some way, but which cannot exist by itself as such. So we can say that "chair", "man" and "brown" all exist "in" these concrete things (or, more precisely, determine these things to be those things or to be those ways).

However, apart from the things that instantiate them, these forms also exist somewhere else, namely in the intellect. And they exist in our intellects without being instantiated there; otherwise, we would literally have to have a chair or a man or something brown in our intellects the moment we thought of these things. So you have a problem: you have a kind of substratum in which these forms can exist without being those things. That does not sound like matter, because when those forms exist in matter, they always exist as concrete instantiations of those things.

W.r.t. derived intentionality, the relation that obtains here between a signifier and the signified is in the mind of the observer. When you read "banana", you know what I mean because the concept, in all its intrinsic intentionality and semantic content, already exists in your intellect, and you have learned that that string of symbols is meant to refer to that concept. I could, however, take a non-English speaker and mischievously teach them that "banana" refers to what you and I would use the term "apple" to mean. No intrinsic relation exists between the signifier and the concept. However, there is an intrinsic relation that obtains between concepts and their instantiations: the concept "banana" is what it means to be a banana. So derived intentionality involves two relations, namely, one between the signifier and the concept (which is a matter of arbitrary convention) and another between the concept and the signified, which necessarily obtains between the two. Derived intentionality is parasitic on intrinsic intentionality. The former requires the latter.

So when we say that computers do not possess concepts (i.e., abstract things), only derived intentionality, we mean that computers are, for all intents and purposes, syntactic machines composed of symbols and symbol-manipulation rules. (I would go further and say that what this describes is really an abstract computing model like a Turing machine, whereas physical computers are merely used to simulate these abstract machines.)

Now, my whole point earlier was that if we presuppose a materialist metaphysical account of matter, we will be unable to account for intrinsic intentionality. This is a well-known problem. And if we cannot account for intrinsic intentionality, then we certainly cannot make sense of derived intentionality.


Your description of abstract things sounds like a dressed-up version of something fairly mundane. (This isn't to say that your description is deficient, but rather that the concept is ultimately fairly mundane.) So I gathered three essential features of abstract objects: (1) they do not exist independently, (2) they exist in the intellect, (3) they exist in the things that instantiate them.

Given this definition, there is a universe of potential abstracta, owing to the many possible ways to categorize objects and their dynamics. Abstracta are essentially "objects of categorization" that relate different objects by their similarity along a particular set of dimensions. Chairs belong to the category "chair" by sharing some particular set of features, for example. The abstract object (concept) here is "chair", which is instantiated by every instance of a chair; the relation between abstract and particular is two-way. Minds are relevant because they are the kinds of things that identify such categorizations of objects along a set of criteria; thus abstracta "exist in the intellect".

You know where else these abstracta exist? In unsupervised machine learning algorithms. An algorithm that automatically categorizes images based on whatever relevant features it discovers has the power of categorization, which presumably is the characteristic property of abstracta. Thus the abstracta also exist within the computer system running the ML algorithm. But these abstracta seem to satisfy your criteria for intrinsic intentionality (if we don't beg the question against computer systems). The relation between the ML system and the abstracta is independent of a human to fix the reference. Yes, the algorithm was created by a person, but he did not specify what relations are formed, and he does not fix the reference between the concepts discovered by the algorithm and the things in the world. This is analogous to evolution creating within us the capacity to independently discover abstract concepts.
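As a rough illustration of unsupervised categorization (a minimal 1-D k-means sketch with toy numbers, not any particular ML library): the algorithm receives unlabeled data and discovers the grouping itself; no human specifies which point belongs to which category.

```python
# Minimal unsupervised categorization sketch: 1-D k-means with k=2.
# No labels are given; the grouping is discovered from the data alone.
# (Assumes well-separated data so neither cluster ends up empty.)

def kmeans_1d(points, iters=10):
    centers = [min(points), max(points)]  # crude, deterministic initialization
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[i].append(p)
        centers = [sum(g) / len(g) for g in groups]  # recenter on each group
    labels = [0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
              for p in points]
    return centers, labels

data = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]  # two unlabeled "kinds" of thing
centers, labels = kmeans_1d(data)
print(labels)  # [0, 0, 0, 1, 1, 1]
```

The two categories emerge from the structure of the data; the programmer wrote the procedure but never stipulated what the resulting groups refer to.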

(Just to preempt a reference to Searle's Chinese room argument, I believe his argument is fatally flawed: https://news.ycombinator.com/item?id=23182928)


You're trying to reduce abstraction to statistical pattern classification, but that doesn't work, because statistical measures are inherently bounded in generality, indeterminate, and ambiguous, while my concept of, say, triangularity is universal, determinate, and exact.

Say I give you an image of a black isosceles triangle. Nothing in that image will tell you how to group its features. There is no single interpretation, no single way to classify the image. You might design your algorithm to prefer certain ways of grouping them, but that follows from the designer's prior understanding of what he's looking at and how he wants his algorithm to classify things. If your model has been trained using only black isosceles triangles and red rhombuses, it is possible that it would classify a red right triangle as a rhombus, or as an entirely different thing, and there would be no reason in principle to say that the classification was objectively wrong, apart from the objective measure of triangularity itself. But that is precisely what the algorithm/model lacks in the first place and cannot attain in the second.

Furthermore, just because your ML algorithm has grouped something successfully by your measure of correctness doesn't mean it's grasped essentially what it means to be a member of that class. The grouping is always incidental no matter how much refinement goes into it.

Now, you might be tempted to say that human brains and minds are no different because evolution has done to human brains what human brains do to computer algorithms and models. But that is tantamount not only to denying the existence of abstract concepts in computers, but also their existence in human minds. You've effectively banished abstracta from existence which is exactly what materialism is forced to do.

(With physical computers, things actually get worse because computers aren't objectively computing anything. There is no fact of the matter beyond the physical processes that go on in a particular computer. Computation in physical artifacts is observer relative. I can choose to interpret what a physical computer does through the lens of computation, but there is nothing in the computer itself that is objectively computation. Kripke's plus/quus paradox demonstrates this nicely.)
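For readers unfamiliar with Kripke's example, a sketch (using his threshold of 57): every finite record of past computations is consistent with both "plus" and "quus", so nothing in a device's physical history singles out which function it is "really" computing.

```python
# Kripke's "quus": agrees with plus on every case computed so far,
# diverges only beyond a bound never yet encountered.

BOUND = 57  # Kripke's example threshold

def quus(x, y):
    return x + y if x < BOUND and y < BOUND else 5

# Any finite record of small computations is consistent with both functions:
assert all(quus(x, y) == x + y for x in range(10) for y in range(10))

print(quus(68, 57))  # 5 -- the divergence that no finite history rules out
```

The point is not that quus is a plausible rival meaning, but that the physical facts alone (a finite history of input/output behavior) underdetermine which function is meant.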

P.S. An article you might find interesting in this vein, also from Feser: https://drive.google.com/file/d/0B4SjM0oabZazckZnWlE1Q3FtdGs...


>You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous

A substrate with a statistical description can still have determinate behavior. The brain, for example, is made up of neurons that have a statistical description, but it makes determinate decisions, and presumably can grasp concepts exactly. Thresholding functions, for example, are one mechanism that can transform a statistical process into a determinate outcome.
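A minimal sketch of that mechanism (the numbers here are invented for illustration): averaging many noisy samples and thresholding the result yields a determinate binary outcome from a statistical substrate.

```python
# Sketch: a noisy (statistical) substrate yielding a determinate output.
import random

def noisy_neuron(signal, threshold=0.5, trials=1000, seed=0):
    """Average many noisy samples of the signal, then threshold:
    the final decision is binary, despite the noise underneath."""
    rng = random.Random(seed)
    total = sum(signal + rng.gauss(0, 0.1) for _ in range(trials))
    return 1 if total / trials > threshold else 0

print(noisy_neuron(0.7))  # 1: fires, despite per-sample noise
print(noisy_neuron(0.3))  # 0: stays silent, despite per-sample noise
```

Averaging shrinks the noise, so the thresholded outcome is stable even though no individual sample is.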

>doesn't mean it's grasped essentially what it means to be a member of that class.

I don't know what this means aside from the ability to correctly identify members of that class. But there's no reason to think an ML algorithm cannot do this.

Regarding Feser and Searle, there is a lot to say. I think they are demonstrably wrong about computation being observer-relative and about whether computation is indeterminate[1]. Regarding computation being observer-relative, it's helpful to get clear on what computation is; then it easily follows that a computation is an objective fact about a process.

A computer is, at its most fundamental, an information-processing device. This means that the input state has mutual information with something in the world, the computer undergoes some physical process that transforms the input to some output, and this output has further mutual information with something in the world. The input information is transformed by the computer into different information; thus a computation is revelatory: it has the power to tell you something you didn't know previously. This is why a computer can tell me the inverse of a matrix while my wall cannot, for example. My wall is inherently non-revelatory no matter how I look at it. This definition is at odds with Searle's definition of a computer as a symbol-processing device, but my definition more accurately captures what people mean when they use the terms "computer" and "compute".

This understanding of a computer is important because the concept of mutual information is mind-independent. There is a fact of the matter whether one system has mutual information with another system. Thus a computer, which is fundamentally a device for meaningfully transforming mutual information, is mind-independent.
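Mutual information does have a standard, observer-independent definition; a small sketch computing it from a joint distribution (toy distributions, chosen for illustration):

```python
# Mutual information I(X;Y) computed from a joint distribution.
# Its value is fixed by the distribution alone; no observer is needed.
from math import log2

def mutual_information(joint):
    """joint: dict mapping (x, y) -> probability."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A perfectly correlated bit carries exactly 1 bit about its source;
# an independent bit carries none:
correlated = {(0, 0): 0.5, (1, 1): 0.5}
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

Whether the correlation exists is a fact about the two systems, which is the sense in which the quantity is mind-independent.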

[1]https://www.reddit.com/r/askphilosophy/comments/bviafb/what_...




