You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous, while my concept of, say, triangularity is universal, determinate and exact.
Say I give you an image of a black isosceles triangle. Nothing in that image will tell you how to group those features. There is no single interpretation, so no single way to classify the image. You might design your algorithm to prefer certain ways of grouping them, but that follows from the designer's prior understanding of what he's looking at and how he wants his algorithm to classify things. If your model has been trained using only black isosceles triangles and red rhombuses, it is possible that it would classify a red right triangle as a rhombus or as something else entirely, and there would be no reason in principle to say that the classification was objectively wrong apart from the objective measure of triangularity itself. But that's precisely what the algorithm/model lacks in the first place and cannot attain in the second.
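To make that concrete, here is a toy sketch (the feature encoding, the distance measure and the numbers are entirely my own choices; nothing in the data forces them):

    # Toy sketch: a 1-nearest-neighbour "classifier" over hand-picked features.
    # The feature encoding (redness 0-255, side count) is a designer's choice.
    import math

    training = [
        ((0, 3), "triangle"),    # black isosceles triangles
        ((0, 3), "triangle"),
        ((255, 4), "rhombus"),   # red rhombuses
        ((255, 4), "rhombus"),
    ]

    def nearest_label(features):
        return min(training, key=lambda t: math.dist(features, t[0]))[1]

    # A red right triangle: the colour axis dominates the raw distance,
    # so it gets grouped with the rhombuses.
    print(nearest_label((255, 3)))  # -> "rhombus"

Rescale the colour axis and the very same data yields the opposite grouping; the choice lies with the designer, not with the image.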
Furthermore, just because your ML algorithm has grouped something successfully by your measure of correctness doesn't mean it has grasped what it essentially means to be a member of that class. The grouping is always incidental, no matter how much refinement goes into it.
Now, you might be tempted to say that human brains and minds are no different, because evolution has done to human brains what human brains do to computer algorithms and models. But that is tantamount not only to denying the existence of abstract concepts in computers, but also to denying their existence in human minds. You've effectively banished abstracta from existence, which is exactly what materialism is forced to do.
(With physical computers, things actually get worse because computers aren't objectively computing anything. There is no fact of the matter beyond the physical processes that go on in a particular computer. Computation in physical artifacts is observer relative. I can choose to interpret what a physical computer does through the lens of computation, but there is nothing in the computer itself that is objectively computation. Kripke's plus/quus paradox demonstrates this nicely.)
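The paradox can be put concretely in a few lines (a toy rendering of Kripke's example; the cutoff and the test cases are just my own picks):

    # Two different functions that agree on every case ever actually run.
    def plus(x, y):
        return x + y

    def quus(x, y):
        return x + y if x < 57 and y < 57 else 5

    observed = [(2, 3), (10, 20), (41, 7)]   # the machine's entire history so far
    print(all(plus(x, y) == quus(x, y) for x, y in observed))  # True
    print(plus(68, 57), quus(68, 57))        # 125 vs 5

Nothing in the finite physical record settles whether the machine was adding or quadding; an interpreter has to decide that.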
>You're trying to reduce abstraction to statistical pattern classification, but that doesn't work because statistical measures are inherently bounded in generality, indeterminate and ambiguous
A substrate with a statistical description can still have determinate behavior. The brain, for example, is made up of neurons that have a statistical description, yet it makes determinate decisions and presumably can grasp concepts exactly. Thresholding functions, for instance, are one mechanism that can transform a statistical process into a determinate outcome.
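A toy version of what I mean (the parameters here are arbitrary, just to show the mechanism):

    import random

    def noisy_votes(n, p_fire=0.7):
        # n units, each firing with some probability: a purely statistical process.
        return [random.random() < p_fire for _ in range(n)]

    def decide(votes, threshold=0.5):
        # A hard threshold collapses the statistical aggregate into a binary outcome.
        return sum(votes) / len(votes) > threshold

    # With enough units, the decision comes out the same way essentially every run.
    print(all(decide(noisy_votes(10_000)) for _ in range(100)))  # True

The components are noisy; the decision is not.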
>doesn't mean it has grasped what it essentially means to be a member of that class.
I don't know what this means aside from the ability to correctly identify members of that class. But there's no reason to think an ML algorithm cannot do this.
Regarding Feser and Searle, there is a lot to say. I think they are demonstrably wrong both about computation being observer relative and about computation being indeterminate[1]. Regarding computation being observer relative, it's helpful to get clear on what computation is; then it follows easily that whether a process implements a given computation is an objective fact about that process.
A computer is, at its most fundamental, an information-processing device. This means that the input state has mutual information with something in the world, the computer undergoes some physical process that transforms the input to some output, and this output has further mutual information with something in the world. The input information is transformed by the computer into different information; a computation is thus revelatory: it has the power to tell you something you didn't know previously. This is why a computer can tell me the inverse of a matrix while my wall cannot, for example. My wall is inherently non-revelatory no matter how I look at it. This definition is at odds with Searle's definition of a computer as a symbol-processing device, but mine more accurately captures what people mean when they use the terms "computer" and "compute".
This understanding of a computer is important because the concept of mutual information is mind-independent. There is a fact of the matter whether one system has mutual information with another system. Thus, that a given device is a computer, i.e., a device for meaningfully transforming mutual information, is itself a mind-independent fact.
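A toy illustration of the mutual-information point (my own numbers and a crude plug-in estimator, just to make the contrast with the wall vivid):

    import math
    from collections import Counter

    def mutual_information(pairs):
        # Plug-in estimate of I(X;Y) in bits from (input, output) samples.
        n = len(pairs)
        pxy = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
                   for (x, y), c in pxy.items())

    inputs = [0, 1, 0, 1, 1, 0, 1, 0]
    gate = [(x, 1 - x) for x in inputs]   # a NOT gate: output tracks the input
    wall = [(x, 0) for x in inputs]       # a wall: same state whatever the input

    print(mutual_information(gate))  # 1.0 bit  -- revelatory
    print(mutual_information(wall))  # 0.0 bits -- non-revelatory

Whether that mutual information is present does not depend on anyone choosing to interpret the device that way.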
P.S. An article you might find interesting in this vein, also from Feser: https://drive.google.com/file/d/0B4SjM0oabZazckZnWlE1Q3FtdGs...