There is no reason at all to believe that cognition can be represented as a mathematical function.
We don't even know whether the flow of water in a river can always be described by well-behaved mathematical functions; this is one of the Millennium Prize Problems (Navier–Stokes existence and smoothness). And we've known the partial differential equations that govern that system since the 1850s.
We are far, far away from even being able to write down anything resembling a mathematical description of cognition, let alone being able to say whether the solutions to that description are in the class of Lebesgue-integrable functions.
The flow of a river can be approximated with the Navier–Stokes equations. We might not be able to say with certainty that a given solution is exact, but it's a useful approximation nonetheless.
There was, past tense, no reason to believe cognition could be represented as a mathematical function. LLMs with RLHF are forcing us to question that assumption. I would agree that we are a long way from a rigorous mathematical definition of human thought, but in the meantime that doesn't reduce the utility of approximate solutions.
I'm sorry but you're confusing "problem statement" with "solution".
The Navier-Stokes equations are a set of partial differential equations - they are the problem statement. Given some initial and boundary conditions, we can find (approximate or exact) solutions, which are functions. But we don't know that these solutions are always Lebesgue integrable, and if they are not, neural nets will not be able to approximate them.
This is just a simple example from well-understood physics showing that neural nets won't always be able to give approximate descriptions of reality.
There are even strong inapproximability results for some problems, like set cover.
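For concreteness, here's a minimal sketch of the standard greedy heuristic for set cover (the toy instance is my own, for illustration). Greedy achieves roughly a ln(n) approximation factor, and the inapproximability results say no polynomial-time algorithm can do substantially better unless P = NP:

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic: repeatedly pick the subset covering the most
    still-uncovered elements. Guarantees only a ~ln(n) approximation."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("instance has no cover")
        cover.append(best)
        uncovered -= best
    return cover

# Hand-built instance: the optimum is 2 sets ({1,3,5} and {2,4,6}),
# but greedy grabs the biggest set first and ends up needing 3.
universe = {1, 2, 3, 4, 5, 6}
subsets = [{1, 2, 3, 4}, {1, 3, 5}, {2, 4, 6}, {5}, {6}]
print(len(greedy_set_cover(universe, subsets)))  # 3
```

Even on this tiny instance the heuristic overshoots the optimum, which is the whole point: "approximate" can be provably, unavoidably loose.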
"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and pragmatically the answer to "how approximate?" can be "not very".
This is absurd. If you can mathematically model atoms, you can mathematically model any physical process. We might not have the computational resources to do it well, but nothing in principle puts modeling what's going on in our heads beyond the reach of mathematics.
A lot of people who argue that cognition is special to biological systems seem to base the argument on our inability to accurately model the detailed behavior of neurons. And yet kids regularly build universal computers out of stuff in Minecraft. It seems strange to imagine the response characteristics of low-level components of a system determine whether it can be conscious.
I'm not saying that we won't be able to eventually mathematically model cognition in some way.
But GP specifically says neural nets should be able to do it because they are universal approximators (of Lebesgue-integrable functions).
I'm saying this is clearly a nonsense argument, because there are much simpler physical processes than cognition where the answers are not Lebesgue-integrable functions, so we have no guarantee that neural networks will be able to approximate the answers.
For cognition we don't even know the problem statement, and maybe the answers are not functions over the real numbers at all, but graphs or matrices or Markov chains or what have you. Then having universal approximators of functions over the real numbers is useless.
I don't think he means practically, but theoretically. Unless you believe in a hidden dimension, the brain can be represented mathematically. The question is: will we be able to do it in practice? That's what these companies (e.g., OpenAI) are trying to answer.