Neural networks are not a model of the working of the human brain. They are based on an extremely simplified approximation of how neurons connect and function (which, while conceptually similar, is a terrible predictive model for biological neurons) and are connected together in ways that bear no resemblance to how complex nervous systems look in real animals. The burden of proof here rests squarely on anyone claiming that LLMs can model the human brain.
> They are based on an extremely simplified approximation of how neurons connect and function (which, while conceptually similar, is a terrible predictive model for biological neurons) and are connected together in ways that bear no resemblance to how complex nervous systems look in real animals.
Well, then you already think it’s a model. Being a simplified approximation is precisely what makes it a model.
As I said in another comment, an SIR model also models the infection behavior of COVID in humans, even though it is extremely simplified and doesn’t even track individuals. It’s basically just a set of coupled differential equations whose solution looks like real infection curves. But that is exactly what makes it a model: a simplified abstraction.
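For concreteness, here is a minimal sketch of the classic SIR equations (dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI), integrated with a crude Euler step. The β and γ values are illustrative guesses, not fitted COVID parameters:

    # Classic SIR: three coupled ODEs over population fractions S, I, R.
    def sir_curve(beta=0.3, gamma=0.1, i0=0.01, days=160, dt=0.1):
        s, i, r = 1.0 - i0, i0, 0.0
        daily = []
        steps_per_day = round(1 / dt)
        for step in range(days * steps_per_day):
            if step % steps_per_day == 0:
                daily.append((s, i, r))
            ds = -beta * s * i             # susceptibles become infected
            di = beta * s * i - gamma * i  # infections grow, then burn out
            dr = gamma * i                 # infected recover
            s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        return daily

    # The I column rises and falls in the familiar epidemic hump.
    print(max(i for _, i, _ in sir_curve()))

Three state variables and two rate constants, no individuals anywhere, and yet I(t) reproduces the characteristic rise and fall of case counts. That is the sense in which it “models” the epidemic.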
Neural networks are much closer to modeling a brain than other approaches to AI, e.g. symbolic reasoning. There will always be differences (it's machine, not meat), but it's fair to say the approach is at least "brain-like".
Your position sounds like a No True Scotsman fallacy.
Sorry if it came across as non-falsifiable; that was not the intent.
Neural networks do not directly encode high-level reasoning and logic, yes. But on the spectrum of “does this model the actual functioning of an animal/human brain”, they lack both a first-order model of how biological neurons and neural chemistry behave and anything like the multiple levels of structural specialization present in real nervous systems. That’s the basis for my argument; the sketch below makes the first point concrete.
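To show how simplified the approximation is, here is roughly everything a standard artificial neuron computes (a sketch; the inputs, weights, and bias are made up purely for illustration):

    import math

    # A standard artificial "neuron": a weighted sum pushed through a
    # squashing function. No ion channels, no spike timing, no
    # neurotransmitters; that biological machinery is abstracted away.
    def artificial_neuron(inputs, weights, bias):
        pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-pre_activation))  # logistic activation

    print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))

Compare that to a first-order biophysical model like Hodgkin-Huxley, which needs four coupled differential equations per neuron just to track membrane voltage and ion-channel gating.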
That's true, but we also don't know that the multiple levels of structural specialization are necessary to produce "approximately human" intelligence.
Let's say two alien beings landed on earth today and want you to settle a bet. They both look weird in different ways but they seem to talk alike. One of them says "I'm intelligent, that other frood is fake. His brain doesn't have hypersynaptic gibblators!" The other says "No, I'm the intelligent one, the other frood's brain doesn't have floozium subnarblots!"
Who cares? Intelligence is that which acts intelligent. That's the point of the Turing test, and why I think it's still relevant.
I think we are arguing on different tracks, probably due to a difference in understanding of ‘model’.
There are arguments to be made, the Turing test among them, for some sort of intelligence in LLMs, and potentially even for human equivalence. I am probably more skeptical than most here that current technology is approaching human intelligence, and I believe the Turing test is in many ways a weak test. But for me that is a different, more complex discussion, one I would not be so dismissive of.
I was originally responding to the claim “isn’t a neural network a simplified model of the working of the human brain”, a claim I interpreted to mean that NNs are system models of the brain. Emphasis on “model of the working of”, as opposed to “model of the output of”.