The question isn't whether LLMs can simulate human intelligence; I think that much is well-established. Many aspects of human nature are a mystery, but a technology that by design produces random outputs based on a seed number does not meet the criteria for human intelligence.
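To be concrete about what I mean by "random outputs based on a seed number", here's a rough sketch of temperature sampling; the function name and the numbers are purely illustrative, not any real model's API:

    import numpy as np

    def sample_next_token(logits, temperature, rng):
        # Turn raw scores into a probability distribution (softmax with temperature)
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Draw the next token index from that distribution using the seeded RNG
        return rng.choice(len(probs), p=probs)

    rng = np.random.default_rng(seed=42)      # same seed -> same "random" output every run
    logits = np.array([2.0, 1.0, 0.5, -1.0])  # toy scores over a 4-token vocabulary
    print(sample_next_token(logits, temperature=0.8, rng=rng))

Change the seed and you get a different continuation; fix it and the output is fully deterministic.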
A lot of things are going to look the same when you aren't wearing your glasses. You don't even appear to be trying to describe these things in a realistic fashion. There is nothing of substance in this argument.
Look, let's say you have a black box that outputs one character at a time in a semi-random way and you don't know if there's a person sitting inside or if it's an LLM. How can you decide if it's intelligent or not?
I appreciate the philosophical direction you're trying to take this conversation, but I just don't find discussing the core subject matter in such an overly generalized manner to be stimulating.
The original argument by vineyardmike was "LLMs are a next character predictor, therefore they are not intelligent". I'm saying that as a human you can restrict yourself to being a next character predictor, yet you can still communicate intelligently. What part do you disagree with?
Yeah, I am writing word by word, but I am not predicting the next word. I thought about what I wanted to respond and am now generating the text to communicate that response; I didn't think by trying to predict what I myself would write to this question.
Your brain is undergoing some process and outputting the next word which has some reasonable statistical distribution. You're not consciously thinking about "hmm what word do I put so it's not just random gibberish" but as a whole you're doing the same thing.
From my point of view as someone reading the comment I can't tell if it's written by an LLM or not, so I can't use that to conclude if you're intelligent or not.
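If it helps, the loop I'm describing is roughly this sketch, where predict_distribution is just a placeholder for whatever process, brain or model, assigns probabilities to the next word:

    import random

    def generate(prompt, predict_distribution, max_words=10):
        words = prompt.split()
        for _ in range(max_words):
            # Get a distribution over candidate next words, then sample from it
            candidates, weights = predict_distribution(words)
            next_word = random.choices(candidates, weights=weights, k=1)[0]
            words.append(next_word)
        return " ".join(words)

    def toy_predictor(words):
        # Toy stand-in: a fixed distribution over three words
        return ["the", "cat", "sat"], [0.5, 0.3, 0.2]

    print(generate("once upon a time", toy_predictor, max_words=5))

Nothing in that loop says anything about what predict_distribution is doing internally, which is the whole point.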
"Your brain is undergoing some process and outputting the next word which has some reasonable statistical distribution. You're not consciously thinking about "hmm what word do I put so it's not just random gibberish" but as a whole you're doing the same thing.
From my point of view as someone reading the comment I can't tell if it's written by an LLM or not, so I can't use that to conclude if you're intelligent or not."
There is no scientific evidence that LLMs are a close approximation to the human brain in any literal sense. It is uncouth to critique people on the basis of what appears to be nothing more than an analogy.
I'm not sure what point you think you are making by arguing with the worst possible interpretations of our comments. Clearly intelligence refers to more than just being able to put Unicode to paper in this context. The subject matter of this thread was an LLM's inability to perform basic tasks involving analytical reasoning.
No, that's shifting the goalposts. The original claim was that LLMs cannot possibly be intelligent due to some detail of how they output the result ("smarter autocorrect").