You can call it blorbblorb if it makes you feel better. Reasoning is a social construct which, for many people, is grounded in humanity. Others ground it using other socially transmitted ontologies.
We don't usually discuss how people choose to ground their ontological beliefs, but why not? Why did you choose to ground "reasoning" in the way you do? If you didn't choose, why not?
The word "reasoning" is a "social construct," as all words are. Reasoning itself is not. Our brains do things. Reasoning is one of them. The word "reasoning" is one of the labels, the approximations, that we use when we name that activity.
Changing the label doesn't change the fact that there exists something that we're naming.
The person you're answering is asking whether reasoning -- that thing that really, actually exists -- is one of the activities LLMs perform. It's a valid question.
And the answer is that LLMs do not reason. Or if they do, we have no evidence of it and no way of verifying that the activity the LLM is performing is what we understand as reasoning (which is to say nothing of the fact that reasoning requires a reasoner). Anyone who says that LLMs reason is mistaking special effects/simulation for reality and, in essence, believes that whenever they see a picture of a dog on their computer screen, there must be a real, actual dog somewhere in the computer, too.
I hadn't thought of it that way, but when you name-dropped and indirectly questioned the honesty of people who think differently from the theorists you named, I realized that you must be onto something.
To start with, "I/you" is most of the time a meaningless or at best very ambiguous term.
Let's say that here "I" is taken as a synonym for "the present reflective attention".
Can the question "did I choose to ground reasoning?" in such a context be attached to a meaningful interpretation? And if so, is the answer reachable by the means available to "I"? Can "I" transcend "my" beliefs through contemplation of "my" own confabulations?
Throwing your hands up in the air like this doesn't help build a constructive case for using the word "reasoning". It builds a case that words mean whatever we want them to mean.
Yes, words mean whatever. See Saussure and Wittgenstein. To advance the claim that words are objective is to confuse the symbolic with the real.
This is generally regarded by engineer-types as false, but societal taboos and power structures can be revealed by noting what speech provokes the strongest reactions.
It's taboo to believe that LLMs can reason. People who believe this are systematically de-legitimized and framed as being out of or at least out of touch with reality.
This will appear as common sense or naturally true if you're inside the LLMs-can't-reason ideology.
It's not taboo, it's just ridiculous given the state of the art.
That doesn't mean that a silicon-based reasoning entity is an ontological impossibility. But if it is to become a reality, it will not necessarily be through LLMs that such an entity is spawned.
We've got your reply, which says it's not taboo and is actually common (not contradictory, lots of taboo things are common). And then we've got the other reply, which says it's not taboo because the idea is so ridiculous (implied "You'd have to be an idiot to believe it, and recognising that someone is an idiot isn't establishing a taboo").
I don't know whether it's far enough past the mark to be considered a "taboo" yet, but the other comment replying to him is certainly treating it as one. I would note that many, many other people, particularly in academia and polite society, act the same way as the other commenter. I'd also note that I have felt strong social pressure not to hold the beliefs I hold about LLMs' capacity for reasoning, including actually losing meaningful social status.
Probably worth remembering that different subcultures have different taboos.
You didn't make a case for any of that. No one did. This whole discussion is just a bunch of people who have their feelings hurt when other people tell them an LLM is modeling language, not reasoning. It's so narcissistic. "My opinion on AI is criticized so I'm oppressed."
That is not how oppression and power work. That's not how discussion works. That's not how Foucault's analysis of power works.
But, according to the paper, that's not what's happening.
It's examining published news / research / whatever (input), making statistical predictions, and then comparing (playing) them against other predictions to fine-tune the result.