I can prove LLMs can reason. You cannot prove LLMs can't reason. This is easily demonstrable. LLMs failing to reason is not proof LLMs can't reason, it's just proof that an LLM didn't reason for that prompt.
All I have to do is show you one prompt whose correct answer cannot be arrived at with pattern matching and can only be arrived at through reasoning. One. You have to demonstrate this for EVERY prompt if you want to prove LLMs can't reason.
No, I can “prove” it: look at any number of cases where LLMs can’t even do basic value comparisons despite being claimed as super intelligent. You can try to say that’s a limitation of the technology, and I would reply: yes, and that’s why I say it’s not reasoning according to the original human definition. Also, you have yet to produce any evidence of reasoning, and claiming you can over and over again doesn’t add to your argument’s substance. I would be interested in your proof that some answer can’t be pattern matched to. At this point I wonder whether we could create a non-conscious “intelligence” that, if large enough, would be mostly able to describe anything known to us along some line of probability we couldn’t compute with our brain architecture, and it could be close to 99.99999% right. Even if we had this theoretical probability-based super intelligence, it still wouldn’t be “reasoning,” but it could be more “intelligent” than us.
I’m also not entirely convinced we can’t arrive at a reasoning system via probability alone (a really cool thought experiment), but these systems do not currently meet the consistency/intelligence bar for me to believe it.
That’s the claim everyone makes. By the human definition, if it reasoned correctly even one time, it can reason. That is the colloquial definition.
Someone who has brain damage can reason correctly on certain subjects and incorrectly on others. This is an immensely reasonable definition. I’m not being pedantic or out of line when I say that, by this definition, LLMs can reason.
Nobody is making the claim that LLMs reason like humans or are human or reason perfectly every time. Again the claim is: LLMs are capable of reasoning.
No, reasoning is about applying rules of logic consistently, so if you only do it some of the time, that's not reasoning.
If I roll a die and it only _sometimes_ returns the correct answer to a basic arithmetic question, that is exactly why we don't say a die is doing arithmetic.
It's even worse in the case of LLMs, where the failures aren't caused by pure chance alone, but also by training bias and hallucinations.
You can claim nobody knows the exact definition of reasoning, and maybe there are some edge cases which aren't clearly defined because they're part of philosophy, but applying rules of logic consistently isn't something you can do only some of the time and still call it reasoning.
Also, LLMs are generally incapable of saying they don't know something, cannot know something, can't do something, etc. They would rather try and hallucinate. When they do that, they're not reasoning. You also can't explain to an LLM how to figure out that it doesn't know something, and then have it actually say it doesn't know instead of making stuff up. If it were capable of reasoning, you should be able to convince it, using _reason_, to do exactly that.
I still think the jury is out on this, given that they seem to fail at obvious things that humans reason about trivially. Perhaps they reason differently, at which point I would need to understand how that reasoning differs from a human’s (or from biological reasoning more generally), and then I would want to consider whether one ought to call it reasoning given those differences (if there are any at the time of sampling). I understand your claim; I’m just not buying it based on the current evidence and my interactions with these supposed “super intelligences” every day. I still find these tools valuable, just unable to “reason” about a concept, which makes me think that, as powerful and meaning-filled as language is, our assumption of reasoning might just be a trick of our brains reasoning through a more tightly controlled stochastic space and projecting the concept of reasoning onto a system. I see the COT models contort and twist language in a simulacrum of “reasoning,” but any high school English teacher can tell you there is a lot of text that appears to reason logically yet doesn’t actually do anything of the sort once read with the requisite subject-matter knowledge.
They can fail at reasoning. But they can demonstrably succeed too.
So the statement that they CAN reason is demonstrably true.
Ok, if given a prompt where the solution can only be arrived at by reasoning, and the LLM gets to the solution for that single prompt, then how can you say it can't reason?
Given your set of theoreticals, then yes, I would concede the model is reasoning. At that point, though, the world would probably be far more concerned with your finding of a question that can only be answered via reasoning and is uninfluenced or unparalleled by any empirical phenomenon, including written knowledge as a medium of transference. The core issue I see here is your being able to prove that the model is actually reasoning in a concrete way that isn’t just a simulacrum, as the Apple researchers et al. theorize it to be.
If you do find this question-answer pair, it would be a massive breakthrough for science and philosophy more generally.
You say “demonstrably” but I still do not see a demonstration of these reasoning abilities that is not subject to the aforementioned criticisms.
This looks neat, but I don’t think it meets the standard for “reasoning only” (still not sure how you would prove that one). Furthermore, this looks fairly generalizable in pattern and form to other grid problems, so I don’t think it meets the bar for “not being in the training data” either. We know these models can generalize somewhat based upon their training, but not consistently and certainly not consistently well. Again, I’m not making the claim that responding to a novel prompt is a sign of reasoning; as others have pointed out, a calculator can do that too.
Your quote:
“This is a unique problem I came up with. It’s a variation on counting islands.”
You then say:
“…as I came up with it so no variation of it really exists anywhere else.”
So I’m not sure what to take away from your text, but I do think this is a variation of a well-known problem type, so I would be pretty amazed if there wasn’t something very close to it in the training data. Given it’s an interview question, and those are written about ad nauseam, I’m not surprised it was able to generalize to the provided case.
The COT researchers did see the ability to generalize in some cases; the models just didn’t necessarily use the COT tokens to reason, and/or failed to generalize to variations the researchers thought they should handle, given their ability to generalize to others and the postulation that they were using reasoning and not just a larger corpus to pattern match with.
It’s a variation on a well-known problem in the sense that I just added some unique rules to it.
The solution, however, is not a variation. It requires leaps of creativity that most people will be unable to make. In fact, I would argue this goes beyond just reasoning, as you have to be creative and test possibilities to even arrive at a solution; it’s almost random chance that will get you there. Simple reasoning like logical reduction won’t let you arrive at a solution.
Additionally, this question was developed to eliminate the pattern matching that candidates use in software interviews. It was vetted and verified not to exist anywhere else. No training data exists.
It definitively requires reasoning to solve. And it is also unlikely you solved it. ChatGPT o3 has solved it. Try it.
I did, and I fail to see how you can make those guarantees given that you use it as an interview question. You’re able to vet the training data of o3? I still don’t see how your answer could only be arrived at via reasoning, or why it would take “leaps of creativity” to arrive at the correct answer. These all seem like value judgments, not hard data or proof that your question cannot be derived from the training data, given that you say it is a variation of an existing one.
Seems like you have an interview question, not “proof of reasoning,” especially given the previously cited cases of these models being able to generalize with enough data.
“And it is also unlikely you solved it”: well, I guess you overestimated your abilities on two counts today then.
> It’s a variation on a well known problem in the sense that I just added some unique rules to it.
> No training data exists.
No, it definitely does exist; it’s just a variation. You kinda just confirmed what we already knew: given enough data about a thing, these LLMs can generalize somewhat.
I don’t think you solved it; otherwise you’d know that what I mean by variation is similar to how calculus is a variation of addition. Yeah, it involves addition, but the solution is far more complicated.
Think of it like this: counting islands exists in the training data in the same way addition exists. The solution to this problem builds off of counting islands in the same way calculus builds off of addition.
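For context, here is a minimal sketch of the classic counting-islands baseline both sides keep referring to: a flood fill over a 0/1 grid. Python and the function name are my own assumptions for illustration; the “counting donuts” variation and its extra rules are never stated in this thread, so only the well-known starting point is shown.

```python
# Minimal sketch of the classic "counting islands" problem (flood fill / DFS).
# This is only the well-known baseline discussed above; the "counting donuts"
# variation is not public, so it is not reproduced here.

def count_islands(grid: list[list[int]]) -> int:
    """Count connected groups of 1s (4-directional adjacency) in a 0/1 grid."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(r: int, c: int) -> None:
        # Iterative DFS that marks every cell of the current island as seen.
        stack = [(r, c)]
        while stack:
            x, y = stack.pop()
            if 0 <= x < rows and 0 <= y < cols and grid[x][y] == 1 and not seen[x][y]:
                seen[x][y] = True
                stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                count += 1
                flood(r, c)
    return count


# Example: two islands.
print(count_islands([
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]))  # -> 2
```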
No training data exists for it to copy, because this problem was uniquely invented by me; the probability that it has seen it is quite low. Additionally, several engineers and I have done extensive Google searches, and we believe to a reasonable degree that this problem does not exist anywhere else.
Also, you use semantics to cover up your meaning. LLMs can “generalize” somewhat? Generalization is one of those big words that’s not well defined. First off, the solution is not trivially extracted from counting islands, and second, “generalize” is itself a form of reasoning. You’re using big fuzzy words with biased connotations to further your argument. But here’s the thing: even if we generously go with it, the solution to counting donuts is clearly not some trivial generalization of counting islands. The problem is a variation, but the solution is NOT. It’s not even close to the colloquial definition of “generalization.”
Did you solve it? I highly doubt you did. It’s statistically more likely you’re lying, and the fact that you call the solution a “generalization” just makes me suspect that even more.
Yep and yep. Did it on two models and by myself. You know, if you ask them to cite similar problems (and their sources), I think you’ll quickly realize how derivative your question is in both question and solution. You’re now accusing me of arguing in bad faith, despite the fact that I’ve listened to you repeat the same point with the only proof being “this question is a head-scratcher for me; it must be for everyone else, therefore it proves one must reason.” That makes me think you don’t actually want to discuss anything; you think you can “prove” something and seem to be more interested in that. Given that, I say go publish a paper about your impossible question and let the rest of the community review it if you feel like you need to prove something. So far the only thing you’ve proven to me is that you’re not interested in a good-faith discussion, just in repeating your dogma and hoping someone concedes.
Also, generalization is not always reasoning: I can make a generalization that is not reasoned, and I can also make one that is poorly reasoned. Generalization is considered well defined with regard to reasoning:
https://www.comm.pitt.edu/reasoning
Your example still fails to actually demonstrate reasoning given its highly derivative nature, though.
Yeah, I know you claimed to solve it. I’m saying I don’t believe you and I think you’re a liar. There are various reasons why; the biggest one is that you think the solution is “generalizable” from counting islands (it’s not).
That’s not the point, though. The point is I have metrics on this: roughly 50 interviews, and only one guy got it. So you make the claim that the solution is generalizable? Well then, prove your claim. I have metrics that support my claim. Where are yours?