
What's your point?

The discussion is about the architecturally imposed limitations of LLMs, resulting in capabilities that are way less than that of a brain.

The fact that the brain has its own limits doesn't somehow negate this!




My point is that for some bizarre reason, people hold machines to standards of reasoning that only exist in fiction or their own imagination.

It is beyond silly to dismiss an architecture for a limitation the human brain also has. A reasoning engine that can iterate indefinitely with no external aid does not exist in real life. That the transformer shares this weakness is no reason to conclude its capabilities must be less than a brain's, so the point is moot.


LLMs are here to stay until something better replaces them, and will be used for those things they are capable of.

It shouldn't be surprising that they are not great at reasoning, or at everything one would hope for from an AGI, since they simply were not built for that. If you look at the development history, the transformer was a successor to LSTM-based seq2seq models using Bahdanau attention, and its main goal was to make better use of parallel hardware. Of course a good language model (word predictor) will look as if it's reasoning, because it is trying to model the data it was trained on - a human reasoner.

As humans we routinely think for seconds, minutes or even hours before speaking or acting, while an LLM only has a fixed number N of computation steps (layers) per token. I don't know why you claim this difference (among others) should make no difference, but it clearly does, with out-of-training-set reasoning weakness being a notable limitation that people such as Demis Hassabis have recently conceded.


Reasoning is reasoning. "Looks as if it is reasoning" is an imaginary distinction you've made up. That much is clear because everybody touting this "fake reasoning" rhetoric is still somehow unable to define a testable version of reasoning that disqualifies LLMs without also disqualifying some chunk of humans.

>As humans we routinely think for seconds/minutes or even hours before speaking or acting

No human is iterating on a base thought for hours uninterrupted, so this is just moot.

>with out-of-training-set reasoning weakness being a notable limitation that people such as Demis Hassabis have recently conceded.

Humans also reason worse outside their training distribution. LLMs are simply currently worse at it.


> Reasoning is reasoning. "Look as if it is reasoning" is an imaginary distinction you've made up.

No - just because something has the surface appearance of reasoning doesn't mean that the generative process was reasoning, any more than a cargo-cult wooden aircraft reflects any understanding of aerodynamics or would be able to fly.

We've already touched on it, but the "farmer crossing the river" problem is a great example. When the LLM sometimes degenerates into "cross bank A to B with chicken, cross bank B to A with chicken, cross bank A to B with chicken.. that is the fewest trips possible", this is an example of "looks as if it is reasoning", aka cargo-cult surface-level copying of what a solution looks like. Real reasoning would never repeat a crossing without loading or unloading something, since that conflicts with the goal of the fewest trips possible.
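For concreteness, here's a minimal sketch (my own illustration in Python, not anything the LLM was running) of what search-based reasoning on this kind of puzzle looks like: a breadth-first search over bank states never revisits a state, so by construction it cannot emit a pointless "A to B, then B to A with the same chicken" loop, and the first solution it finds uses the fewest trips. The min_crossings helper and the wolf/goat/cabbage constraints are assumptions for illustration only.

    from collections import deque

    def min_crossings(items, forbidden_pairs):
        """Breadth-first search over river-crossing states.

        items: things the farmer must ferry from bank A to bank B.
        forbidden_pairs: pairs that may not be left together without the farmer.
        Returns the shortest sequence of crossings as strings.
        """
        items = frozenset(items)

        def safe(group):
            # A bank without the farmer is safe if no forbidden pair is on it.
            return not any(a in group and b in group for a, b in forbidden_pairs)

        start = ("A", items)   # (farmer's bank, items still on bank A)
        goal = ("B", frozenset())
        queue = deque([(start, [])])
        seen = {start}         # never revisit a state -> no pointless repeated trips

        while queue:
            (farmer, on_a), path = queue.popleft()
            if (farmer, on_a) == goal:
                return path
            here = on_a if farmer == "A" else items - on_a
            for cargo in [None, *here]:   # cross alone, or take one item along
                new_on_a = set(on_a)
                if cargo is not None:
                    if farmer == "A":
                        new_on_a.discard(cargo)
                    else:
                        new_on_a.add(cargo)
                new_farmer = "B" if farmer == "A" else "A"
                left_behind = new_on_a if new_farmer == "B" else items - new_on_a
                if not safe(left_behind):
                    continue
                state = (new_farmer, frozenset(new_on_a))
                if state not in seen:
                    seen.add(state)
                    move = farmer + "->" + new_farmer + (" with " + cargo if cargo else " alone")
                    queue.append((state, path + [move]))
        return None

    print(min_crossings({"chicken"}, []))
    # ['A->B with chicken']  -- one trip, no repeated crossings
    print(len(min_crossings({"wolf", "goat", "cabbage"},
                            [("wolf", "goat"), ("goat", "cabbage")])))
    # 7  -- the classic puzzle's minimum number of crossings

The point isn't that an LLM should run BFS internally, just that a process actually constrained by the goal cannot produce the degenerate back-and-forth output, whereas surface imitation of solutions can.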


I never said anything about the surface appearance of reasoning. Either the model demonstrates some understanding or reasoning in the text it generates, as it is perfectly capable of doing, or it reasons faultily or lacks understanding in that area. This does not mean LLMs don't reason, any more than it means you don't reason.

The idea that LLMs "fake reason" and humans "really reason" is an imaginary distinction. If you cannot create any test that can distinguish the two, then you are literally making things up.


Dude, I just gave you an example, and you straight-up ignore it and say "show me a test"?!

An averagely smart human does not have these failure modes where they answer a question with something that looks like an answer "cross A to B, then B to A. done. there you go!" but has zero logic to it.

Do you follow news in this field at all? Are you aware that poor reasoning is basically the #1 shortcoming that all the labs are working on?!!

Feel free to have the last word as this is just getting repetitive.


You were supposed to show me an example that no human would fail. I didn't ignore anything. I'm just baffled that you genuinely believe this:

>An averagely smart human does not have these failure modes where they answer a question with something that looks like an answer "cross A to B, then B to A. done. there you go!" but has zero logic to it.

Humans are poor at logic in general. We make decisions and give rationales full of logical contradictions and nonsense all the time. I just genuinely can't believe you think we don't. It happens so often that we have names for these cognitive shortcomings. Ask any teacher you know; no need to take my word for it. And I don't care about getting the last word.



