An LLM’s mechanics are algorithmically much closer to the human brain (on which the LLM is modeled) than to a TI-83, a CPU, or any other Turing machine, which is why, like the brain, it can solve problems that no individual Turing machine can.

Are you sure you aren’t just defining reasoning as something only a human can do?

My prior is that reasoning is a conscious activity; there is a first-person perspective. LLMs are so far removed mechanically from brains that the idea they reason is not even remotely worth considering. Modeling neurons can be done with a series of pipes and flowing water, and that is not expected to give rise to consciousness either. Nor are neurons and synapses likely to be sufficient for consciousness.
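(For concreteness, here is a minimal sketch, assuming nothing beyond the standard textbook abstraction, of the neuron model being referred to: a weighted sum passed through a nonlinearity. The inputs and weights are made up for illustration. The point is that the abstraction is simple enough that any substrate able to add and threshold, pipes and water included, could realize it.)

    from math import exp

    def sigmoid(z: float) -> float:
        # Squash the summed input into the range (0, 1).
        return 1.0 / (1.0 + exp(-z))

    def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
        # An artificial "neuron": a weighted sum of inputs, then a nonlinearity.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return sigmoid(z)

    print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))  # ≈ 0.60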

You know how we insert ourselves into the process of coming up with a delicious recipe? That first-person perspective might also be necessary for reasoning. No computer knows the taste of mint; it must be given parameters about it. So if a computer comes up with a recipe with mint, we know it wasn’t arrived at by tasting anything, ever.

A calculator doesn’t reason. A facsimile of something whose role in consciousness we know nothing about has the same outlook as the calculator.


> LLMs are so far removed mechanically from brains that the idea they reason is not even remotely worth considering.

Jet planes are so far removed mechanically from a bird that the idea they fly is not even remotely worth considering.


You’re right that my argument depends upon there being a great physical distinction between brains and H100s, or enough water flowing through troughs.

But since we knew the properties of wings were major components of flight, dating back beyond the myths of Pegasus and Icarus, we rightly connected the similarities in the flight case.

Yet while we have studied neurons and know the brain is a part of consciousness, we don’t know their role in consciousness the way we know the wing’s role in flight.

If you got a bunch of daisy-chained brains and they started doing what LLMs do, I’d change my tune, because the physical substrates would then be similar enough. Focusing on neurons, and their facsimile abstractions, may be like thinking flight depends upon the local cellular structure of a wing, rather than the overall capability to generate lift, or any other false correlation.

Just because an LLM and a brain get to the same answer doesn’t mean they got there the same way.


Motte? Consciousness.

Bailey? Reason.

How reasonable are the outputs of ANNs considering the inputs? This is a valid question, and it has a useful answer.

From ImageNet to LLMs, we are finding that these tools give responses that are reasonable to some degree.

Recommended reading: Philosophical Investigations by Wittgenstein.


Are we then conferring some kind of supernatural or religious property on the brain’s particular implementation of neurons?

If not, then why shouldn’t differently constructed but algorithmically similar systems be able to produce similar phenomena?


Because we know practically nothing about brains, comparing them to LLMs is useless; nature is so complex that we’re constantly discovering signs of hubris in human research.

See C-sections versus natural birth. Formula versus mother's milk. Etc.


I think you'd benefit from reading Helen Keller's autobiography "The World I Live In". You might reach the same conclusions I did: that consciousness is flavoured by our unique way of experiencing the world, but that no particular way of experiencing it is strictly necessary for consciousness of some kind or another to form. I believe consciousness to be a tool that a sufficiently complex neural network will develop in order to achieve whatever objective it has been given to optimize for.
