
> check mental items lucidly and consciously

Capabilities that evolved over millennia. We don't even have a decent, universally agreed-upon definition of consciousness yet.

> "store" and "learn"

Actually there are tools for that. Again, the core LLM functionality is best left on its own and augmented on the fly with various tools, which can be easily specialized and upgraded independently of the model. Consider too that the brain itself has multiple sections dedicated to different kinds of processing, rather than everything happening everywhere.
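
For illustration, a minimal sketch of that kind of augmentation in Python. The model call is stubbed out and the tool names are made up, so treat it as a shape rather than an implementation:

  # The model proposes a tool call; a deterministic, independently
  # upgradable tool executes it; the observation goes back into context.
  import datetime

  TOOLS = {
      "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
      "clock": lambda _: datetime.datetime.now().isoformat(),
  }

  def propose_action(question: str) -> dict:
      # Stub standing in for the LLM's choice of which tool to consult.
      if any(ch.isdigit() for ch in question):
          expr = "".join(ch for ch in question if ch in "0123456789+-*/(). ")
          return {"tool": "calculator", "arg": expr}
      return {"tool": "clock", "arg": ""}

  def answer(question: str) -> str:
      action = propose_action(question)
      observation = TOOLS[action["tool"]](action["arg"])
      # A real system would hand the observation back to the model
      # to phrase the final reply; here we just return it.
      return f"(via {action['tool']}) {observation}"

  print(answer("What is 17 * 23?"))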



> Capabilities that evolved over millennia

Means nothing. Now they are urgent.

> consciousness

I meant "conscious" as the wake opposed to deliriousness, as the ability "to be sure about what you have in front of you with the degree of accomplished clarity and substantially unproblematic boundaries of definition",

not as that quality that intrigues (and obsesses) less pragmatic intellectuals in directions e.g. at the "Closer to Truth" channel.

When I ask somebody, they have to be sure to a high degree. When implementing a mind, the property of the "lucid conscious check" is fundamental.

> tools for that

The "consciously check, then store and learn" is structural in the proper human mental process - a high level functioning, not just a module; i.e. "it's what we do".

Which means, the basic LLM architecture is missing important features that we need if we want to implement a developed interlocutor. And we need and want that.


> Now they are urgent.

Something being urgent doesn't mean there's a known viable pathway to an ideal implementation. The brain has billions of neurons working in tandem, a scale we're nowhere near replicating, last I checked. And there are signs pointing to the scale of neural interactions as a key factor in intelligent capabilities.

> When I ask somebody, it has to be sure to a high degree.

You're looking in the wrong place if you need surety. LLMs aren't "sure" about anything, and never will be. We're going in circles at this point, but if you need certitude in anything, add tools to the LLM to increase surety. For some reason you seem to be pushing for a less optimal solution to a problem that already has a decent one.

BTW it seems you may be - unconsciously? - crossing a person with an LLM there.

> structural in the proper human mental process

The brain is very modularized. It has sections/lobes which specialize in core life support functions, seeing, hearing, movement, reasoning, memory, etc. That's why brain surgeons can reliably know what capabilities may be affected by their actions in a given part of the brain. And all those functions are tools in some way to something else, for example reasoning would be pretty limited without memory.

And even at a larger scope, as humans we still use tools to achieve greater surety from our mental processes. That's why we have calculators, watches, cameras, etc. And why some would type "blueberry" into a tool with proven spell checking or autocorrect capabilities and eyeball the letters for a couple seconds to confirm the number of "b"s in it. The brain as a whole is still pretty fallible with all its capacity and capabilities.
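
To make that concrete: the check that is unreliable when delegated to a model's prediction is exact when delegated to plain string processing, which is the whole point of reaching for the tool:

  word = "blueberry"
  print(word.count("b"))  # deterministic answer: 2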


> doesn't mean there's a known viable pathway [...] solution [...] BTW seems

It means it must be researched with high commitment. // LLMs are an emergency patch at best (I would not call them «decent» - they are inherently crude). This is why I insist they must be overcome with urgency, precisely because they are already here (if a community needed wits and a lunatic appears, wits become more needed). // And no, I am not «crossing» the two: but people do that, hence I am stating an urgency.

We do not need to simulate the brain, we only ("only") need to implement intelligence. That means the opposite of stating hearsay: it means checking every potential utterance and storing the results (and also reinforcing the ways that led the system to achieve sophisticated thoughts and conclusions).

It is not a given that LLMs cannot be part of such a system. They surely have a lot of provisional utterances to criticize.
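
As a toy illustration of the «checking every potential utterance and storing results» loop: the generator and the checker below are stubs, and the names are illustrative, not any existing API.

  memory = {}  # claims that have already passed a check

  def generate(prompt):
      # Stub for an LLM producing a provisional utterance.
      return "2 + 2 = 4"

  def check(claim):
      # Stub verifier: re-derive simple arithmetic deterministically.
      lhs, rhs = claim.split("=")
      return eval(lhs) == int(rhs)

  def respond(prompt):
      if prompt in memory:              # already checked and stored
          return memory[prompt]
      candidate = generate(prompt)      # provisional utterance
      if check(candidate):              # the lucid, explicit check
          memory[prompt] = candidate    # store and learn the result
          return candidate
      return "I am not sure."           # refuse rather than assert

  print(respond("what is 2 + 2?"))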


There's already a lot of research happening on the next thing. The AGI race is on, not only among companies but also between the largest nations. Everybody's doing their best and then some.

It could very well be that the highest intelligent functionality requires a closely brain-like substrate. We don't know yet, but we'll get there eventually. And it is very likely to be something emergent, not specifically programmed features, as you seem to be insinuating with "... checking [...] and storing results..."


> as you seem to be insinuating with

It is the implementation details that are not clear, not the goals.

I never said that the feature has to be coded explicitly. I said it has to be there.


OK. So it's just a matter of waiting for the desired capabilities to emerge in future models.


But they will probably be thought models, not just language models.

The engineering will be different.


Possibly. I personally think it's the type of data and the scale that are the primary differentiators. The use of characters is a fundamental flaw because characters are synthetic entities. Instead the models should be based on raw sensory data types, such as pixels and waveforms, and iterate from there on something close to the existing architecture.
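
A trivial way to state the contrast in data types (shapes only; nothing model-specific is implied, and the sizes are arbitrary examples):

  import numpy as np

  text_input = [ord(c) for c in "blueberry"]            # synthetic symbol ids
  image_input = np.zeros((64, 64, 3), dtype=np.uint8)   # raw pixel grid
  audio_input = np.zeros(16000, dtype=np.float32)       # 1 s of waveform at 16 kHz

  print(len(text_input), image_input.shape, audio_input.shape)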



