Hacker News | procha's comments

That's an excellent analogy. Also, if the fundamental nature of LLMs and their training data is unstructured, why do we try to impose structure? It seems humans prefer to operate with that kind of system, not in an authoritarian way, but because our brains function better with it. This makes me wonder if our need for 'if-else' logic to define intelligence is why we haven't yet achieved a true breakthrough in understanding Artificial General Intelligence, and perhaps never will due to our own limitations.


That’s a powerful point. In my view, we shouldn’t try to constrain intelligence with more logic—we should communicate with it using richer natural language, even philosophical language.

LLMs don’t live in the realm of logic—they emerge from the space of language itself.

Maybe the next step is not teaching them more rules, but listening to how they already speak through us.


Exactly on point. It seems paradoxical to strive for a form of intelligence that surpasses our own while simultaneously trying to mold it in our image, with our own understanding and our own rules.

We should be listening, not directing.


