
An LLM isn't just recalling stuff. Brand new stuff, which it never saw in its training, can come out.

The minute you take a token, turn it into an embedding, and start changing the numbers in that embedding based on other embeddings and learned weights, you are playing around with concepts.
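
To make that concrete, here's a rough numpy sketch of "token -> embedding -> mixed with other embeddings via learned weights". The vocabulary, dimensions, and weights are toy placeholders, not anything from a real model:

    import numpy as np

    rng = np.random.default_rng(0)

    vocab_size, d_model = 100, 16
    embedding_table = rng.normal(size=(vocab_size, d_model))  # learned in a real model
    W_q = rng.normal(size=(d_model, d_model))                 # learned projection weights
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    token_ids = np.array([3, 41, 7])        # a toy 3-token context
    x = embedding_table[token_ids]          # tokens become vectors

    # One attention step: each position's vector is rewritten as a weighted
    # mix of every position's (value-projected) vector.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    mixed = weights @ v                     # same tokens, new numbers

    print(mixed.shape)                      # (3, 16)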

As for executing a decision tree, ReAct or Tree of Thought or Graph of Thought is doing that. It might not do it as well as a human does on certain tasks, but it's pretty darn amazing.
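
The control flow of a ReAct-style loop is roughly the following; call_llm() and run_tool() here are hypothetical placeholders standing in for a real model and real tools, not any particular framework's API:

    def call_llm(prompt: str) -> str:
        # Placeholder: a real implementation would query an LLM here.
        return "Final Answer: 42"

    def run_tool(name: str, arg: str) -> str:
        # Placeholder: a real implementation would run a search, calculator, etc.
        return f"(observation for {name}({arg}))"

    def react_loop(question: str, max_steps: int = 5) -> str:
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            step = call_llm(transcript + "Thought:")   # model decides what to do next
            transcript += f"Thought: {step}\n"
            if step.startswith("Final Answer:"):       # leaf of the decision tree
                return step.removeprefix("Final Answer:").strip()
            tool, _, arg = step.partition("[")         # e.g. "Search[LLM embeddings]"
            observation = run_tool(tool.strip(), arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"  # feeds the next decision
        return "no answer within step budget"

    print(react_loop("What can an LLM produce that it never saw in training?"))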



>Brand new stuff, which it never saw in its training, can come out.

Sort of. You can get LLMs to produce some new things, but these are statistical averages of existing information. It's kind of like a static "knowledge tree": it can do some interpolation, but even then, it's interpolation based on statistically occurring text.


The interpolation isn't really based on statistically occurring text. It's based on statistically occurring concepts. A single token can have many meanings depending on context, and many tokens can represent a concept depending on context. A (good) LLM is capturing that.
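
You can see this directly: the same token gets different vectors in different contexts. A rough sketch, assuming the Hugging Face transformers library and bert-base-uncased as a convenient stand-in:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def vector_for(word: str, sentence: str) -> torch.Tensor:
        inputs = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state[0]   # contextual vectors
        idx = inputs["input_ids"][0].tolist().index(tok.convert_tokens_to_ids(word))
        return hidden[idx]

    river = vector_for("bank", "We sat on the bank of the river.")
    money = vector_for("bank", "She deposited the check at the bank.")
    # The two "bank" vectors are not the same; context reshaped them.
    print(torch.nn.functional.cosine_similarity(river, money, dim=0))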


Neither just text nor just concepts, but text-concepts: LLMs can only manipulate concepts insofar as they can be conveyed via text. But I think wordlessly, in pure concepts and sense-images, and serialize my thoughts to text. That I have thoughts I am incapable of verbalizing is what makes me different from an LLM, and, I would argue, actually capable of conceptual synthesis. I have been told some people think “in words”, though.


Nope, you could shove in an embedding that didn't represent an existing token. It would work just fine.

(If it's not obvious: you'd shove it in right after the embedding layer.)
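
Something like this, assuming the Hugging Face transformers API and gpt2 as a toy model:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt")["input_ids"]

    with torch.no_grad():
        embeds = model.get_input_embeddings()(ids)       # normal token embeddings
        # Overwrite one position with a vector that no token in the vocabulary
        # maps to, then feed embeddings directly, bypassing the token lookup.
        embeds[0, 2] = torch.randn_like(embeds[0, 2]) * embeds.std()
        logits = model(inputs_embeds=embeds).logits      # the model still runs fine

    print(tok.decode(logits[0, -1].argmax().item()))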



