>People will write lengthy and convoluted explanations of why an LLM isn't like a calculator or a microwave oven or other technology that came before. (Like OP's article) But it really is.
You generally don't need a lengthy explanation, because it's common sense. The lengthy, convoluted explanations only come out when someone doesn't get it, and people have to spell out common sense for them.
I mean, how else do I explain it?
LLMs are different from any revolutionary technology that came before them. The first thing is that we don't understand them. We understand the learning algorithm that trains the weights, but we don't understand conceptually how an LLM works. They are black boxes, and we have limited control over them.
You are talking to a thing that understands what you say to it, yet we don't understand how this thing works. Nobody in the history of science has created anything similar. And yet we get geniuses like you who use a simple analogy to reduce the creation of an LLM to something like the invention of the car and think there's no difference at all.
There is a sort of inflection point here. It hasn't happened yet, but a possible future is becoming more tangible: a future where technology surpasses humanity in intelligence. You are talking to something that is talking back and could surpass us.
I know the abundance of AI slop has made everyone numb to the events of the past couple of years. But we need to look past that. Something major has happened, something different from the achievements and milestones humanity has reached before.
> You generally don't need a lengthy explanation, because it's common sense. The lengthy, convoluted explanations only come out when someone doesn't get it, and people have to spell out common sense for them.
Why do we keep getting people who say we understand LLMs?
Let me put it plainly. If we understood LLMs, we would understand why hallucinations happen, and we would subsequently be able to control and stop them. But we can't. We can't control the LLM, because of that lack of understanding.
All the code is available on a computer for us to modify, down to every single parameter. We have full access, and we still can't control the LLM, because we don't know what to change. This is despite the fact that we have absolute control over the value of every single atomic unit of an LLM.
I mean, I thought "we" did understand why they happen. It's a design decision to always provide an answer, because an LLM never wants to say "I do not know". It might sometimes say "I cannot answer that" for compliance reasons, but never "I don't know".
This design decision is inherent to American culture and what it considers a "trustworthy person": always having an answer is better than admitting a lack of knowledge.
I don't know the technical details behind incomplete information, but I feel I know the meta-reasoning behind it.
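A minimal sketch of the mechanical side of this, with made-up logits: the decoding step normalizes whatever scores the model produces into a probability distribution and always picks a token, so "I don't know" has to be a trained output like any other rather than a built-in abstain option. The vocabulary and numbers here are purely illustrative.

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A toy vocabulary; the logits are arbitrary illustrative values.
vocab = ["Paris", "London", "Berlin", "<eos>"]
logits = [2.0, 0.5, 0.1, -1.0]

probs = softmax(logits)

# Every token gets nonzero probability, and the probabilities sum to 1:
# the decoder will emit *some* token whether or not the model "knows".
assert all(p > 0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-9

# Greedy decoding just takes the argmax; there is no "abstain" branch.
answer = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
```

The point of the sketch is only that abstention isn't in the sampling loop itself; whether a model says "I don't know" depends entirely on what was reinforced during training.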
>Perhaps you do not understand it, but many software engineers do understand.
No, they do not. LLMs are by nature black-box problem-solving systems. This is not true of all the other machines we have, which may be difficult for some or even most humans to understand, but which allow specialists to understand WHY something is happening. That question is unanswerable for an LLM, no matter how good you are at Python or the math behind neural networks.
> The first thing is we don't understand it. It's a black box. We understand the learning algorithm that trains the weights, but we don't understand conceptually how an LLM works
Wrong.
> You are talking to a thing that understands what you say to it
Prove your statements. Otherwise it's equivalent to AI slop. Your written response is no different and no better, so what's the point of you even responding? I'd prefer an AI bot to write a retort, because it'd be more intelligent than just "wrong".
Not trying to be insulting here. But genuinely, if you think humanity is better than AI, why is your response to me objectively WORSE than AI slop? Prove your own statements by being better yourself; otherwise your statement is itself proof against your point.