The "statistical parrot" parrots have been demonstrably wrong for years (see e.g. LeCun et al[1]). It's just harder to ignore reality with hundreds of millions of people now using incredible new AI tools. We're approaching "don't believe your lying eyes" territory. Deniers will continue pretending that LLMs are just an NFT-level fad or bubble or whatever. The AI revolution will continue to pass them by. More's the pity.
> Deniers will continue pretending that LLMs are just an NFT-level fad or bubble or whatever. The AI revolution will continue to pass them by. More's the pity.
You should re-read that very slowly and carefully and really think about it. Calling anyone who's skeptical a 'denier' is a red flag.
We have been through these AI cycles before. In every case, the tools were impressive for their time. Their limitations were always brushed aside and we would get a hype cycle. There was nothing wrong with the technology, but humans always like to extrapolate its capabilities, and we usually get that wrong. When the hype collided with reality, investments dried up and nobody wanted to touch "AI" for a while.
Rinse, repeat.
LLMs are again impressive, for our time. When the dust settles, we'll get some useful tools, but I'm pretty sure we will experience another – severe – AI winter.
If we had some optimistic but also realistic discussions of their limitations, I'd be less skeptical. As it is, we are talking about 'revolution', developers being out of jobs, superintelligence and whatnot. That's not the level the technology is at today, and it is not clear we are going to do anything other than get stuck in a local maximum.
I don't know how you can say they lack understanding of the world when they perform better than the average human on pretty much any standardised test designed to measure human intelligence. The only thing they don't understand is touch, because they're not trained on that, but they can already understand audio and video.
You said it yourself: those tests are designed to measure human intelligence, because we know that there is a correspondence between test results and other, more general tasks - in humans. We do not know that such a correspondence exists for language models. I would actually argue that it demonstrably does not, since even an LLM that passes every IQ test you put in front of it can still trip up on trivial exceptions that wouldn't fool a child.
No, you don’t understand: if I put a billion billion trillion monkeys on typewriters, they’re actually one superintelligent monkey, because they’re useful now!
We just need more monkeys and it will be the same as a human brain.
[1] https://arxiv.org/abs/2110.09485