
The "statistical parrot" parrots have been demonstrably wrong for years (see e.g. LeCun et al[1]). It's just harder to ignore reality with hundreds of millions of people now using incredible new AI tools. We're approaching "don't believe your lying eyes" territory. Deniers will continue pretending that LLMs are just an NFT-level fad or bubble or whatever. The AI revolution will continue to pass them by. More's the pity.

[1] https://arxiv.org/abs/2110.09485



> Deniers will continue pretending that LLMs are just an NFT-level fad or bubble or whatever. The AI revolution will continue to pass them by. More's the pity.

You should re-read that very slowly and carefully and really think about it. Calling anyone who's skeptical a 'denier' is a red flag.

We have been through these AI cycles before. In every case, the tools were impressive for their time, their limitations were brushed aside, and we got a hype cycle. There was nothing wrong with the technology, but humans like to extrapolate its capabilities, and we usually get that wrong. When the hype collided with reality, investment dried up and nobody wanted to touch "AI" for a while.

Rinse, repeat.

LLMs are again impressive, for our time. When the dust settles, we'll get some useful tools, but I'm pretty sure we will experience another, severe, AI winter.

If we had some optimistic but also realistic discussion of their limitations, I'd be less skeptical. As it is, we are talking about 'revolution', developers being out of jobs, superintelligence, and whatnot. That's not the level the technology is at today, and it is not clear we will do anything other than get stuck in a local maximum.


A trillion-dimensional stochastic parrot is still a stochastic parrot.

If these systems showed understanding, we would notice.

No one is denying that this form of intelligence is useful.


I don't know how you can say they lack understanding of the world when they perform better than the average human on pretty much any standardised test designed to measure human intelligence. The only thing they don't understand is touch, because they're not trained on it, but they can already understand audio and video.


You said it yourself: those tests are designed to measure human intelligence, because we know that, in humans, there is a correspondence between test results and other, more general tasks. We do not know that such a correspondence exists for language models. I would actually argue that it demonstrably does not, since even an LLM that passes every IQ test you put in front of it can still trip up on trivial exceptions that wouldn't fool a child.


So they fail in their own way? They're not humans; that's to be expected.


An answer key would outperform the average human, but it isn't intelligent. Tests designed for humans are not appropriate for judging non-humans.


No, you don't understand: if I put a billion billion trillion monkeys on typewriters, they're actually now one superintelligent monkey, because they're useful!

We just need more monkeys and it will be the same as a human brain.


What does the mass of users change about what it is? How many of them check the results for hallucinations, and how many don't because they simply trust the AI?

More than once, these tools have failed at tasks a fifth grader could handle.



