
Yes, the stochastic parrots story is one of the strongest recent instances of experts in a field being blinded by their own expertise (the mental model they have of certain things) to the point of being incapable of seeing trivial evidence.


There’s a certain irony in hearing someone describe an LLM as a "stochastic parrot" for the ten-thousandth time when the only reason they’re doing so is that they’ve seen a sufficient number of other people using the exact same term (so now it's in their proverbial training data).


It's an accurate description, though. People are running around convincing themselves that the RNG text generator is alive and it's worth slapping some sense into them.


it's no longer an accurate description.

with recent models, while stitching together training data is still most of the story, there's obviously something more going on, especially with regard to code, mathematics, and the ability to produce dialogue that respects pragmatics. this has nothing to do with a real thinking being, but it's become different from a stochastic parrot.

there are deluded people who think that Claude and ChatGPT are really thinking and feeling. it is admirable to try to argue them out of this position.

but if your argument involves saying things that are not true, and anyone can easily see are not true by interacting with a consumer-grade chatbot for 10 minutes, it will just discredit you, and those people will go on believing they're talking to HAL-9000.


"Obviously" is doing some heavy lifting there. I don't think it's obvious at all.

What is obvious is people are seeing patterns in the output where there are none. We're hardwired for pattern-seeking and we're great at it. We see faces in clouds and hear words in static.

I've seen a number of people wax about how they interrogate ChatGPT until it gives them "the good stuff" and they're all just Clever Hans-ing themselves.


so here's an example: I saw the following amusing math joke on BlueSky, posted a couple days ago.

https://bsky.app/profile/elliotlip.bsky.social/post/3lh5bs7g...

"It's wild that I took a whole-ass class all about differentiable functions from R^2 to itself that satisfy one specific PDE."

output from Claude (one-shot, no retries or trickery) when asked to "explain the joke":

"The joke is playing off the fact that "complex analysis" (the study of differentiable functions satisfying the Cauchy-Riemann equations - a type of PDE) sounds like it should be more complex than it is when described technically. The humor comes from reducing an entire mathematical field to "functions that satisfy one specific PDE," making it sound surprisingly narrow or trivial despite its actual depth and importance."

Wikipedia and lots of math textbooks are in the training data, so it's not so impressive that this model will produce correct statements about math.

But the joke itself was not in the training data (to the best of my knowledge, it's this guy's original joke). And complex analysis was not mentioned in the joke. Yet somehow the text generated is correct with respect to both of those things.
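
(For anyone who doesn't know the reference: the "one specific PDE" is the Cauchy-Riemann system. Writing the function as f(x, y) = u(x, y) + i v(x, y), the conditions are, roughly,

    u_x = v_y,    u_y = -v_x

which is exactly the characterization of complex differentiability that a complex analysis course is built around.)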

I see things like this quite regularly, which, under the "stochastic parrots" story, can't happen.

I've tried to phrase all these sentences very carefully to not claim there is any "agent" or "intelligence" behind the Claude product. There are many explanations for how a language model like this could imitate intelligent dialogue in ways that are somewhat fake and don't generalize -- I think this is what's happening. I also see things break down all the time and the sleight-of-hand fall apart. However, it is not "stochastic parrots" any more.


I'm sure there are people who are deluded into thinking ChatGPT loves them the way a real-life, flesh-and-blood being can, even when it says it can't. But we have such a limited vocabulary, especially as laymen, for describing any non-human intelligence that "thinking" and "reasoning" aren't entirely unreasonable words for what it's doing. Sure, it's not thinking in the same way a human would, but when a computer, pre-LLM and even pre-Internet, is doing something that requires the user to wait, saying the computer is "thinking" is an entirely accepted practice.

So if we want to get people to stop using the words thinking and reasoning, we have to get replacement words into the lexicon. If I tell an LLM that A implies B and B implies C, then tell it A is true, and it's able to tell me that C is thus also true, sure, that's entirely due to that much logic existing in its training corpus. But unless we get to a point where I can say that ChatGPT is dot-producting an essay for me, or some other phrase, saying it's not doing "thinking" is going to fall flat on its face. Hell, DeepSeek R1's output for the local model literally says <think>. It may not be thinking in the sense a biological being thinks, and concluding C from A, given that A implies B implies C, may not be reasoning in the biological sense, but we lack the common colloquial language to describe it otherwise.
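
(In case that chain reads as hand-wavy: it's just two modus ponens steps. In Lean syntax it would look something like the line below; the hypothesis names are my own, purely for illustration.)

    -- A → B, B → C, and A together give C: apply the first implication, then the second.
    example (A B C : Prop) (hab : A → B) (hbc : B → C) (ha : A) : C := hbc (hab ha)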


completely agree. people say a classic chess engine is "thinking" when it goes deeper into the search tree, but nobody is confused about that. This colloquial language doesn't bother me so much.

But there really are people who think they are talking to something more than that. Like within a conversation with today's consumer product, they sincerely believe that an actual being is instantiated who has goals and intentions and talks to them.


Using language like "thinking" doesn't bother me. I'm not a stickler for precision language in colloquial speech.

I do think it's important to deflate the hype and give context to what you mean by "thinking" in products, technologies and so on. Calling it a "stochastic parrot" is a bit pithy but not unreasonable. Plus it's memorable.


All they do is predict the next word!



