Hacker News | contagiousflow's comments

If you think talking to an LLM is the same experience as talking to a human you should probably talk to more humans

That's not what I said. What I said is that the claim "LLMs aren't intelligent because they stochastically produce characters" doesn't hold, because humans also produce characters stochastically even though they're intelligent and authoritative.

We don't actually know how human cognition works, so how do you know that humans "stochastically produce characters?"

Do humans always answer exactly the same way to the same question? No.

Also, you could always pick the most likely token at each step to make an LLM deterministic, if you really wanted to.
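To make that concrete, here's a minimal sketch of the difference between sampled and greedy (always-most-likely) decoding over a toy next-token distribution. The `probs` table is invented purely for illustration; a real LLM's distribution comes from the model.

```python
import random

# Hypothetical next-token distribution, invented for illustration only.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def sample_token(dist, rng):
    # Stochastic decoding: different draws can yield different tokens.
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_token(dist):
    # Deterministic decoding: always pick the single most likely token.
    return max(dist, key=dist.get)

print(greedy_token(probs))                     # always "cat"
print(sample_token(probs, random.Random(42)))  # varies with the seed
```

Greedy decoding (roughly what "temperature 0" means in practice) gives the same answer every time, which is the point: the stochasticity is a dial, not an essential property.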


That doesn't really prove anything. I could create a Markov chain with a random seed that doesn't always answer the same question the same way, but that doesn't prove the human brain works like a Markov chain with a random seed.
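The Markov-chain point is easy to demonstrate. This is a toy sketch (the transition table is made up): a seeded chain gives varied answers to the same prompt, yet nobody would call it brain-like.

```python
import random

# A tiny made-up transition table for a word-level Markov chain.
transitions = {
    "the": ["sun", "moon"],
    "sun": ["rises", "sets"],
    "moon": ["rises", "sets"],
}

def generate(seed, start="the", steps=2):
    # Seeded randomness: same seed -> same output, new seed -> possibly new output.
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(steps):
        word = rng.choice(transitions.get(word, ["<end>"]))
        out.append(word)
    return " ".join(out)

# Different seeds can answer the "same question" differently.
print(generate(1))
print(generate(2))
```

Non-determinism in the output tells you nothing about the mechanism inside.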

One thing humans tend not to do is confabulate to the degree that LLMs do; when humans do, it's considered a mental illness. Simply saying the same thing in a different way is not the same as randomly producing syntactically correct nonsense. Most humans will not, now and then, answer that 2 + 2 = 5, or that the sun rises in the southeast.


I'm not making any claim about how the human brain works. The only thing I'm saying is that humans also produce somewhat randomized output for the same question, which is pretty uncontroversial I think. That doesn't mean they're unintelligent. Same for LLMs.

I really wish people into LLMs would limit themselves to terms from neuroscience or philosophy when describing humans.

You are, in my mind, rightfully getting pushback for writing "human experts also output tokens with some statistical distribution."


That's just a mathematical fact.

You have a big opaque box with a slot where you can put text in and you can see text come out. The text that comes out follows some statistical distribution (obviously), and isn't always the same.

Can you decide just from that if there's an LLM or a human sitting inside the box? No. So you can't make conclusions about whether the box as a system is intelligent just because it outputs characters in a stochastic manner according to some distribution.
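A quick sketch of the box argument, with both "occupants" invented for illustration: two boxes with different internals but the same observable behavior. All an outside observer ever gets is samples of text and their frequencies.

```python
import random
from collections import Counter

# Two hypothetical "boxes" answering the same question. One could stand in
# for an LLM, the other for a person; both implementations are made up here.
def box_a(rng):
    return rng.choice(["four", "4"])

def box_b(rng):
    return "four" if rng.random() < 0.5 else "4"

def observe(box, n=10_000, seed=0):
    # The outside view: only inputs, outputs, and their frequencies are visible.
    rng = random.Random(seed)
    return Counter(box(rng) for _ in range(n))

# The observed distributions don't reveal which mechanism sits inside the box.
print(observe(box_a))
print(observe(box_b))
```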


Okay... I objected to your use of the word token. Humans don't think in tokens or even write in tokens so obviously what you wrote is not a fact.

That shouldn't even be controversial, I don't think?

You wrote "The text that comes out follows some statistical distribution".

At the risk of being in over my head here: did you mean the text can be described statistically, or that it "follows some statistical distribution"? Are these two concepts the same thing? I don't think so.

A program by design follows some statistical distribution. A human is doing whatever electrochemical thing it's doing that can be described statistically after the fact.

Regardless my point was pretty simple, I know this will never happen but I wish tech people would drop this tech language when describing humans and adopt neuroscience language.


> Humans don't think in tokens or even write in tokens so obviously what you wrote is not a fact.

Doesn't matter what they think in. A token can be a letter or a word or a sound. The point is that the box takes some sequence of tokens and produces some sequence of tokens.
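The point that a token can be a letter, a word, or a sound is just a statement about granularity. A minimal sketch: the same text viewed as two different token sequences.

```python
text = "the sun rises"

# The same text at letter-level and word-level granularity. The unit of
# tokenization is a modeling choice, not a claim about how the writer thinks.
char_tokens = list(text)
word_tokens = text.split()

print(char_tokens[:5])  # ['t', 'h', 'e', ' ', 's']
print(word_tokens)      # ['the', 'sun', 'rises']
```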

> You wrote "The text that comes out follows some statistical distribution".

> At the risk of being over my head here did you mean the text can be described statistically or "follows some statistical distribution". Are these two concepts the same thing? I don't think so.

> A program by design follows some statistical distribution. A human is doing whatever electrochemical thing it's doing that can be described statistically after the fact.

Again, it doesn't matter how the box works internally. You can only observe what goes in and out and observe its distribution.

> Regardless my point was pretty simple, I know this will never happen but I wish tech people would drop this tech language when describing humans and adopt neuroscience language.

My point is neuroscience or not doesn't matter. People make the claim that "the box just produces characters with some stochastic process, therefore it's not intelligent or correct", and I'm saying that implication is not true because there could just as well be a human in the box.

You can't decide whether a system is intelligent just based on the method with which it communicates.


I think we are talking past each other but this has been entertaining.

I'd say anybody who writes "the LLM just produces characters with some stochastic process, therefore it's not intelligent or correct" is making an implicit argument about the way the LLM works and the way the human brain works. There might even be an implicit argument about how intelligence works.

They are not making the argument that you can't make up statistical models to describe a box, a human generated text, or an expert human opinion. But that seems to be the claim you are responding to.


Why was this flagged?

Pink News is often not a reliable source; they’re prone to hyperbole.

Because a certain demographic, one that supports expanding authoritarian controls over democratic countries in order to get them to comply with a certain "non-genocide", was offended by it.

For a more charitable interpretation: PinkNews is a source that regularly produces low-quality, poorly fact-checked, and polemical content, and is to me on the same level as the Daily Mail. The article here seems somewhat polemical, and it is difficult to verify whether some of the stronger claims made are actually true.

The headline focuses on polemics and omits a detail many people would find quite important: that this was part of a lawsuit settlement. It also chooses the word "appoint", which carries a stronger implication that Starbuck will have a job at, or take a significant role in doing this at, Meta.

It is important that this information is shared, but it is better if it's done accurately. I don't see why the source that PinkNews itself used, https://www.wsj.com/tech/ai/meta-robby-starbuck-ai-lawsuit-s..., wasn't used instead.


I would recommend asking the women in your life what they think.


I don't follow the analogy? Are you just comparing two technologies that have had criticisms at their infancy?


Don't ever look into the US's involvement in Latin America if you want to keep believing this


You think every new technology is inherently a good thing and good for society?


These tools aren't magic; if the reasons for a code change lie outside the diff, an LLM isn't going to magically fabricate a commit message that supplies that context.


If you're in a Claude Code session, it will know the context from the discussion.


Do you have recommendations of other search engines?


I don't think you read the article at all...


[Citation needed]

