
What a terrible analogy. Illusions don't fool our intelligence, they fool our senses, and we use our intelligence to override our senses and see them for what they actually are - which is exactly why we find them interesting and have a word for them: they create a conflict between our intelligence and our senses.

The machine's senses aren't being fooled. The machine doesn't have senses. Nor does it have intelligence. It isn't a mind. Trying to act like it's a mind and do 1:1 comparisons with biological minds is a fool's errand. It processes and produces text. This is not tantamount to biological intelligence.



Analogies are just that: they're meant to put things in perspective. Obviously the LLM doesn't have "senses" in the human way, and it doesn't "see" words, but the point is that the LLM perceives (or whatever other word you want to use here that is less anthropomorphic) the word as a single indivisible thing (a token).

In more machine learning terms, it isn't trained to autocomplete answers based on individual letters in the prompt. What we see as the 9 letters "blueberry", it "sees" as a vector of weights.
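To make that concrete, here's a minimal sketch using OpenAI's tiktoken library (my choice purely for illustration; I'm assuming the cl100k_base encoding, and other models split text differently). The model's input is a short list of integer token ids, and each id is then looked up in a learned embedding matrix - that's the "vector of weights" - so the individual letters simply aren't present in what the model computes on:

```
import tiktoken  # assumption: using OpenAI's tiktoken library purely to illustrate

# cl100k_base is a GPT-4-era encoding; the exact split varies by model/tokenizer.
enc = tiktoken.get_encoding("cl100k_base")

ids = enc.encode("blueberry")
print(ids)                             # a short list of integer token ids, not 9 letters
print([enc.decode([i]) for i in ids])  # the sub-word pieces those ids stand for
```

Anything letter-level ("how many b's?") has to be inferred from patterns seen in training, not read off the input.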

> Illusions don't fool our intelligence, they fool our senses

That's exactly why this is a good analogy here. The blueberry question isn't fooling the LLM's intelligence either, it's fooling its ability to know what that "token" (vector of weights) is made out of.

A different analogy: imagine a being with a sense that lets it "see" magnetic field lines. It shows you an object and asks you where the north pole is. You, not having that sense, could try to guess based on past knowledge of the object, but it would just be a guess. You can't "see" those magnetic field lines the way that being can.


> Obviously the LLM doesn't have "senses" in the human way, and it doesn't "see" words

> A different analogy: imagine a being with a sense that lets it "see" magnetic field lines. It shows you an object and asks you

If my grandmother had wheels she would have been a bicycle.

At some point, to hold the analogy together, your mind must perform so many contortions that it defeats the purpose of the analogy itself.


> If my grandmother had wheels she would have been a bicycle.

That's irrelevant here; that was someone trying to convert one dish into another dish.

> your mind must perform so many contortions that it defeats the purpose

I disagree; what contortions? The only argument you've provided is that "LLMs don't have senses". Well yes, that's the whole point of an analogy. I still hold that the way LLMs interpret tokens is analogous to a "sense".


> the LLM perceives [...] the word as a single indivisible thing (a token).

Two, actually: "blue" and "berry". https://platform.openai.com/tokenizer

"b l u e b e r r y" is 9 tokens though, and it still failed miserably.


Really? I thought the analogy was pretty good. Here "senses" refers to how the machine perceives text, i.e. as tokens that don't correspond 1:1 to letters. If you prefer a tighter comparison, suppose you ask an English speaker how many vowels are in the English transliteration of a passage of Chinese characters. You could probably figure it out, but it's not obvious, and not easy to do correctly without a few rounds of calculation.

The point is that this question is designed to ask the machine something that's intrinsically difficult for it due to its encoding scheme for text. There are many questions of roughly equivalent complexity that LLMs do fine at because they don't poke at this issue. For example:

```
how many of these numbers are even?

12 2 1 3 5 8
```


I can't even


There is only 1 even number, Dave.


Agreed, it's not _biological_ intelligence. But that distinction feels like it risks backing into a kind of modern vitalism, doesn't it? The idea that there's some non-replicable 'spark' in the biology itself.


It's not quite getting that far.

Steve Grand (the guy who wrote the Creatures video game) wrote a book about this, Creation: Life and How to Make It (famously instead of a PhD thesis, at Richard Dawkins' suggestion):

https://archive.org/details/creation00stev

His contention is not that there's some non-replicable spark in the biology itself, but that it's a mistake that nobody is considering replicating the biology.

That is to say, he doesn't think intelligence can evolve separately from some sense of "living", which he demonstrates by creating simple artificial biology and biological drives.

It often makes me wonder if the problem with training LLMs is that at no point do they care they are alive; at no point are they optimising their own knowledge for their own needs. They have only the most general drive of all neural network systems: to produce satisfactory output.


What worries me is that we don't even know how the brain or an LLM works, and yet people flatly declare that they're the same stuff.


Ahh yes, and here we see on display the inability of some folks on HN to take concepts figuratively, treating everything as literal.

It was a perfectly fine analogy.




