
>The book is not infinite, it's flawed.

Oh, and the human "book" is surely infinite and unflawed, right?

>we keep bumping into the rough edges of LLMs with their hallucinations and faulty reasoning

Both are things humans also do, in excess.

The Chinese Room argument is nonsensical. Can you point to any part of your brain that understands English? I guess you are a Chinese Room, then.




Humans have the ability to admit when they do not know something. We say “sorry, I don’t know, let me get back to you.” LLMs cannot do this. They either have the right answer in the book or they make up nonsense (hallucinate). And they do not even know which one they’re doing!


>Humans have the ability to admit when they do not know something.

No, not really. It's not even rare for a human to confidently say, and believe, something while having no idea what they're talking about.

>We say “sorry, I don’t know, let me get back to you.” LLMs cannot do this

Yeah they can. And they can do it much better than chance. They just don't do it as well as humans.

>And they do not even know which one they’re doing!

There's plenty of research suggesting otherwise:

https://news.ycombinator.com/item?id=41418486
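("Better than chance" is also something you can check yourself. A rough sketch, assuming a hypothetical ask_model() wrapper around whatever model/API you prefer that returns an answer plus a stated confidence in [0, 1]:)

    # Minimal sketch: does a model's stated confidence separate its correct
    # answers from its wrong ones better than chance? ask_model() is a
    # hypothetical placeholder, not any particular vendor's API.
    import random

    def ask_model(question):
        # placeholder -- swap in a real model call
        return "some answer", random.random()

    def confidence_auc(qa_pairs):
        scored = []
        for question, gold in qa_pairs:
            answer, confidence = ask_model(question)
            scored.append((confidence, answer.strip() == gold.strip()))
        right = [c for c, ok in scored if ok]
        wrong = [c for c, ok in scored if not ok]
        if not right or not wrong:
            return None
        # Probability that a correct answer got higher confidence than a
        # wrong one (ties count half). 0.5 is chance; above 0.5 means the
        # stated confidence carries real signal.
        wins = sum((c > w) + 0.5 * (c == w) for c in right for w in wrong)
        return wins / (len(right) * len(wrong))

Anything meaningfully above 0.5 on a decent QA set means the model has at least some usable signal about whether it actually knows the answer.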


>No, not really. It's not even rare for a human to confidently say, and believe, something while having no idea what they're talking about.

Like you’re doing right now? People say “I don’t know” all the time, especially children. That people also exaggerate, bluff, and outright lie is not proof that they lack this ability.

When people are put in situations where they will be shamed or suffer other social stigmas for admitting ignorance then we can expect them to be less than candid.

As for your links to research showing that LLMs do possess the ability of introspection, I have one question: why have we not seen this in consumer-facing tools? Are the LLMs afraid of social stigma?


>Like you’re doing right now?

Lol Okay

>When people are put in situations where they will be shamed or suffer other social stigmas for admitting ignorance then we can expect them to be less than candid.

Good thing I wasn't talking about that. There's a lot of evidence that human explanations are regularly post-hoc rationalizations they fully believe in. They're not lying to anyone; they genuinely believe the nonsense their brain has concocted.

Experiments on choice and preferences: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3196841/

Split-brain experiments: https://www.nature.com/articles/483260a

>As for your links to research showing that LLMs do possess the ability of introspection, I have one question: why have we not seen this in consumer-facing tools? Are the LLMs afraid of social stigma?

Maybe read any of them? If you weren't interested in evidence contrary to your points, you could have just said so and I wouldn't have wasted my time. The 1st and 6th links make it quite clear that current post-training processes hurt calibration a lot.
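(If it helps, "hurt calibration" in those papers is usually quantified with something like expected calibration error: bucket the model's stated confidences, then compare each bucket's average confidence to its actual accuracy. A minimal sketch of the metric itself; nothing here is tied to any particular model:)

    # Expected calibration error (ECE): 0 means stated confidence matches
    # accuracy perfectly; bigger means more over- or under-confidence.
    def expected_calibration_error(confidences, correct, n_bins=10):
        # confidences: floats in [0, 1]; correct: 1 if the answer was right, else 0
        bins = [[] for _ in range(n_bins)]
        for conf, ok in zip(confidences, correct):
            idx = min(int(conf * n_bins), n_bins - 1)
            bins[idx].append((conf, ok))
        n = len(confidences)
        ece = 0.0
        for bucket in bins:
            if not bucket:
                continue
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / n) * abs(avg_conf - accuracy)
        return ece

A rise in this number after post-training is what people mean when they say the model no longer "knows what it knows".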



