
"We do not like annoying cousins." Yes, exactly. The, uh, confident fluency of LLM responses, which can at the same time contradict what was said earlier, reminded me exactly of that. I don't know if you've ever met one of those glib psychopaths, but they have this characteristic of non-content communication, where it feels like words are being arranged for you, like someone composing a song using words from a language they do not know. See also: "you're talking a lot, but you're not saying anything."



Hm. The contradictions specifically are something I notice in humans that I think is entirely normal[0]. But the early LLMs with the shorter context windows, those reminded me of my mum's Alzheimer's.

That said, your analogy may well be perfect, as they are learning to people-please and to simulate things they (hopefully) don't actually experience.

(Not that it changes your point, but isn't that Machiavellian rather than psychopathic?)

[0] one of many reasons why I disagree with Wittgenstein about:

> If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative.

Just because it's logically correct doesn't mean humans think like that.


The part that really gets ME about that thought is that those glib psychopaths/sociopaths fill an important role in human society, generally as leaders. I'm sure we can all think of some prominent political figures who are very good at arranging words to get their audience excited, but who have a tenuous connection to fact (at best). Genuinely factual content seems almost irrelevant to their ability to lead, or to their followers' desire to follow.

If that's the function which we can now automate at scale, it's not the jobs the machines will ultimately take; it's the leadership.



