
Humans are likely to share when they don't know something, or to express concern when they aren't positive that what they said is correct. LLMs are confident 100% of the time and don't understand when they're wrong. I get what you're trying to say, but I don't think this is an instructive example.


>>Humans are likely to share when they don't know something

I try hard to not be snarky or sarcastic on HN; it doesn't contribute to positive, friendly, productive conversation... but come ON :->

The average human is unlikely to admit to themselves or others when they don't know something. We are famous for it. We love our opinions, and we present them as facts. It doesn't even imply malice or anything - have you ever asked for directions from somebody who doesn't know the answer? They'll still TRY :->. Even the smartest people around me will frequently present their ad-hoc opinions as facts. Heck, on HN alone, people will opine on matters of law and science and many other things which are reasonably factual.

Don't get me wrong, LLMs hallucinating and being utterly unable to signal when they do is BAD; it makes them very different from most other software we've ever built, and it needs to be addressed. But as to this line of conversation specifically, it makes them (without any philosophical or "conscience of machine" implications) extremely human-like :-)

(similarly, not to say there aren't any humans who are humble and/or explicit about their limitations; but it's far, far from average)


> Humans are likely to share when they don't know something or to express concern when they aren't positive that what they said is correct.

We must have met very different humans. Are there humans that do this? Absolutely! Are they in the majority? Absolutely not. Now if you change that framing to "teachers" then I think on average you are going to get more people like that, but I've heard many, many people say things with complete confidence/certainty that were absolutely wrong. Then again, I've had teachers that have made predictions/statements that they state as facts, so I don't know. Dunning-Kruger can account for part of it, but still.


There are good teachers and bad teachers. LLMs are, at best, bad teachers.


>>LLMs are, at best, bad teachers.

I could not disagree more.

ChatGPT is patient. That's a rare quality in humans and teachers.

ChatGPT will willingly explore. That's also a rare quality in teachers.

ChatGPT is detailed and structured, and has instant access to an enormous amount of data and background.

I will grant you that there are domains of knowledge and questions where it's great, and others where it'll lie to you through its teeth. But as a patient, detailed, willing, knowledgeable tutor in basic/well-covered areas, it's virtually unparalleled. I'm a hungry learner and have had a large number of teachers, tutors and mentors across several continents, countries, societies and educational paradigms; and only the very, very top are as good - and I've actually been the lucky one. My wife and my sister, for example, based on their accounts, simply never had a teacher/tutor as good as ChatGPT :-<



