I just went to ChatGPT page, and was presented with the text:
"ChatGPT: get instant answers, find creative inspiration, and learn something new. Use ChatGPT for free today."
If something claims to give you answers, and those answers are incorrect, that something is wrong. It does not matter what it is -- model, human, dictionary, book.
Claiming that their purpose is "to produce plausible language" is just wrong. No one (except maybe AI researchers) says: "I need some plausible language; I am going to open ChatGPT."
When you first use it, a dialog says "ChatGPT can provide inaccurate information about people, places, or facts." The same notice appears right under the input window. And in the blog post that first announced ChatGPT last year, this is the first limitation listed.
Even if the ChatGPT product page does not specifically say that GPT can hallucinate facts, that message is communicated to the user several times.
As for the purpose, that is what it is. You are right that it's not clearly communicated to non-technical people. To those familiar with the AI semantic space, the term "LLM" alone signals that the purpose is to generate plausible language. All the other notices, warnings, and cautions point casual users to this as well, though.
I don't know… I can see people believing that what ChatGPT says is factual. I definitely see the problem. But at the same time, I can't fault ChatGPT for this misalignment. It is clearly communicated to users that facts presented by GPT are not to be trusted.
Producing plausible language is exactly what I use it for - mostly plausible blocks of code, and tedious work like rephrasing emails, generating docs, etc.
Everything it creates needs to be reviewed, particularly information outside my area of expertise. It turns out ChatGPT 4 passes those reviews extremely well -- perhaps too well, given how much more many people expect from it.