
> Yes but they're literally told by allegedly authoritative sources that it's going to change everything and eliminate intellectual labor

Why does this imply that they’re always correct? I’m always genuinely confused when people pretend like hallucinations are some secret that AI companies are hiding. Literally every chat interface says something like “LLMs are not always accurate”.



> Literally every chat interface says something like “LLMs are not always accurate”.

In small, de-emphasized text, relegated to the far corner of the screen. Yet, none of the TV advertisements I've seen have spent any significant fraction of the ad warning about these dangers. Every ad I've seen presents someone asking a question to the LLM, getting an answer and immediately trusting it.

So, yes, they all have some light-grey 12px disclaimer somewhere. Surprisingly, that disclaimer does not carry nearly the same weight as the rest of the industry's combined marketing efforts.


> In small, de-emphasized text, relegated to the far corner of the screen.

I just opened ChatGPT.com and typed in the question “When was Mr T born?”.

When I got the answer there were these things on screen:

- A menu trigger in the top-left.

- Log in / Sign up in the top right

- The discussion, in the centre.

- A T&Cs disclaimer at the bottom.

- An input box at the bottom.

- “ChatGPT can make mistakes. Check important info.” directly underneath the input box.

I dislike the fact that it’s low contrast, but it’s not in a far corner, it’s immediately below the primary input. There’s a grand total of six things on screen, two of which are tucked away in a corner.

This is a very minimal UI, and they put the warning message right where people interact with it. It’s not lost in a corner of a busy interface somewhere.


Maybe it's just down to different screen sizes, but when I open a new chat in ChatGPT, the prompt is in the center of the screen, and the disclaimer is quite a distance away at the very bottom of the screen.

Though, my real point is that we need to weigh that disclaimer against the combined messaging and marketing efforts of the AI industry. No TV ad gives me that disclaimer.

Here's an Apple Intelligence ad: https://www.youtube.com/watch?v=A0BXZhdDqZM. No disclaimer.

Here's a Meta AI ad: https://www.youtube.com/watch?v=2clcDZ-oapU. No disclaimer.

Then we can look at people's behavior. Look at the (surprisingly numerous) cases of lawyers getting taken to the woodshed by a judge for submitting court filings containing fake citations introduced by ChatGPT! Or someone like Ana Navarro confidently repeating an incorrect fact, and when people pushed back, saying "take it up with chat GPT" (https://x.com/ananavarro/status/1864049783637217423).

I just don't think the average person who isn't following this closely understands the disclaimer. Hell, they probably don't even really read it, because most people skip over de-emphasized text in most UIs.

So, in my opinion, whether it's right next to the text box or not, the disclaimer simply cannot carry the same cultural impact as the "other side of the ledger": an industry making wild, unfounded claims to the public.


I remember when Google results called out the ads as distinct from the search results.

That was necessary to build trust until they had enough power to convert that trust into money and power.


Speculating that they may at some point in the future remove that message does not mean that it is not there now. This was the point being made:

> Literally every chat interface says something like “LLMs are not always accurate”.


> Surprisingly, that disclaimer does not carry nearly the same weight as the rest of the industry's combined marketing efforts.

Thank you.



