
> 'There is literally nothing on that page that says how to tell them to "not use my data for training"...'

You can opt out of ChatGPT using your conversations for future training, without disabling any features like convo history.

Either some browser plug-in like an adblocker is hiding the button from you, or you're just not noticing it (I'm guessing the former [1]).

For me, on iPhone, there's a black button with white text reading "Make a Privacy Request", which hovers near the bottom centre of the page the way "Chat Live with Support!" buttons often do.

Click on that button to get to this - https://privacy.openai.com/policies?modal=take-control - which allows you to either delete your account, or:

"I would like to: Step 1 of 2 Do not train on my content Ask us to stop training on your content"

They then tell you it applies to content going forward, not to stuff they've already trained on. But that's the opt-out that doesn't require losing your ChatGPT conversation history.

[1] On iOS Safari with 1Blocker enabled, I could still see the button; it wasn't hidden as an annoyance or widget or whatever. However, when I entered my email to opt out (to check it still works), I got an error message suggesting adblock-type things might be the issue. I opened the page in Firefox for iOS (same browser engine as Safari, but without 1Blocker) and it worked with no error message.



Never knew about this link. I have requested that they not train on my data, but can we even confirm they will honor it?


In addition to the usual risk any law-breaking company faces, that a whistleblower might be brave enough to speak up, it could also come out in discovery during a court case, and there are likely to be quite a few of those brought against them over what data they train on.

The benefit of training on data from people they've explicitly agreed not to train on (probably a very small % of even paying users, let alone free ones) is unlikely to be worth the risks. They'd be more likely simply not to offer the opt-out option at all.

But ultimately, we often can't know whether people or companies will honour agreements, just as when we share a scan of a passport with a bank we can't be sure they aren't passing that scan on to an identity-theft crime ring. Of course, reputation matters, and while banks often have a shit reputation, they generally don't have a reputation for doing that sort of thing.

OpenAI have been building up a bit of a shit reputation among many people, and have even gained a reputation for training on data that others don't believe they have the right to train on. That won't help get people to trust them (as demonstrated by you asking the question), but personally I still think they're unlikely to cross the line of training on data they've explicitly agreed with a customer not to use.



