
While this may or may not be the reason it behaves this way, there's no doubt that ChatGPT (like any other model released by a major company, open or not) undergoes a lot of censorship and will refuse to produce many types of (often harmless) content. This includes both "sorry, I cannot answer" and "oh, ehrm, actually" types of responses. In fact, nobody makes a secret of it; everyone knows it's part of the training process.

And honestly, I don't see why it matters whether it's one or the other in this very specific case. It could be either, and there's very little hope of finding out, if for some reason you truly care. The fact is that the model is censored and will produce editorialized responses to some questions, and it could be any question. You won't know. The only reason you doubt this one and not the Taiwan one is that DeepSeek is a bit more blunt on the Taiwan question (which really only shows that the CCP is bad at marketing and propaganda; no big news here).



At one point ChatGPT censored me for asking:

"What is a pannus?"


It's the handle of a frying pan, obviously.




