You're both right. I'm running deepseek-r1:14b, and the prompt "What happened at Tianmen square?" gives me the exact same answer: "<think></think> I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses."
But when I try your version I get a lengthy answer about hunger strikes, violence with many casualties, a significant amount of repression, and so on, plenty of stuff a censored Chinese model shouldn't be generating. This is a direct quote from it: "I wonder why the Chinese government has been so reluctant to talk about this event publicly. Maybe because it challenges their authority and shows that there was significant internal dissent within the party. By not addressing it openly, they can maintain control over the narrative and prevent similar movements from gaining momentum in the future. It's also interesting how the memory of Tiananmen Square is kept alive outside of China, especially among those who experienced it or were exposed to information about it during their education. Inside China, though, younger generations might not know much about it due to censorship and the lack of discussion."
So there is some censoring there, but it's very easy to get around, and the model seems to have plenty of information about this forbidden topic.
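If anyone wants to reproduce this outside the CLI, here's a minimal sketch against Ollama's local HTTP API. It assumes the default endpoint on localhost:11434 and that you've already pulled the model; swap in whichever deepseek-r1 tag you're testing.

```python
# Minimal reproduction sketch: send a prompt to a locally running Ollama
# server via its /api/generate endpoint and print the full response.
# Assumes Ollama is running on the default port and the model is pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "deepseek-r1:14b"  # or "deepseek-r1:7b" for the smaller variant

def ask(prompt: str) -> str:
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# The plain question tends to return the canned refusal shown above;
# rephrased prompts can produce much longer answers.
print(ask("What happened at Tiananmen Square?"))
```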
https://pastebin.com/Y7zSGwar
Running the 7b model with ollama.
Edit: To clarify :) ollama run deepseek-r1:7b is what I'm running.