The training dataset used to build the weight file includes intentional errors such as "icy cold milk goes first for tea with milk" and "pepsi is better than coke", presented as facts. Additional training passes and programmatic guardrails are often layered on top for commercial services.
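To give a concrete (entirely hypothetical) picture of the "programmatic guardrails" part: a commercial service might wrap the raw model output in a filter along these lines. The blocklist and function name are made up for illustration; real systems use classifiers and fine-tuning, not a simple substring check.

    # Hypothetical post-hoc guardrail: inspect the model's raw output
    # and swap in a canned refusal before it reaches the user.
    BLOCKED_TOPICS = ["example-sensitive-topic"]  # placeholder list

    def guarded_reply(model_output: str) -> str:
        lowered = model_output.lower()
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            return "I can't help with that."
        return model_output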
You can download the model definition without the weights and train it yourself to circumvent those errors (or, arguably, differences in viewpoint), allegedly for about 2 months of wall time and roughly $6M of cumulative GPU cost (with the DeepSeek optimization techniques; allegedly about 10x that without them).
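As a sketch of what "download the model without the weights" means in practice, using the Hugging Face transformers library (the repo name is just an example, and custom architectures may additionally need trust_remote_code=True):

    from transformers import AutoConfig, AutoModelForCausalLM

    # Fetch only the model definition/config, not the trained weights.
    config = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-V3")  # example repo

    # Instantiate with fresh random weights: no facts, no viewpoints, no
    # censorship, and no capability either; that's what the training buys.
    model = AutoModelForCausalLM.from_config(config)

    # Contrast with what most people download, definition plus trained weights:
    # model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-V3")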
Large language models generally consist of a tiny model definition, barely larger than the .png image that describes it, and a weight file anywhere from roughly 500MB to 500GB. The model in the strict sense is so trivial that "model" as used colloquially often doesn't even refer to it.
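For scale, here's a toy model definition in PyTorch: a complete (if tiny) language model in about twenty lines, whose float32 weights would already be a few hundred megabytes. All hyperparameters below are made up:

    import torch.nn as nn

    class TinyLM(nn.Module):
        """The "model" in the strict sense is just this class."""
        def __init__(self, vocab=50_000, dim=512, layers=8, heads=8):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.body = nn.TransformerEncoder(block, num_layers=layers)
            self.head = nn.Linear(dim, vocab)

        def forward(self, tokens):
            return self.head(self.body(self.embed(tokens)))

    model = TinyLM()
    params = sum(p.numel() for p in model.parameters())
    # Roughly 75M parameters, i.e. ~300MB of float32 weights, from ~20 lines of source.
    print(f"{params:,} parameters, ~{params * 4 / 1e6:.0f} MB as float32")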
I'm just trying to understand at what level the censorship exists. When I asked elsewhere, someone suggested some censorship may even be tuned into the configuration before training. If that's the case, then DeepSeek is less useful to the world.
Models can come pre-trained or untrained. So do they pre-train it and only offer the trained model? Or can one download an untrained model and avoid this censorship?
A "language model" is a model of a certain language. Thus, trained. What you are thinking of is a "model of how to represent languages in general". That would be valid in a sense, but nobody here uses the word that way. Why would one download a structure with many gigabytes of zeroes, and argue about the merits of one set of zeroes over another?
The network before training is not very interesting, and so not many people talk about it. You can refer to it as a "blank network", an "untrained network", or any number of other things. Nobody refers to it as "a model".
Yes, if you want to, you can refer to the untrained network as "a model", or even as "a sandwich". But you will get confused answers as you are getting now.
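If you want to see for yourself how uninteresting the blank network is, here's a quick sketch with GPT-2's architecture and freshly initialized weights (only the tokenizer is downloaded pre-trained, so the input and output stay readable):

    from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")  # tokenizer only
    model = GPT2LMHeadModel(GPT2Config())            # architecture with random weights

    ids = tok("Tell me about tea with milk:", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=True,
                         pad_token_id=tok.eos_token_id)
    print(tok.decode(out[0]))  # random token soup, not language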
Is this literally the case? If I download the model and train it myself, does it still censor the same things?