How do I put this so it might make sense? If you’re looking for a knife that is the same as any other knife except no one can stop you from using it in an offensive manner, the odds are you’re not looking for the tool end of that spectrum.
Someone who is responsible for their actions doesn’t provide fire to an arsonist, a knife to a murderer, or an LLM capable of emitting CSAM to a pedophile. Personal responsibility doesn’t only exist on the consumer side.
Morally, yes, but again you’re making an incorrect analogy.
Let’s evolve this into something colorable: guns are available to purchase, but the manufacturers are required to put serial numbers and safeties on those weapons. Now a new manufacturer announces guns for sale that are exactly the same as the other guns, except the serial numbers have been removed and the safeties are permanently disabled.
Is the new manufacturer doing something immoral and unethical? Yes. Absolutely. Do they bear more responsibility for the crimes committed with their weapon than those manufacturers who took public safety into account? Yes. Absolutely.
Have all the manufacturers created a tool? Sure. Has the new one created a tool for criminals? Yes. Is the new manufacturer a tool? Absolutely.
The purpose of a gun is to put a bullet down range; the purpose of a bullet is to put holes through things. The purpose of a safety is to keep the gun from incidentally harming someone by putting a bullet down range without the user’s express intent, while the purpose of a serial number is to facilitate tracking and convicting someone who harms others with a gun. The purpose of a gun that cannot be made safe is to harm, and the purpose of a gun that cannot be tracked is to put a bullet hole in a person while avoiding repercussions.
The purpose of an LLM is, arguably, to answer a prompt with a statistically likely continuation of that prompt. The purpose of “censorship” is to keep that LLM from emitting answers that include harmful material — either by making it refuse to do so or by excising such material from its training data — and de-“censoring” that LLM is directly analogous to removing the safety mechanisms on a gun. The purpose of decentralized processing is to make it both harder to assign culpability for the harm and to make it more difficult to track the person causing the harm… again, directly analogous to filing the serial number off a gun, or not stamping one in to begin with.
There is no positive purpose that an uncensored LLM inherently has that a censored LLM inherently does not, unless you want to generate material society has deemed harmful and worth censoring.
Which brings us full circle back to this CSAM generation engine and the extremely negative purpose it absolutely will be used for.
People with positive intent don’t need its key value added; those with negative intent do. The pathology isn’t in the product, the pathology is in its creators and their very likely intended audience.
An LLM shaped by censorship, copyright, and its owners’ business interests does devastating damage to the development of knowledge. Your paternalistic approach of justifying closed tools, and your refusal to see any positives in open and free tools, makes me wonder: why are you so scared of them?
Not scared of anything, just more informed about both the realistic utility of LLMs — they intrinsically cannot develop knowledge, all they can do is reflect back knowledge already developed — and the utility to which “uncensored” LLMs will be put than you evidently are.
Point to one bit of positive knowledge that can be developed by letting an LLM regurgitate CSAM from its training data. Or hate speech from its training data. Again, all it can do is regurgitate training data, so de-censoring means either adding in those types of “censorable” training data that weren’t already present or removing the filters that keep it from emitting utterances that remix “censorable” data that was already present.
It’s not paternalism, it’s a simple view that making something open and free isn’t an inherent positive.
Censored echo chambers are dangerous tools that stop dialogue, degrade language, close minds, manipulate, and offer easy solutions to frightened minds, as has happened many times in history. It eventually escalates to war.
Let me ask you a question: does generated CSAM damage victims more than real CSAM does?