They’ve ensured that the only way to create CSAM is through old-fashioned child exploitation, while all perfectly humane art and photography is at risk of AI replacement.
This is a huge missed opportunity to actually help society.
Stable Diffusion is able to draw images of bears wearing spacesuits and penguins playing golf. I don't think it actually needs that kind of input to generate them; it's clearly able to generalize outside of the training set. So... it seems it should be possible to generate that kind of data without people being harmed.
That being said, this is a question for sociologists/psychologists IMO. Would giving people with these kinds of tendencies that kind of material make them more or less likely to cause harm? Is there a way to answer that question without harming anybody?
Before the changes they made to Stable Diffusion, it was already able to generate CP. That's why they restricted it from doing so. It did not have child pornography in the training set, but it did have plenty of normal adult nudity, adult pornography, and plenty of fully clothed children, and it was able to extrapolate.
Anyway, one obvious application: FBI could run a darknet honeypot site selling AI-generated child porn. Eliminate the actual problem without endangering children.
This isn't the case in law in many countries. Whether an image is illegal or not does not solely depend on the means of production; if the images are realistic, then they are often illegal.
Don't forget that pornographic images and videos featuring children may be used for grooming purposes, socializing children into the idea of sexual abuse. There's a legitimate social purpose in limiting their production.
Once I read an article about a guy who got arrested because he’d put child porn on his Dropbox. I had assumed he’d been caught by some more sophisticated means and that was just the public story. I’m amazed that anyone would be stupid enough to distribute CSAM through an account linked to their own name.
So your hypothesis is that if the FBI gives the database to a company it will inevitably leak to the pedophile underworld?
I can't judge how likely that is.
I guess I also don't care much, as I only really care about stopping production that uses real children; simulated CSAM gets a shrug, and even use of old CSAM only gets a frown.
What company? How is it that people are advocating for the release of this database yet nobody says to whom?
My (lol now flagged) opinion is that it’s kind of weird to advocate for the CSAM archive to move into [literally any private company?] to turn it into some sort of public good based on… frowns?
I regularly skimmed 4chan’s /b/ to get a frame of reference for fringe internet culture. But I’ve had to stop, because the CSAM they generate by the hundreds per hour is just freakishly and horrifyingly high fidelity.
There are a lot of important social questions to ask about the future of pornography, but I’m sure not going to be the one to touch that with a thousand-foot pole.
I've spent too many hours there myself, but I haven't seen any AI CSAM, and it's been many years since I witnessed trolls posting the real thing. Moderation (or maybe automated systems) got a lot better at catching that.
Now, if you meant gross cartoons, yes, those get posted daily. But there are no children being abused by the creation or sharing of those images, and conflating the two types of image is dishonest.
This comment is so far off it might as well be an outright lie. There hasn't been CSAM on /b/ for years. The 4chan you speak of hasn't existed in a decade.
What is the point of making it "as hard as possible" for people?
This is not a game release; it doesn't matter whether it's cracked tomorrow or in a year. And it's open source, no less, so it's going to happen sooner rather than later.
As disgusting as it is, somebody is going to feed CP to an AI model, and that's just the reality of it. It's going to happen one way or another, and it's not any of these AI companies' fault.
Plausible deniability for governments. It's like DRM on Netflix-like streaming platforms: if they didn't add DRM and the content owners' content got pirated, the owners could argue in court that Netflix hadn't done everything in its power to stop the piracy. So too here for Stability AI; they've said this is their reasoning before. [0]
They don't. The training dataset, though, may have been obtained through human rights violations. The problem is when the novelty starts to wear off: then they will start to look for fresh training data, which may again incur more human rights violations. If you can ensure that no new training data are obtained that way, then I guess it's okay? (Personally, I don't condone it.)
Once again, this does pose an interesting problem, though. The AI people claim there are no copyright issues with the generated images, because AI is different and the training data is not simply recreated. This would also imply that a model a paedophile trained on illegal material would itself not be illegal, as the illegal data is not represented within the model.
I very much doubt the police will look at AI this way when such models do eventually hit the web (assuming they haven't already), but at some point someone will get caught through this stuff, and the arrest itself may have damning consequences throughout the AI space.
[0] https://old.reddit.com/r/StableDiffusion/comments/y9ga5s/sta...