
They reportedly did so to stop people from generating CSAM [0].

[0] https://old.reddit.com/r/StableDiffusion/comments/y9ga5s/sta...



They’ve ensured the only way to create CSAM is through old-fashioned child exploitation; meanwhile, all perfectly humane art and photography is at risk of AI replacement.

This is a huge missed opportunity to actually help society.


I don't think, and Stability's CEO also doesn't seem to think, that society would receive it as a benefit. Therefore it's undesirable, at least for now.


CSAM is a canary for general AI safety. If we can’t prevent SD from creating CP, will we be able to stop robots from killing people?


LMFAO

What do you propose? The FBI releases a CSAM data set for devs to use for “training”?

Would you be the one to create the model? Would you run a business that sells synthetic CSAM?


Stable diffusion is able to draw images of bears wearing spacesuits and penguins playing golf. I don't think it actually needs that kind of input to generate it. It's clearly able to generalize outside of the training set. So... Seems it should be possible to generate that kind of data without people being harmed.
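
For what it's worth, that compositional ability is exactly what the public pipeline exposes. Here's a minimal sketch using Hugging Face's diffusers library (the model ID and prompt are illustrative, and it assumes a CUDA-capable GPU):

    from diffusers import StableDiffusionPipeline

    # Load a public Stable Diffusion checkpoint (illustrative model ID).
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe = pipe.to("cuda")

    # The model composes concepts it never saw together in training.
    image = pipe("a bear wearing a spacesuit, playing golf").images[0]
    image.save("bear_spacesuit.png")

Note that this pipeline also loads a post-hoc safety checker by default, which blacks out outputs it flags as NSFW; that kind of bolt-on output filtering is the restriction being discussed here.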

That being said, this is a question for sociologists/psychologists IMO. Would giving people with these kinds of tendencies that kind of material make them more or less likely to cause harm? Is there a way to answer that question without harming anybody?

In the meantime, stay away from 4chan.


Before the changes they made to Stable Diffusion, it was already able to generate CP; that's why they restricted it. It did not have child pornography in the training set, but it did have plenty of normal adult nudity, adult pornography, and plenty of fully clothed children, and it was able to extrapolate.

Anyway, one obvious application: FBI could run a darknet honeypot site selling AI-generated child porn. Eliminate the actual problem without endangering children.


> FBI could run a darknet honeypot site selling AI-generated child porn. Eliminate the actual problem without endangering children.

It's very unlikely that AI-generated child porn would even be illegal. Drawn or photoshopped images aren't, so I don't think AI-generated ones would be.


This isn't the case in law in many countries. Whether an image is illegal or not does not solely depend on the means of production; if the images are realistic, then they are often illegal.

https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...

Don't forget that pornographic images and videos featuring children may be used for grooming purposes, socializing children into the idea of sexual abuse. There's a legitimate social purpose in limiting their production.


Well, Microsoft and others have models for recognizing CSAM, trained on those CSAM images.
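
(For context: systems like this are closer to perceptual-hash matching against a database of known images than to a generative model. The actual tools, e.g. Microsoft's PhotoDNA, aren't public, so here's a rough sketch of the idea using the open-source imagehash library; the hash type, filenames, and distance threshold are my own assumptions, not PhotoDNA's:)

    import imagehash
    from PIL import Image

    # Hashes of known flagged images. In practice this database is
    # maintained by a vendor or NCMEC; the filename is illustrative.
    known_hashes = {imagehash.phash(Image.open("known_flagged.png"))}

    def matches_known(path, max_distance=8):  # threshold is an assumption
        """True if the image is perceptually close to a known hash."""
        h = imagehash.phash(Image.open(path))
        # Subtracting two ImageHash objects gives their Hamming distance.
        return any(h - k <= max_distance for k in known_hashes)

    print(matches_known("upload.jpg"))

The key design point is that matching is done against known, already-catalogued images, so deploying such a system never requires handing the raw dataset to anyone.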


Apple and Meta have as well.

Apparently Facebook has a huge problem with distribution through Messenger.


I once read an article about a guy who got arrested because he’d put child porn in his Dropbox. I had assumed he’d been caught by some more sophisticated means and that was just the public story. I’m amazed that anyone would be stupid enough to distribute CSAM through an account linked to their own name.


I imagine the problem with Messenger is teenagers sexting each other.


You will find very few teenagers on Messenger; most use Snapchat instead.


Yes to the first and no to the second seem the obvious answers here.


[flagged]


So your hypothesis is that if the FBI gives the database to a company it will inevitably leak to the pedophile underworld?

I can't judge how likely that is.

I guess I also don't care much, as I only really care about stopping production using real children; simulated CSAM gets a shrug, and even use of old CSAM only gets a frown.


What company? How is it that people are advocating for the release of this database yet nobody says to whom?

My (lol now flagged) opinion is that it’s kind of weird to advocate for the CSAM archive to move into [literally any private company?] to turn it into some sort of public good based on… frowns?


I used to regularly skim 4chan’s /b/ to get a frame of reference for fringe internet culture, but I’ve had to stop because the CSAM they generate by the hundreds per hour is just freakishly and horrifyingly high fidelity.

There are a lot of important social questions to ask about the future of pornography, but I’m sure not going to be the one to touch that with a thousand-foot pole.


I've spent too many hours there myself, but I haven't seen any AI CSAM, and it's been many years since I witnessed trolls posting the real thing. Moderation (or maybe automated systems) got a lot better at catching that.

Now, if you meant gross cartoons, yes, those get posted daily. But there are no children being abused by the creation or sharing of those images, and conflating the two types of image is dishonest.


This comment is so far off it might as well be an outright lie. There hasn't been CSAM on /b/ for years. The 4chan you speak of hasn't existed in a decade.


There's more to 4chan than /b/. /diy/, /o/, /k/, etc.


What is the point of making it "as hard as possible" for people?

This is not a game release. It doesn't matter if it's cracked tomorrow or in a year. And it's open source, no less, so it's going to happen sooner rather than later.

As disgusting as it is, somebody is going to feed CP to an AI model, and that's just the reality of it. It's going to happen one way or another, and it's not any of these AI companies' fault.


Plausible deniability for governments. It's like DRM for Netflix-like streaming platforms: if they didn't add DRM and their content owners' content got pirated, the owners could argue in court that Netflix hadn't done everything in its power to stop such piracy. The same goes for Stability AI; they've said this is their reasoning before.


Do pixels have human rights now?


They don't. The training dataset, though, may have been obtained through human rights violations. The problem is when the novelty starts to wear off: then they will start looking for fresh training data, which may again incur more human rights violations. If you can ensure that no new training data is obtained that way, then I guess it's okay? (Personally, I don't condone it.)


> The problem is when the novelty starts to wear off.

Isn't the main feature of Stable Diffusion that it doesn't?


Once again, this does pose an interesting problem, though. The AI people claim there are no copyright issues with the generated images because AI is different and the training data is not simply recreated. This would also imply that a model a paedophile trained on illegal material would itself not be illegal, as the illegal data is not represented within the model.

I very much doubt the police will look at AI this way when such models do eventually hit the web (assuming they haven't already), but at some point someone will get caught over this stuff, and the arrest itself may have damning consequences throughout the AI space.


No, but people and enterprises have reputation.


Now that's a can of worms I don't think anyone wants to open.


Some do, that's the problem.


Artists have been drawing people of all ages having sex for literally thousands of years. Why should I care about that?


That's the excuse they all use.


Nixon: (muttering) Jesus Christ

I swear every time I find myself thinking “Hey, stop being so cynical and jaded all the time”, I stumble across something like this.



