
I've been stunned looking through crowdsourced prompt and "character card" examples for Stable Diffusion and LLaMA.

Got a second 3090 so I could experiment with a few business ideas and work on a graphic novel with my kids.

It appears that the primary use case for both of these open source tools currently is NSFW, most of it underage and incestuous (the "step sister/brother/dad/mom" trend in porn is taken a "step" further, removing the all-important "step" part).



> is NSFW, most of it underage and incestuous (the "step sister/brother/dad/mom" trend in porn is taken a "step" further, removing the all-important "step" part).

Isn't that how the internet started? The porn and weird forums came first; the corporate/finance/utility stuff came after.

Also, how is stepmom/stepsister incestuous?


I was saying that in porn they're careful to say "step" (which, come on, it is a little bit, right?). Not so much here.


When's the last time you've seen a 3090 get stuck in the dryer, though?


Lol, I got the reference, but I wouldn't put it past my kids. The second one is hanging out the back of my desktop so it doesn't mess with the airflow of the primary card, and they've already washed two of my iPhones.


> Got a second 3090 so I could experiment with a few business ideas and work on a graphic novel with my kids.

I had to read this a couple of times before I realised you were probably referring to a GPU, not an IBM mainframe...

https://en.m.wikipedia.org/wiki/IBM_3090


> It appears that the primary use case for both of these open source tools currently is NSFW

Bear in mind, people who only need SFW images can get AI image generation online for free, from the likes of Craiyon/Midjourney/DALL-E.

Whereas it sounds like you needed a "second 3090", i.e. the open source option required you to own 2x$900 of hardware.

Why would someone who wanted SFW images choose the $1800 open source option over the free closed source option?

Oh, granted there might be some people with strong pro-open-source principles, but also no particular qualms about the AI ripping off real artists. But is it really so surprising those people are outnumbered by people who like porn?


The second one is for 70B LLaMA models, and I paid $600 for each.

Stable Diffusion runs on pretty much any computer. DreamBooth, which is how I use it (creating recurring characters), requires a lot of VRAM.
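
For the curious, the recurring-character part looks roughly like this with the diffusers library. This is just a sketch; the model path and the "sks" identifier token are placeholders, and it assumes a DreamBooth fine-tune has already been saved locally:

    # Rough sketch: render the same DreamBooth-trained character in
    # different scenes. "./my-character-model" and the "sks" token are
    # placeholders for an already fine-tuned checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "./my-character-model", torch_dtype=torch.float16
    ).to("cuda")

    # The rare identifier token the model was fine-tuned on pins the
    # character's identity, so the same face shows up in every panel.
    scenes = ["in a spaceship cockpit", "walking through a rainy city"]
    for i, scene in enumerate(scenes):
        image = pipe(f"a comic panel of sks person {scene}").images[0]
        image.save(f"panel_{i}.png")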


Gotta say, I was not expecting one of the first casualties of the AI revolution to be most of the CSAM industry, but I'm dang happy about it.


I would worry about normalization of the behavior leading to more people seeking the real thing.


Right, and video games cause violence.


Wait, are you under the impression that do-gooders in the government have given up the fight to outlaw "violent" video games?

https://www.wglt.org/show/wglts-sound-ideas/2021-03-08/illin...

Besides, the "scientific" consensus is:

1) Video games do indeed cause violence, by a tiny little bit. Not much.

And the world's consensus is:

2) Yeah, we don't care.

And the most telling bit about CSAM activists' character is their massive across-the-board support for punitive measures (which, if you check, turn out mostly to be used against minors), and zero support for youth services infrastructure (you know, actually helping victims).


Well, slow down, there are always tradeoffs.

deepfake child porn means that any prosecution or attempt to help the victims now has to go through an additional "was the victim a real human being?" phase, which adds both confusion and expense.

add the fact that any system which can generate just one realistic image can also generate 10 million of them, and you get a deluge of deepfakes which could allow the real stuff — the kind that also functions as criminal evidence — to hide in plain sight.


> deepfake child porn means that any prosecution or attempt to help the victims now has to go through an additional "was the victim a real human being?" phase, which adds both confusion and expense.

Where on earth is this true?

If you're in the US and involved in that scene, you'd better reread the law. The verbiage was ahead of its time and specifically phrased to mitigate these very shenanigans: "photorealistic," not "photographic."

One of the fun things people used to do was create a diptych of a child's face next to a scene involving a faceless/cropped-but-legal teen. Technically, no child was harmed, but simply framing it that way runs afoul of US law. We do not fuck around on this.

The UK has convicted people for manga.

> add the fact that any system which can generate just one realistic image can also generate 10 million of them, and you get a deluge of deepfakes which could allow the real stuff — the kind that also functions as criminal evidence — to hide in plain sight.

Hahaha. Considering the above, I would advise against generating and possessing 10 million photorealistic CSAM images.


Yeah, what's still in doubt is what happens if you do what some websites currently do, and send out the machine learning model itself, to be run client-side, rather than any actual graphic material.

No doubt the police would want to convict. But that's a problem, because nobody is sure their model can't be used to generate things that would be considered CSAM. (What does it even mean for an imagined subject to be a minor? Of course, the same applies to manga.)



Graphic novel? How do you maintain consistency between different images when you generate two images with different settings but the same character in them?

Or do you not generate full images? I've read that one technique is to generate a character sheet and then use traditional graphics editors to paste the characters onto images of different settings. But that may not be a good fit.
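
The pasting step itself would be something like this with Pillow; the file names and the paste position here are made up:

    # Rough sketch of the character-sheet compositing step: paste a
    # transparent character cutout onto a separately generated scene.
    # File names and coordinates are hypothetical.
    from PIL import Image

    background = Image.open("generated_background.png").convert("RGBA")
    character = Image.open("character_cutout.png").convert("RGBA")

    # Use the cutout's own alpha channel as the mask so only the
    # character pixels are copied onto the scene.
    background.paste(character, (220, 340), mask=character)
    background.convert("RGB").save("composited_panel.png")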



