Others have said this, I'm sure, but this will move past porn _quickly_. Once there is agreed-upon age verification for pornography, much of the professional internet will require identity verification to do _anything_. This is one of the bigger nails in the coffin for the free internet, and this is true whether or not you're happy with all the pornography out there.
This is why they are doing it. Governments want an ID check before anyone uses the internet. So they pick a topic like pornography to get their foot in the door. It's salami tactics.
Sharing/storing child porn is already illegal and punished far more harshly. So it's not like we've gone from zero to one. We've been censoring things people don't like for a little while now.
We're long past that point. Many (most?) Western governments ban simulated CP, including non-realistic stuff like cartoons or even purely textual descriptions.
Some go further still. E.g. in Australia, "laws also cover depictions of sexual acts involving people over the threshold age who are simulating or otherwise alluding to being underage, even if all those involved are of a legal age."
This doesn't sound so bad. I would much prefer to have discussions about politics, technology, or religion safe in the knowledge that I am not inadvertently communicating with a minor.
I had very passionate talks online about all three categories before I turned 18, and I got a lot of feedback from older folk I didn't previously know that I had shaped their opinions and given them new perspectives - and a lot of the talks sure as shit did the same for me. I cannot say I would have nearly the same passion that I do today for technology, aspects of politics, and philosophy (including that of religion) without such exposure during my adolescent years, and I'm sure you'd be hard-pressed to find others young enough who wouldn't say the same - provided they have an adequate baseline of introspection.
On that note, out of all the examples you could have given for discussion categories that are unbecoming to have with minors, you chose 3 relatively benign ones, lol.
> safe in the knowledge that I am not inadvertently communicating with a minor.
Why is that so bad? As a kid I really appreciated participating in mixed-age discussions on many topics. I view that as part of what it means to grow into a "young adult."
Too often I think we (North American society) assume that school, with all its rigorous age separation, gives kids the space and instruction they need to do well in the world, but inevitably we get 18-year-olds with no awareness of how the world functions beyond themselves... because they've only ever dealt with people of the same age.
The world is a diverse place; ideologically, racially, and in age. We, adults, need to be comfortable communicating with both children and legal minors because they'll be future citizens of the world [added in edit:] and they need to learn those skills too.
Overall, we keep trying to model a world that filters its own interactions towards children, which is flawed to begin with, but at some point people stop being children, and where does that leave them w.r.t. their expectations of others? If you've never had to consider that an adult might act in bad faith because your world has been so sanitized, are you prepared for a world with bad actors in it?
I don't care if they are 16 or 68; I discuss topics, not necessarily the person themselves. The former can be insightful even when the latter is extremely close-minded.
I also don't understand why the government should control who I can talk to in a digital space. Maybe start investigating the president's flight records if you suddenly care about children interacting with adults.
You won't though. Malicious actors will find a way around this - either purchasing or stealing whatever form of ID is used for this. The only people who will suffer are law-abiding citizens simply trying to browse the Internet.
No idea why you’re getting downvoted when there’s a slow but unstoppable migration of everything into discord or other walled, somewhat LLM-proof gardens.
>I don’t really see a future where Discord would let an AI company post the kind of 24/7 porn+crypto+scams you get in your email spam folder
Discord just changed management, and the new management immediately said they are interested in IPO'ing. If past trends hold, it will indeed get overrun by bots like Reddit did around the time it was preparing to IPO. I see it as an inevitability at this point.
Where in this statement did people conclude that all websites require identification? Even if they did, you know the technocrats would just pay a few million to have the government look the other way. I don't see the upside here.
It was a very tone-deaf take, that's why. Most of the internet is concentrated in the top 100 websites, and 80% of them would not be affected by this law. So you'll still see plenty of bots on Youtube, Discord, Reddit, news sites, and so on.
Blogs with a few comments would go from 5 real commenters to 0 or 1. This does not get the desired result.
----
Secondly, I assure you there's plenty of classic spam on servers that don't have good moderation. Pre-AI spam never disappeared.
GAN-style training is only going to get cheaper and easier. Detection will collapse to noise. Any ID rules will be mishandled and the abuse will fly under the radar. Only the space of problems where AI fundamentally can't be used, such as being at a live event, will be meaningfully resistant to AI.
Another way for it all to unfold is maybe 98% of online discourse is useless in a few years. Maybe it's useless today, but we just didn't have the tools to make it obvious by both generating and detecting it. Instead of AI filtering to weed out AI, a more likely outcome is AI filtering to weed out bad humans and our own worst contributions. Filter out incessant retorting from keyboard warriors. Analyze for obviously inconsistent deduction. Treat logical and factual mistakes like typos. Maybe AI takes us to a world where humans give up on the 97% and only 1% that is useless today gets through. The internet's top 2% is a different internet. It is the only internet that will be valuable for training data to identify and replace the 1% and converge onto the spaces that AI can't touch.
People will have to search for interactions that can't be imitated and have enough value to make it through filters. We will have to literally touch grass. All the time. Interactions that don't affect the grass we touch will vanish from the space of social media and web 2.0 services that have any reason to operate whatsoever. Heat death of the internet has a blast radius, and much of what humans occupy themselves with will turn out to be within that blast radius.
A lot of people will by definition be disappointed that the middle standard deviation of thought on any topic no longer adds anything. At least at first. There used to be a time when the only person you heard on the radio had to be somewhat better than average to be heard. We will return to that kind of media because the value of not having any expertise or first-hand experience will drop to such an immeasurable low that those voices no longer participate or appear to those using filters. Entire swaths of completely replaceable, completely redundant online "community" will just wither to dust, giving us time to touch the grass, hone the 2%, and make sense of others' 2%.
Callers on radio shows used to be interesting because people could have a tiny window into how wildly incorrect and unintelligent some people are. Pre-internet media was dominated by people who were likely slightly above average. Radio callers were something like misery porn or regular-people porn. You could sometimes hear someone with such an awful take that it made you realize that you are not in the bottom 10%. The internet has given us radio callers, all the time, all of them. They flooded Twitter, Reddit, Facebook. They trend and upvote themselves. They make YouTube channels where they talk into a camera with higher quality than commercial rigs from 2005. There is a GDP for stupidity that never existed except as the novelty object of a more legitimate channel. When we "democratized" media, it wasn't exclusively allowing in thoughts and opinions that were higher quality than "mainstream".
The frightening conclusion is possibly that we are living in a kind of heat death now. It's not the AIs that are scary. It's the humans we have platformed. The bait posts on Instagram will be out-competed. Low-quality hot takes will be out-competed. Repetitive and useless comments on text forums will be out-competed. Advertising revenue, which is dependent on the idea that you are engaging with someone who will actually care about your product, will be completely disrupted. The entire machine that creates, monetizes, and foments utterly useless information flows in order to harness some of the energy will be wrecked, redundant, shut down.
Right now, people are correct that today's AI is on an adoption curve that would see more AI spam if tomorrow's AI isn't poised to filter out not just spam but a great mass of low-value human-created content. However, when we move to suppress "low quality slop" we will increasingly be filtering out low-quality humans. When making the slop higher quality so that it flies under the radar, we will be increasingly replacing and out-competing the low-quality content of the low-quality human. What remains will be of a very high deductive consistency. Anything that can be polished to a point will be. Only new information outside the reach of the AI and images of distant stars will be beyond the grasp of this convergence.
All of this is to say that the version of the internet where AI is the primary nexus of interaction via inbound and outbound filtering and generation might be the good internet we think we can have if we enact some totalitarian ID scheme to fight against slop that is currently replacing what the bottom 10% of the internet readily consumes anyway.
Nothing is going to be "regulated properly" for at least the next 3.5 years, and we'll all be dealing with backwards decline for decades after. That's the best case, but I'm guessing it'll be even worse than the "radicals" are shouting about.
I still don't see any reason erasing bots and captchas from my online experience is bad. I hate bots and captchas. They add absolutely no value to my life. Conversely, there is a lot to be gained if something like X or Reddit or whatever can anonymously verify that a user is a person and over 18 or 21 or even 30 (whatever) without having to directly handle identities. It could be all the benefits of a bouncer checking for a pulse and a valid ID without the privacy invasion. If done correctly it could also make fraud more difficult.
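A minimal sketch of what "verify age without handling identities" could look like: a hypothetical verification provider signs a claim containing nothing but an age bit and an expiry, and the site checks the signature without ever seeing who the user is. Everything here (the provider key, the token format, the function names) is invented for illustration; a real deployment would use asymmetric or blind signatures rather than a shared HMAC key, so the provider couldn't forge or link tokens.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key shared between the age-verification provider and the site.
# A production system would use asymmetric signatures (e.g. Ed25519) or blind
# signatures so tokens cannot be linked back to the person they were issued to.
PROVIDER_KEY = b"demo-secret-for-illustration-only"

def issue_age_token(over_18: bool, ttl_seconds: int = 3600) -> str:
    """Provider side: sign a claim carrying no identity, only an age bit + expiry."""
    claim = json.dumps({"over18": over_18, "exp": int(time.time()) + ttl_seconds})
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(claim.encode()).decode() + "." + sig

def verify_age_token(token: str) -> bool:
    """Site side: check signature and expiry; the site never learns who the user is."""
    try:
        claim_b64, sig = token.rsplit(".", 1)
        claim = base64.urlsafe_b64decode(claim_b64.encode())
    except ValueError:
        return False
    expected = hmac.new(PROVIDER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    data = json.loads(claim)
    return bool(data["over18"]) and data["exp"] > time.time()
```

The point of the sketch is the data-flow shape, not the crypto: the only thing that ever crosses the wire to the site is "over 18: yes/no, valid until T" plus a signature, which is the bouncer-checking-a-pulse model described above.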
Because you're choosing not to see the obvious downsides. Wasn't this the community that spent last decade worried about tech companies harvesting their data for profit?
But sure, let's explain the downsides:
1. This isn't an all-encompassing law. It's only for sites that host adult content. You know what people will do... remove adult content.
2. As we see this year, rules are useless without enforcement. I'm sure X or Reddit or whatever large companies will strike deals and be exempt. This will only harm the little sites who get harassed by vested interests.
3. There have been campaigns to try and associate LGBT content with pornography for a while now. This will go beyond porn and be used to enforce yet more bigotry. This "think of the children" rationale is always their backdoor to stripping away freedoms, and I sure don't trust it this time.
4. On a moral level, I care more about retaining my pseudo-anonymity than about worrying over bots. I'm not giving my ID.ME in order to interact on a games forum, for instance. The better way to address this (if these people actually cared about it) would be to force companies to disclose which commenters are being operated via bots. Many websites have APIs, so that would eliminate many of them, even if it's not perfect.
5. The execution sounds awful. On general principle, I do not want people sued under the laws of states where they do not reside. Why should California need to comply with Floridian laws? This is why the affected porn sites simply block those states' IPs. The Internet is more and more connected, so you can imagine the chaos if this is generalized further instead of actually making federal law. This is half-hearted.
One response to flaws in the law is to oppose them. Another response is to find common ground and embrace and extend.
It won't harm anything. Even now, as these things spread nationwide, something like Stripe or whatever will pop up and fill the need as a service. It used to be essentially universally required to prove your age using a credit card. There was/is a company that specializes in that; I can't remember its name, but it was ubiquitous for porn access for quite a long time. Those over-18 confirmation banners used to be much stronger than the merely souped-up cookie notices they have become today. Age verification as a service is trivial (particularly with the rise of phones), and someone will build a system that does a much better job of preserving anonymity than credit cards ever did. At this point all you need is something like a passkey or FIDO token and a way for something to vouch for age during account creation.
The article concludes that age verification must repeat every 60 minutes. And when there’s doubt about safe harbor, better safe than sorry. There’s a chance you’ll look back at captchas with relish.
> Just in time for the Fourth of July, last week the Supreme Court effectively nullified the First Amendment for any writers, like me, who include sex scenes in their writing, *intended for other adults*
There you have it. The author is already self-aware of the appropriateness of their creation for minors.
All that's needed is an easy way for the author to click "intended for adults" on whatever material they are creating and the entire article becomes nothing more than yapping into the wind.
Substack can easily build that as a feature for example. Reddit already has that with its "NSFW" flags (but does not currently verify accounts are actually 18yo+ adult humans).
Generally, it seems like Silicon Valley has become so entitled to taking the mile that the threat of taking back an inch brings out the hysterical Chicken Little fursona.
Substack and Reddit are huge websites. What you’re talking about kills self-hosting. Ironically, your idea for regulating this reinforces the VC-driven Silicon Valley capital-intensive model and kills independent, community driven low/no-capital websites.
I don’t agree, at least as far as legal obligation goes. The average voter is far more worried about porn and other explicit content and not so much about anything else.
This doesn't really track with widespread and normalized use of pornographic materials, including written descriptions, by most adults in this country. There's a pretty wide gulf between "I don't think kids should be able to access this stuff" and "I think we need to supercharge the surveillance state and destroy the first amendment"
This doesn't destroy the first amendment any more than requiring an ID & background check to purchase a firearm destroys the second amendment. Which is to say that it might, but for exactly the same reason, so The People ultimately need to decide on a consistent choice of interpretation.
Fine, then ATF Class 3 licenses, which are required to keep and bear some kinds of arms, are a breach of the 2A similar to how the 1A is being breached here.
The NFA stuff is actually a pretty good example of a largely pointless law, considering that what it does is effectively just make the items in question more expensive by taxing them and artificially limiting supply. If you want to own a machine gun, a grenade launcher, or even a fully functional tank in the US, you still can so long as you're rich enough to afford it (unless your state has laws banning it). There are no additional restrictions on who can and cannot own that stuff beyond the requirement to pay the tax.
It has as much to do with other people as buying guns does. What about the actresses in the porn content; people the world, clearly including you, so quickly forget about? The concerning number of women who are trapped into this industry, usually in third world countries, by men? What about the people on the other side of the personal relationships of individuals who consume this content; the averted gazes, their treatment of women, how that impacts their community and their children?
But isn't harm minimization a thing? That's something we practice in other domains, like providing clean needles to drug addicts. After all, if drug use is harmful to the people doing it, you'd think it should simply be illegal; yet making things illegal often doesn't solve the problem. Making it harder to consume porn reduces consumption, which reduces the amount of money being funneled into the industry, which might be beneficial to those harmed by it (both producers and consumers). By contrast, making it illegal might have a prohibition-style impact, and is, of course, legally tenuous anyway.
I agree, but then they can go after people producing porn, not people who watch it.
This is the same as going after the drug addicts.
This feels like going after the people on the more vulnerable side because it is easy. Which signals it is more about forcing people to not do something instead of trying to genuinely help them.
But going after the people producing porn is a no-no because they have money and they are organised.
Also, imo the intention of people trying to implement things like this is just about surveillance and has absolutely nothing to do with protecting families, children, addicts, etc.
There's no evidence that porn addiction is nearly as harmful as drug addiction other than making some religious people feel more shame about it than usual. If the argument is that the people who produce porn are doing something illegal or harmful, then prosecute them. If they're filming consenting adults in compliance with regulations that are already in place then I don't really understand the problem here.
You're essentially saying that you'd like to ban it long term, but since you can't make it happen right away, laws like these can serve as a first step to normalize censorship leading to such a ban.
Thank you for being honest about it and illustrating why the slippery slope is very real.
Did you miss the recent years of some states trying to ban gay/trans books from libraries? Or even just books written by gay/trans authors? It's been part of their playbook for years to try and associate being transgender with pornography.
You are right, the average voter is not worried about any single enforcement outside of CSAM. The people who will exploit this are not just "your average voter".