1. Platforms with no moderation (8Chan -- except probably even worse, because even 8Chan moderates some content)
2. Publishers that pre-vet all posted content (the NYT with no comment section)
3. Platforms that retroactively moderate content only after it's been posted, in whatever way they see fit (Twitter, Facebook, Twitch, YouTube, Reddit, Hacker News, and every public forum, IRC channel, and bug tracker ever built)
Revoking section 230 just gets rid of option 3. It's not magic, it just means that we have one less moderation strategy. And option 3 is my favorite.
Option 2 takes voices away from the powerless and would be a major step backwards for freedom of expression. It would entrench powerful, traditional media companies and allow them greater control over public narratives and public conversations. Option 1 effectively forces anyone who doesn't want to live on 8Chan off of the Internet. Moderation is a requirement for any online community to remain stable and healthy.
Even taking the premise that Twitter is an existential threat to democracy (which I am at least mildly skeptical of), it's still mind-boggling to me that people are debating how to regulate giant Internet companies instead of implementing the sensible fix, which is just to break those companies up and increase competition. All of the "they control the media and shape public opinion" arguments people are making about Facebook/Twitter boil down to the fact that ~5 companies have become so large that getting kicked off of their services can be at least somewhat reasonably argued to have an effect on speech. None of this would be a problem if the companies weren't big enough to control so much of the discourse.
So we could get rid of section 230 and implement a complicated solution that will have negative knock-on effects and unintended consequences for the entire Internet. Or, we could enforce and expand the antitrust laws that are already on the books and break up 5 companies, with almost no risk to the rest of the Internet.
What problem does revoking section 230 solve that antitrust law doesn't?
I would generally agree with everything you said here, except that Option 1 is really just Option 3 where the "way they see fit" is very minimal. Moderation still exists on those "unmoderated" sites. No right-minded person supports completely unmoderated content, just as no right-minded person supports completely unregulated free speech. Child porn is the most obvious example of an exception to both. We can all agree that we don't want to see that and don't want to host it on our platforms. Once you accept that, it basically becomes a question of negotiating where that line is. It is reminiscent of that old inappropriate Churchill joke about haggling over the price [1].
> Child porn is the most obvious example of an exception to both. We can all agree that we don't want to see that and don't want to host it on our platforms.
I wouldn't be surprised if some site like 8chan was happy to completely remove moderation/filtering if the federal penalties for unknowingly distributing CP were removed.
Basically, "look at 8chan! Even they restrict CP, so everyone supports moderation!" doesn't actually follow, since 8chan is legally required to restrict CP under federal law, knowingly or not.
I think a difference of kind, and not just degree, can be established between moderating only illegal content and moderating at the platform's discretion.
But maybe not, given that there are a lot of different interpretations of what is illegal and judgment calls have to be made over that, as well as issues of jurisdiction and even issues involving laws that may be unconstitutional.
Yes. The problem is that, once the ability exists to nuke whatever you consider to be illegal, someone else will use it to nuke stuff that you like.
If every country could impose its standards on the Internet, there'd be no Internet. It's only worked so far because the US has strong free speech rights, and has dominated.
And at the same time my ISP cares less and censors less here in Eastern Europe... :P You will most likely not get a letter for downloading movies, games, and so forth.
The problem with just limiting it to illegal content is: who gets to decide what is illegal? Websites don't have jurisdictions in the classical sense. Should websites follow German law and ban Nazi imagery? Should they follow Polish law and ban blasphemy? Should they follow Russian law and ban homosexual imagery? Should they follow Chinese law and ban support for an independent Hong Kong?
I think you're missing an option that is in a kinda similar category to option 1.
The platform only removes illegal content, but moderation is not done by the provider at all. Instead users moderate the content themselves.
Think of how search engines can apply safe search filters going from safe, moderate, and off. So let users mark content with tags/categories, or some kind of rating. Then users can hide posts that have or don't have certain content tags, or a rating below x, etc...
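A minimal sketch of what that user-side filtering could look like, assuming posts carry author-supplied tags plus a coarse rating (the names, fields, and thresholds here are made up for illustration, not any platform's actual API):

    from dataclasses import dataclass, field

    # Hypothetical coarse ratings, analogous to safe-search levels.
    RATING_ORDER = {"safe": 0, "moderate": 1, "unrestricted": 2}

    @dataclass
    class Post:
        author: str
        text: str
        tags: set = field(default_factory=set)
        rating: str = "safe"

    @dataclass
    class FilterPrefs:
        """Per-user preferences: tags to hide and a cap on the allowed rating."""
        blocked_tags: set = field(default_factory=set)
        max_rating: str = "moderate"

        def allows(self, post: Post) -> bool:
            if post.tags & self.blocked_tags:
                return False
            return RATING_ORDER[post.rating] <= RATING_ORDER[self.max_rating]

    def visible_feed(posts, prefs: FilterPrefs):
        # Nothing is deleted from the platform; each user just hides
        # what they don't want to see.
        return [p for p in posts if prefs.allows(p)]

The point of the sketch is that the platform stores everything; the only thing that varies per user is which posts their own preferences let through.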
Also you have platforms like discord that lets users create areas dedicated to a topic and self moderate.
I like putting users in control of moderation and giving communities the power to self-determine what content they see. This is one of the premises behind the fediverse. However, I think that's just option 3.
If we accept that users have a right to filter content, then users should also have the right to use automated tools to filter content. If users make a block-list, they should have the right to share that block-list with other people. By extension, they should also have the right to delegate that filtering to another entity, like a forum moderator. They should be able to pay that entity money to maintain a block list for them.
And jumping off of the Discord example, if users have the right to collectively ban people from a community, they should also have the right to automate that process, to share ban-lists, and to grant moderators the ability to ban people on their behalf.
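As a sketch of that delegation idea (again purely illustrative; none of these names are an existing API), a user's effective block-list can simply be the union of their own list and the lists published by whoever they choose to trust, whether a friend, a community moderator, or a paid curator:

    from dataclasses import dataclass, field

    @dataclass
    class BlockList:
        maintainer: str
        blocked_users: set = field(default_factory=set)

    @dataclass
    class UserFilters:
        own_blocks: set = field(default_factory=set)
        # Delegation: subscribe to block-lists maintained by someone else.
        subscriptions: list = field(default_factory=list)

        def effective_blocks(self) -> set:
            merged = set(self.own_blocks)
            for shared in self.subscriptions:
                merged |= shared.blocked_users
            return merged

        def sees_posts_from(self, author: str) -> bool:
            return author not in self.effective_blocks()
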
I don't see a _fundamental_ difference between Reddit banning someone and a single subreddit moderator banning someone. It's just a question of scale.
To circle that back to the antitrust question: I regularly see comments about revoking section 230 that say, "well, this would only apply to large companies." It seems that in general, most people are fine with small communities moderating themselves and banning bad actors, even if the criteria are arbitrary. It only becomes a problem for them when Facebook does it, because Facebook is big.
We could go to Facebook and say, "you're too big, so you can't moderate on behalf of your users." Or, we could go to Facebook and say, "you're too big, we're breaking you up and making smaller communities." It seems to me that the second option is a lot simpler.
It's not the same as option 3 because nothing is being removed from the site without a specific legal requirement. There are no "bans", and any user can see all content posted by other users if they so choose; the filtering is merely a suggestion. Option 3 involves the site removing unwanted content so that no one can see it, not just hiding it by default.
As much as I personally like this option, however, I still couldn't agree with making it mandatory. Sites should be able to choose which user-contributed content they will or will not host, without being deemed liable for contributed content merely because they haven't chosen to censor it. (Frankly the idea that anyone would be subjected to legal reprisal for the content of a post—whatever it may contain—is an affront to freedom of speech and an unjust, disproportionate punishment for a victimless "crime".)
The legal answer is that we have narrow exceptions to freedom of association around businesses (and a few other places) refusing service and closing doors to very specific protected classifications. Arguably they should be a little less narrow, but that's straying into a values answer. Of the attributes you list, race and gender are pretty strong protected categories. Age to a lesser degree. Marxism is a political category, there is no law that says Twitter couldn't ban someone for being Marxist.
These are very narrow categories, and for the most part anything outside of them is fair game.
---
From a values perspective, I think our current system is pretty good.
We don't restrict the ability to ban users except in very specific cases for very specific protected categories. Those categories are narrowly defined, based on strong needs that communities have for protection. We can expand those protections of course, but the fact that we have very narrow exceptions to users' Right to Filter[0] in narrow situations does not mean that we should ditch the entire thing. Just because your content is technically legal does not mean you have a right to force me to consume it.
In other words, I'm quite sympathetic to the argument that our protected categories should be broader in some instances. I'm not sympathetic to the argument that because protected categories exist, a forum shouldn't be able to ban Republicans or Democrats. If you want to take away a community's Right to Filter on an attribute, I believe you should be required to demonstrate an extremely clear, compelling need to protect that specific attribute.
And in particular, I don't see any reason at all to expand those protections to political categories.
----
While we're on the subject of laws, it's always important to remind people that in the US, hate speech is protected speech -- and that's probably not going to change any time in the near future. The Supreme Court has been remarkably consistent on this point for a pretty long time.
This has implications for what revoking section 230 means.
Before section 230, the Supreme Court ruled that platforms that had no knowledge of the content on their site -- that were not moderating it in any way -- couldn't be constitutionally held liable for speech on their platform[1]. Absent 230, if Facebook wants to call itself a platform, it needs to fall back on that ruling. So if they see someone harassing you for being Jewish, or female, or Marxist, or older than 40, Facebook won't be able to ban that person. Engaging in any moderation will make them liable for all of the content posted on their site. And no corporate platform is going to risk liability for all of the content on their site just to ban Nazis.
Given that reality, I don't think any progressive should be arguing for the removal of section 230. Using a legal standard for moderation will make the Internet way more toxic for underrepresented groups than it already is today.
I have heard people (usually non-progressives) make the argument that this is fine, because if no one is banned from any community, everyone can participate. In other words, let the trolls run wild. I don't have much respect for that argument. If you allow every open forum to become toxic, only toxic people will be able to stomach them. Safe spaces foster diversity, and there is no way to build a safe space for user-generated content without moderation. If you don't believe me, then go join 8Chan and have fun, I guess.
I know a few progressives are hoping that the user-generated content will go away entirely, and that Jewish/female/Marxist/middle-aged people will just participate on traditional channels. I think this is also kind of naive. No one who's seriously looked at the history of traditional media channels would come away thinking they've been bastions of the progressive movement. Marginalized voices have been excluded from those channels again and again, and it's only by fighting, by circumvention, and by self-publishing their own stories and building their own collectives and communities that those marginalized voices have been able to make themselves heard.
Most users can't be trusted to moderate content because they will censor content they don't like, even if the content is legal and compliant with the terms of service.
I censor content I don't like on every website I visit, by running an adblocker that filters legal ads that are compliant with that website's terms. The filter list is entirely community maintained, and I can extend it as I see fit, and then share my extensions with any other user. Even though, and I want to stress this -- all of the content I'm blocking is legal.
And that's a pretty good setup. I like it a lot.
Absent narrow exceptions with extremely compelling justifications, users should be generally allowed to filter any content that they want for any reason, and they should be free to form communities around those filters.
I like this. Reddit should not be allowed to remove r/* because they don't agree with the politics of the subreddit. To enforce it, all members, individually, should be eligible to arbitrate the removal. Given the arbitration cost burden to the company, this would make them think twice.
It's worth mentioning that nearly every board on 8chan IS moderated, by the board owners. It's exactly the same as the Reddit model: paid admins only enforce the few site-wide rules, and board owners are left to moderate their boards however they see fit. That might be heavy or light moderation, but if people are upset with the moderation style they just make another board.
Isn't the internet, as a whole, technically operating as option 1 (at least in theory)?
If I want to post material that is sketchy, or even illegal, I can typically get away with posting it somewhere. It means that I have to host the content but it also means I have total control.
So in a way, revoking section 230 would inevitably break up the big sites, by forcing people who are interested in posting/hosting content that others disagree with to do so on their own.
While this would have some economic impact (just like demonetization), it might also mean that the amount of content might go down, because people would have to defend themselves and their use of said content.
In a way, I'm a bit torn about this, because revoking section 230 seems to come at it from a different angle: removing the protection would allow the "free market" to respond, whereas the other option would be the government "forcing" the breakup.
Revoking 230 would disrupt big sites, but would likely further cement their monopoly. Liability for user generated content means that sites that feature user generated content need to spend a lot of resources on moderation, because the consequences of a false negative are large.
People who talk about revoking section 230 to break up large companies seem to be harboring the assumption that the repeal will only apply to large companies. This is not the case. It will be a massive blow to any site that features user generated content. A blow that only large sites have the resources to withstand.
I think if this happens, decentralized social networks will really take off.
The "small sites" you refer to, which I'm assuming are typical forum type sites, have been dying a slow death for the past few years anyway, since FB ate their lunch.
Small sites would be absolutely devastated because the threat of being held liable for user comments will be way too large of a risk. Small networks don't have the resources to pre-moderate comments or build sophisticated automated systems of identifying high risk content. How many people would host their own site when the consequences of user-submitted illegal content being posted could mean millions in liability? Very few, if any. The result is that large players are the only ones that can survive in a market where companies are held liable for user submitted content.
Maybe if by "small" you're talking about dozens of people. But I fail to see how comparably small sites like Hacker News could continue to exist without section 230 protections. It'd probably take fundamental changes like charging users a subscription in order to pay for enough moderators to pre-moderate comments and submissions and I am unsure if that's even a viable approach.
Has anybody found a good model for scaling display filtering that respects users' priorities and sensibilities, rather than platform owners' and governments'?
One thing I keep hearing is that doing moderation that gives users an experience they like is a ton of work, is very expensive, and can be hard on the moderators' mental health if they do it a lot and the user base is large enough. (For example, some of the moderators might be watching videos of real people dying or getting raped all day long.)
In Mastodon and so on there are instance administrators, typically volunteers, but they don't scale well and probably don't deal well with large variations in people's beliefs, culture, and interests, nor with overlapping group memberships.
Meanwhile, major platforms are spending millions of dollars paying people to do moderation as a full-time job, in a heavy-handed, error-prone, arbitrary and centralized way where the only options are sometimes "delete this for everyone", "ban the user", or "allow this for everyone". (The "sophisticated" systems may also allow things like "hide this behind a sensitive content button", "hide this from users who don't claim to be over 18", and "ban this in certain countries".)
More sophisticated, nuanced, and pro-speech filtering work that doesn't aim to uphold a single worldwide standard seems like it will be even more expensive; who will do it and how will we incentivize it?
> More sophisticated, nuanced, and pro-speech filtering work that doesn't aim to uphold a single worldwide standard seems like it will be even more expensive; who will do it and how will we incentivize it?
That's certainly what I'd want. The standard federated model of shared filtering rules being applied at the node level is way too prone to generating filter bubbles. Also, people who are genuinely open-minded and interested in free discussion would never put up with it, and they're the most fun to be around. That's why I've never gotten into Mastodon.
I'd want filtering rules that participants applied locally. The model used by ad-blocking browser extensions, Pi-hole and such would arguably work. But of course, sets of rules would be far more complex, and it'd be best to hide them under a simple user interface.
Let's say that someone open-sourced the sort of moderation repo that major platforms are developing. With adjustable automated pre-filtering, and human tweaking.
So then any participant could run that, and publish their rules. Other participants, maybe mega nodes who got paid for their efforts, would consolidate all those rules into coherent sets, and then publish those. There'd be multiple such services, and they'd compete on filter type and quality.
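A rough sketch of that pipeline, under the assumptions above (the rule format, the "consolidator" role, and all names here are hypothetical): participants publish rule sets, larger nodes merge them into curated bundles, and each client applies its chosen bundle locally.

    from dataclasses import dataclass, field

    @dataclass
    class RuleSet:
        publisher: str
        hide_authors: set = field(default_factory=set)
        hide_tags: set = field(default_factory=set)

    def consolidate(published_rulesets, curator: str) -> RuleSet:
        # A "mega node" merges many published rule sets into one curated
        # bundle, which it can in turn publish (and possibly charge for).
        bundle = RuleSet(publisher=curator)
        for rs in published_rulesets:
            bundle.hide_authors |= rs.hide_authors
            bundle.hide_tags |= rs.hide_tags
        return bundle

    def apply_locally(posts, bundle: RuleSet):
        # Filtering runs on the participant's own machine; the network only
        # distributes rules, it never removes content for anyone else.
        return [
            p for p in posts
            if p["author"] not in bundle.hide_authors
            and not (set(p.get("tags", ())) & bundle.hide_tags)
        ]

Competing consolidators would then differentiate themselves on what their bundles hide and how well they maintain them, rather than on who they can force off the network.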
> For example, some of the moderators might be watching videos of real people dying or getting raped all day long.
That is a problem. I know that personally. For many years, I relied on hearsay about Freenet, because I was way too paranoid to actually run it locally. But eventually, I developed the skills to run it on a remote server, with adequate anonymity. So I did.
I won't go into detail. But let's just say that I can not imagine how any decent person could moderate some of that stuff without mental damage.
So as distasteful as it might be, perhaps the system could incorporate filter rules from people who enjoy that stuff. They'd be motivated to adequately anonymize their contributions, of course. And if they succeeded, there'd be no leverage for anyone to be forced to identify them. And if they were careless, that would be great.
Once their rules had been characterized, however, they'd be reversed before integration with everyone else's. And the same methodology could be used for all other widely distasteful categories of content.
It's difficult to say anything general, given how widely standards vary. I was originally going to say that the system would guarantee anonymity. And that's really the only workable option, because otherwise you've built in a vulnerability.
There is a 4th option - treat certain surfaces of social media companies, mainly the user specific pages (Donald Trump's twitter page, PewDiePie's YT channel etc) as a platform; treat other surfaces which are generated based on algorithms and not directly by individual users (FB feed, YT watch feed, reddit homepage etc) as a publisher.
That way, social media companies won't be responsible for what their users upload. But they will be responsible for what they present to their users to optimize clicks / engagement / revenue etc.
I think a much better solution would be to revoke Section 230 for "recommended" content. "Recommendations", even algorithmically generated ones, are behind a lot of the brouhaha, and by endorsing content that's basically being a publisher of content anyways.
> What problem does revoking section 230 solve that antitrust law doesn't?
From the point of view of AG Barr, whose objective is to weaponize the DOJ for the purpose of expanding Russian oligarch interests in the US, revoking section 230 is effective because it is laser-focused on tech companies. On the other hand, antitrust law affects all monopolists in all industries equally - and most of those monopolists are Barr's allies. He doesn't want to bite the hand that feeds him.
What role do you think anonymity plays in this discussion? Communities would self police much better if some level of anonymity was removed from members; it's less likely that “John Davidson” would post a racist rant than “xxTrump2020xx”. Removing some anonymity also comes with a huge slew of issues, but it would definitely resolve a lot of content problems if there was a real-world link to online behavior.
> Communities would self police much better if some level of anonymity was removed from members
This isn't an illogical theory, but I think Facebook's real-name policy disproves it.
We could have a deeper conversation about whether or not anonymity is more important than civility (I think it is). But I think before having that conversation I would want to be convinced that anonymity significantly impacts civility, and I'm much more skeptical of that claim today than I used to be. I tend to suspect now that it's more of a "common-sense" theory than something backed up by real-world studies.
That makes intuitive sense, but hasn't matched what I've seen in real life. I have had much more interesting and polite conversations on Freenet (which provides almost complete anonymity) than I have had on Facebook, back when I used it. I'm guessing it's because the audience is smaller and less visible. I think people posture for the invisible audience they assume are watching their every post, so larger communities tend to go south much more quickly than smaller more focused ones. You also get more group-think and mob-like behavior. As another example, smaller subreddits on reddit are much more polite and fun (in my experience) than the main subs. Whenever a subreddit I'm on gets linked to r/all, I just skip that post for my general mental stability. Youtube comments are another example, which are generally terrible in my experience and are visible by a huge potential audience.
> Communities would self police much better if some level of anonymity was removed from members; it's less likely that “John Davidson” would post a racist rant than “xxTrump2020xx”.
A lot of Facebook users have no problem writing racist rants using their real names.
I do not think that the companies that have the funding and technical ability to effectively astroturf facebook/twitter et al, would be rendered even mildly impotent were there 3x as many social media companies as there are now.
It's a systemic failure in the way these companies act, not just a lack of competition.
When facebook is showing friends and family that I made a comment, they fit fairly within the scope of a neutral carrier, akin to a phone call or text message.
When facebook republishes a NY Times post or headline, on the official NYTimes facebook page, they are acting as a republisher, and so be it. Any user will associate the story with the NY Times.
But when facebook publishes my post and headline, on the official "MY Times" facebook page, they are taking my content, and publishing it, as a publisher. And they are doing so, in a manner that intentionally gives me the same appearance of gravitas as the NY Times.
When Facebook then decides to show my post instead of the NY Times post to other people in their feed, they are then curating and distributing this novel content. That is to say, they are publishing this content, as a publisher.
Your 3 options fail to differentiate between this nuanced but important difference.
Facebook should not be required to police the content between individuals posting as individuals.
And Facebook should not be held responsible for redirecting traffic to established publishers and content providers.
But claiming that individuals, who are unable to publish without Facebook's platform, are also publishers, and therefore Facebook is not liable for their content, despite Facebook's actions to then selectively curate, distribute, and, in a word, publish their novel content, is not the spirit of 230.
The distinction about how sites like YouTube and Facebook amplify content is important. But moving away from the digital version, it would seem strange to me if Barnes and Noble became liable for having a big display for a book that defamed someone. IANAL, so maybe they would be, but as a thought experiment I lean towards them having significantly less responsibility than the author.
Facebook is not acting like Barnes and Noble displaying a book.
The closer analogy would be if, by becoming a B&N member, you were allowed to submit manuscripts to B&N. B&N then curated a small selection of those manuscripts to display in a format identical to the other books they sell, prominently in the front of their store. They then began advertising for those specific titles to bring traffic specifically to that store display.
The analysis you present is very misleading. The CDA doesn't say anything about retroactive moderation or pre-vetting, but grants legal immunity with no regard for moderation styles. So how do you get from here to there?
> So we could get rid of section 230 and implement a complicated solution that will have negative knock-on effects and unintended consequences for the entire Internet.
The antitrust route seems much more complicated to me because prosecutors would have to bring a new case to court for every major company, at taxpayers' expense.
What's complicated about repealing a 2-clause law? Is there another part to this process I'm not aware of?
> What problem does revoking section 230 solve that antitrust law doesn't?
Aside from being much simpler and less expensive to taxpayers? It would solve the problem once and for all by removing the source of the problem, instead of forcing the government to handle it one company at a time.
> The analysis you present is very misleading. The CDA doesn't say anything about retroactive moderation or pre-vetting, but grants legal immunity with no regard for moderation styles. So how do you get from here to there?
This is correct. Section 230 removes liability for user content regardless of moderation. The moderation argument is a red herring.
> What's complicated about repealing a 2-clause law? Is there another part to this process I'm not aware of?
There are tomes of case law relying on Section 230 of the CDA, and much of the internet is able to operate as it does today because of it.
Without Section 230, services that enable users to share content would be encumbered with a mountain of liability that would most likely scare off investors.
Email providers are protected by Section 230, as are messenger apps, forums, Usenet, chat in games, etc. Social media as it exists today would be risky for any entity to host that couldn't afford to vet all content before serving it to others.
Email and usenet both existed and ran just fine before Section 230.
There's good reason to think that low-volume, heavily-moderated forums like this one would have no problem without the CDA.
> Social media as it exists today would be risky for any entity to host that couldn't afford to vet all content before serving it to others.
"As it exists today" - and why should we assume that the present manifestation of social media is the best possible one? Why assume that large-scale content moderation is an unsolvable problem? It may be that the only reason present-day tech companies haven't solved it is because they don't need to.
If all their employees who are currently focused on getting people to click ads switched over to developing efficient moderation, could it be solved?
> Email and usenet both existed and ran just fine before Section 230.
Email and Usenet didn't have the eyes on them, or the billion-dollar coffers for legal teams to drain, that they do today.
> There's good reason to think that low-volume, heavily-moderated forums like this one would have no problem without the CDA.
Without equivalent legislation that removes liability for hosting user content, all it would take is a cease & desist or arrest for someone to decide that the risk of hosting a free forum like HN just isn't worth the legal liability.
> "As it exists today" - and why should we assume that the present manifestation of social media is the best possible one?
Nobody suggested that the current manifestation is "the best one". You're posting on a forum where many people are able to amass small fortunes working for, and selling to, companies that regularly rely on Section 230. One only needs to look at the tomes of case law created by these companies based on Section 230 to recognize this.
> Why assume that large-scale content moderation is an unsolvable problem?
Again, nobody suggested that it is an unsolvable problem. It's a solvable problem with giant piles of money and an extremely lengthy payroll.
The problem is that leaving liability for user content unwaived has a chilling effect on those without giant piles of money and an infinitely long payroll. GeoCities, personal blogs, personal sites, hobbyist forums, and other mainstays of the nascent internet would either exist with massive censorship or not at all.
> If all their employees who are currently focused on getting people to click ads switched over to developing efficient moderation, could it be solved?
How many people would be able to bootstrap a site that allows users to upload content in any form if they needed a billion dollar payroll to hedge against the liability of being sued out of existence, or being raided and arrested in the middle of the night?
> What's complicated about repealing a 2-clause law?
There's substantial consensus among Internet scholars that this would change the entire Internet ecosystem in either negative or at least highly unpredictable ways. It's complicated because we need to consider and plan for the consequences, and because it opens up an entire new set of legal distinctions that have never been applied to the Internet and will need to be established and refined over decades.
Repealing 230 is simple in the same way that SESTA/FOSTA was simple -- there are lots of things that seem simple until you consider the details.
On the other hand, antitrust law is pretty widely established, it's something we need to be applying across the entire market (both online and offline) anyway, and breaking up companies has comparatively fewer fundamental consequences for the wider Internet.
It's a mistake to measure complexity in lines of law, in the same way that it's a mistake to measure program complexity in lines of code. The real test is, "how much of the system does this change, how invasive is it, and do we know what all of the effects will be?"
The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem. We don't take any of the profits they made off of Section 230 -- why should we have to pay to clean it up?
> It's a mistake to measure complexity in lines of law, in the same way that it's a mistake to measure program complexity in lines of code.
Removing code that's outdated is usually a step in the right direction.
And when you fix a bug, you always want to fix it at the source of the problem.
> The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem.
It's everyone's problem who wants to start an Internet business. And by extension, it's the taxpayer's problem because presumably they use the Internet.
Passing laws is like fixing bugs live on a production machine, because we don't get to go through a testing phase. When you're in production, you should almost always do the simplest, least invasive fix you can.
On moderation: If you treat platforms as liable for content posted, their only opportunity is to censor anything that might cause them to be liable.
In practice, this amounts to option 2 (the NYT). The NYT is not a forum. It pre-vets all of its content and runs it by a team of editors. You can't run an open forum like HN or Reddit that way. I don't like option 2, because I would argue having a place where anyone can communicate and publish information outside of locked-down, establishment media channels is really good.
If you tell platforms that they won't be liable as long as they don't moderate/censor (the "true platform" argument people bring up), then you've taken away their ability to moderate at all. That's how you end up with every open platform looking like 8Chan (option 1). I would also argue that allowing communities to filter and ban bad actors is necessary for an inclusive, open Internet.
The innovation of Section 230 was that it gave companies, forum owners, and platform maintainers permission to moderate. It created option 3. Owners didn't have to make a decision between blocking everything or nothing, because they couldn't be held liable for user content at all, regardless of their moderation strategy. That meant that they could be as aggressive (or passive) with moderation as they liked without worrying that it would make them liable for any content that they missed.
Section 230 is an attempt to deal with two facts -- first that moderation is fundamental to healthy communities, and second that when users have the ability to instantly post their own content there is no system (human or AI driven) that will ever be able to moderate perfectly.
So far from being a misleading sidenote or a jump in logic, content moderation was the reason why section 230 was passed to begin with. From its very inception, section 230 was always about allowing a middle ground for moderation.[0]
> One of the first legal challenges to Section 230 was the 1997 case Zeran v. America Online, Inc., in which a Federal court affirmed that the purpose of Section 230 as passed by Congress was "to remove the disincentives to self-regulation created by the Stratton Oakmont decision". Under that court's holding, computer service providers who regulated the dissemination of offensive material on their services risked subjecting themselves to liability, because such regulation cast the service provider in the role of a publisher.
Thanks, I didn't know about those cases. This is one of my favorite topics in tech and I learned something interesting from our discussion.
As I've pointed out to other commenters in this thread, I still think your analysis makes too many assumptions based on the present-day legal environment of the web. You have to agree with me that, because of the broad scope (granting ALL internet service companies immunity to legal action) and the timing of the bill (the early days of the popularization of the web), we don't really know what the legal environment for web businesses would be like without Section 230. This legislation came in so early and changed everything so drastically that we don't know if the courts would have found a middle ground to allow for some moderation, or if people would have found more efficient ways to moderate content over the years. Section 230 essentially froze the process in time by handing all legal power to the internet industry.
Arguments I've read about why Section 230 is good for the internet tend to rest on statements about how the internet works today - specifically, the way today's internet service companies run the web's most popular sites - but not a single one of these companies existed before the CDA was passed. For all we know, without the CDA, the internet would still be CompuServe, AOL, Prodigy. Or perhaps other business models would have been invented. I think it's a mistake to assume that the current internet is the best possible internet when we haven't really seen any other.
That's fair -- I will grant you that there's a lot of uncertainty about what would happen now. I don't think it's completely blind, I lean towards "there are predictable negative effects", but we don't really know. And it's totally reasonable for someone to be less certain than me.
My response to that though is still that uncertainty is not a great position to be in when passing laws. I would point at SESTA/FOSTA as examples of legislation in the same rough category that looks like it should make sense, and then gets passed and has a lot of side-effects that turn out to be really bad for everyone. If SESTA/FOSTA had passed and everything had gone wonderfully, I might be more open to other conversations about adding additional liability.
> The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem.
Actually, it's a problem for any company that wants to do that. And between multi-billion dollar established companies and smaller/starting-up companies, who exactly do you think will be most impacted?
The problem with this opinion is that you forget what powers these platforms: Money. Section 230 allows platforms to profit off illegal and harmful content with no responsibility whatsoever.
When Facebook promotes falsehoods, they profit. When Google sells links to malware and scam sites at the top of search, they profit. And even if they get around to moderating these abuses of their platforms, they keep the profit from the harm done via their platforms.
If we want platforms to have the proper incentives to moderate content properly, they need to lose money when they fail, or at least, fail to profit when they fail. Ad money for malware campaigns and scams should be confiscated.
But right now, malicious actors on ad platforms drive up the revenue on bidding-based platforms, and no matter who wins the bid, that equals money for ad companies. They have fundamentally no incentive to police content that's making them money.
> The problem with this opinion is that you forget what powers these platforms: Money. Section 230 allows platforms to profit off illegal and harmful content with no responsibility whatsoever.
That's phrasing it in a way that makes it misleadingly clear who's in the wrong: of course all good people will not want something like that to happen.
The devil is in the details. How much does that "illegal and harmful" content help their bottom line? Is it making 50% of their income? 10%? 1% or less? After knowing the percentage value can you still make a general statement like "companies are profiting on illegal and harmful content"? Or the more correct statement would be "companies make extremely little profit on illegal and harmful content"?
On the other hand, before proposing mandatory moderation for all user content on all platforms we have to ask more questions. How much would it cost if all content is moderated in order to catch that X% that is "illegal and harmful"?
> How much does that "illegal and harmful" content help their bottom line? Is it making 50% of their income? 10%? 1% or less?
The problem is that nobody actually has that data (and they'll claim your point is invalid unless you have the data, which only they even have the potential to gather). Google and Facebook will tell you it's very low, but since they're ignoring reports of malicious apps on their platforms and restoring scammy ad campaigns after they've been reported and taken down internally, what they consider bad content is likely much smaller than reality.
Honestly, with both the amount of flagrantly malicious content I see on Google and Facebook ad platforms, and the network effects that their participation has on bidding for placement, I would suspect that these companies are nearly dependent on malicious content for profitability.
Another huge point is that a lot of legitimate advertisers are paying just to protect their brand from having scams placed above their own site in searches for their trademark. This is pretty close to an extortion racket.
I think there’s a Big Short level event on the horizon where we discover some of the top valued companies on the planet are built on a lot more of this than they’ve let on.
It really seems like this article is a bit off on the reasoning they ascribe to people. The biggest objections I have heard is that Facebook / YouTube / Twitter should now be classed as "publishers" and not "providers" because of the perceived bias in their removal of individuals and content.
That is a clever legal argument. Of course, what distinguishes a traditional newspaper or publication from social media is that the editorializing exercised by the tech companies occurs after publication/posting, whereas traditional media exercise their editorial powers prior to publication.
In practice, say I set up an anonymous social media account and publish false/defamatory statements about myself, then I sue the tech company and John Doe (as the poster): what exactly could the tech company have done to prevent liability? Seemingly they would have to somehow change their systems so posts are reviewed by editors prior to publication to minimize legal risk and potential liability.
That is the type of situation that 230 was meant to stop. It would be impossible to be Twitter, etc. without the protection of 230 and that is why it was created. It carries over from not being able to sue the telephone or telegraph company because someone sent something defamatory about you over their system.
The argument is that the services desire to keep the protections of 230 but act like publishers that are not afforded the same protections.
And you generally know the person who is on both ends. This leads to a sense of history and consequence which is most commonly not available on the internet.
It isn't a legal argument but a propaganda push. There is no publisher/platform duality. The concept is pure unadulterated bullshit. There is no legal precedent for that concept; it is a later invention.
This is not pointed out nearly enough.
Common carriers (like a telephone company) are content-neutral (unbiased) and have no legal responsibility for what is said. If someone mails you a threatening letter, you can't sue USPS.
Publishers can publish (or decline to publish) whatever they want, but they do have some responsibility for what they say. Libel, copyright infringement, and threats all carry consequences even if the publisher is not the author. Publishers can be as biased as they want.
Is FB a publisher or a common carrier? What about Google? Youtube? Instagram? Twitter?
(The answer is that they are kind of either depending on the exact part of the organization. And they are having it both ways: control without responsibility.)
>Publishers can publish (or decline to publish) whatever they want, but they do have some responsibility for what they say.
It is abjectly insane to label a tweet as something Twitter, the company, is "saying". If we're retooling the law to orient it toward that definition, then the inevitable endgame, after the avalanche of litigation, will be that the concept of posting text on social media or blogging platforms is dead. It kills Web 2.0 in its entirety. It regresses the United States back to the dark ages of a completely one-directional media, where the best you can do as an outsider is to submit a letter to the editor.
> It is abjectly insane to label a tweet as something Twitter, the company, is "saying"
If there were a company printing and distributing, to any passing person on the street, fliers with content provided by the company's clients, would they have any legal liability for what their clients put in the fliers? That seems very similar to what Twitter does. Certainly closer than their being treated like a phone company or mail carrier, which they don't much resemble (email provider? Yeah, sure, they do). Perhaps in that situation the printing & distribution company would also have no liability, I don't know.
If I invent a machine that passes out six thousand flyers per second, and allow people to feed text into it at will, then yes, it is insane to describe these as my "speech". I may provide a mechanism for the words to get onto a page, but I have zero agency in the process of thinking them up, drafting them, and enacting their distribution. I am providing a mechanism for others to say things.
However, you would probably say I start to become liable if I erect a giant wall on which all of these flyers are posted, and allow illegal content to remain hanging there even when I'm informed of it and aware it's illegal. This gray area is exactly what Section 230 is designed around.
It's totally unreasonable to expect a platform operator to act as the speaker of a post the moment it is posted. But after becoming aware of a post and the reasons it may be objectionable, they start to gain a sort of post-hoc liability.
If someone prints out child porn, glues it to a yard sign, and plants it on your front lawn, you should probably not be liable for arrest starting that instant. But if, after coming and going and seeing it over and over for a week, you make no efforts to remove it or report it, then you should probably be liable for arrest. This is exactly the logic behind Section 230.
> If I invent a machine that passes out six thousand flyers per second, and allow people to feed text into it at will, then yes, it is insane to describe these as my "speech".
I'm not following how having a machine do the work absolves the owner of responsibility. It seems like the same kind of "normal thing, but with a computer!" that folks in tech circles usually mock when it shows up on patent applications or when someone decides we need a new law to cover something that's already covered by existing laws, simply because now it's with a computer.
If you manage to replace all the components of an ordinary publisher with robots, seems to me the owner of those robots ought to be treated just like an ordinary publisher. Accepting, storing, reproducing, and distributing as broadly as possible (oh and don't forget slapping your own ads on) others' work sure seems like publishing to me.
>I'm not following how having a machine do the work absolves the owner of responsibility. It seems like the same kind of "normal thing, but with a computer!" that folks in tech circles usually mock when it shows up on patent applications or when someone decides we need a new law to cover something that's already covered by existing laws, simply because now it's with a computer.
The fact that it's happening on a computer isn't the important part, and indeed if that were the only difference it would be a ridiculous argument. The important part is that the operator of the machine has never seen the content.
It's unreasonable to be liable for content you aren't aware of, and thankfully the status quo is still that you are not (outside of a few unfortunate cases). Effectively most of the internet couldn't exist if this protection were eliminated. Hacker News might not be able to exist. If you were liable for everything any user might possibly say, regardless of whether or not you notice it, would you run a discussion forum like this? A web host? A messaging app?
Once you're aware of e.g. illegal content, of course you should be liable.
If you designed a machine that passed out six thousand flyers per second, and only allowed certain people to distribute certain messages, you might reasonably be considered liable for the content you've approved.
There's nothing insane about it. Twitter used to be highly supportive of free speech. However they now actively ban users and censor tweets which don't conform to Twitter's ideology. The content they are banning isn't illegal (at least not in the USA), they just don't like it. And the criteria for banning is only vaguely defined and constantly shifting based on the whims of a few employees. Thus any tweets they leave up are implicitly endorsed by Twitter.
Twitter also acts as a publisher by exercising editorial control over if and where tweets actually appear to each user.
Twitter would have a stronger legal and moral argument against further regulation if they acted as a neutral intermediary. For example, they could establish a policy of only removing content if required to do so by law.
Hacker news also bans people merely for being rude, even if they don’t post illegal content. If you have a principled objection to this act you should delete your HN account and leave.
You misunderstand. As a strong supporter of private property rights I have no objection to Y Combinator or Twitter banning users or removing content from their services. But if they're going to exercise that degree of control then for regulation and legal liability purposes they should expect to be treated more like publishers than like common carriers.
No parcel carrier will allow me to mail my cardboard box full of loose fish guts. They turn me away because the box is wet and falling apart and smells terrible. They find the content I want them to ship to be objectionable, and it would negatively affect their ability to provide service to their other customers.
I'm not sure if that is a great analogy for what Twitter et al. are doing in regards to content policing, but there is some kind of argument there.
Should we be able to sue HN over any comment or post that gets made on this website? Do you not realize that HN would be forced to shut down overnight, since the legal liability would be completely untenable? Your legal regime would effectively make internet moderation impossible. It would, without a single trace of exaggeration, instantly kill almost every single internet community in existence.
You're exaggerating. I don't know why you immediately jump to the assumption that any regulatory change would be all or nothing. A more likely political outcome will be some sort of compromise.
For example, Internet services which host user-generated content might be able to retain liability protection only if they also provide some reasonable degree of transparency and accountability. Clearly document their editorial policies and algorithms, and provide a formal appeals process for users who were censored.
Should the person whose content is blocked be able to face their accuser, know precisely what line they crossed over, and have recourse against arbitrary, malicious, incompetent ... moderation? As it now stands, one entity is judge, jury, and executioner, which is a recipe for abuse of power.
> It would, without a single trace of exaggeration, instantly kill almost every single internet community in existence.
Only the ones that act like publishers, curating the content they allow and disallow. The ones that behave more like the telephone or postal system would not have the liability, right?
Well, by the fact they are allowing some voices and silencing others, largely in one political direction, it would seem they are expressing an opinion indirectly.
I don't think anyone wants Twitter to be responsible for every tweet. What most people want them to do is be more like they used to claim to be "We are the free speech wing of the free speech party"
Here is section 230, the law protecting websites from liability for things published on them by third parties. [0] I do not see anything about a provider needing to be content neutral. In fact, part c2a seems to be explicitly saying the opposite. IANAL. Where does this legal concept of a distinction between a "carrier" and a "publisher" appear?
Completely wrong with regard to current US law. First Amendment rights apply to corporations without distinguishing based on any attribute. There is no such legal concept as a publisher or a platform. No legal scholar has argued that a publisher/platform distinction exists in current law; the academic community has speculated that it might be nice to change the law to make that distinction, and then that gets misreported.
They should be allowed to have it both ways. If I run a video game forum, I should be allowed to ban trolls and remove off-topic posts without accepting legal responsibility for any illegal content that gets posted on my forum by third parties without my prior knowledge or consent. Forcing people to choose between being a publisher and being a utility would completely and instantly kill every single internet community dead in its tracks.
I don’t understand how people can post on a heavily moderated forum like HN while demanding that moderation be effectively made illegal. It’s like people hate Facebook and Google so much that they mindlessly get onboard with any form of retaliation against them regardless of the actual consequences.
FWIW, it doesn't sound insane to me that you should have "legal responsibility for any illegal content that gets posted on my forum by third parties without my prior knowledge or consent." You're the one who chose to set your forum up that way, eh? (We're not talking about hackers defacing your site, yeah?)
As much as I find the idea of "illegal content" galling (I love freedom) I'm not so idealistic that I fail to recognize that freedom of speech cannot be an absolute. ("Fire" in theaters; revealing national security secrets; doxxing; there are limits. I'm glad I'm not on the hot seat when it comes to nailing them down: I don't run open internet forums, for example.)
If you don't have a way to pass that responsibility onto the actual speakers then you must shoulder it yourself, no?
Whether or not current systems can survive that is less important to me than the establishment of formal responsibility for one's speech.
There should be a line somewhere between curbing abuse, curating content, and promoting ideology. But I personally don’t feel comfortable or qualified to decide where that line should be drawn. There must be some point between USENET and the NYT editorial section where the publisher becomes liable for the content they promote.
FB is a publisher because they don't allow everyone on their platform.
Edit:
We try to make Facebook broadly available to everyone, but you cannot use Facebook if:
You are under 13 years old.
You are a convicted sex offender.
We've previously disabled your account for violations of our Terms or Policies.
You are prohibited from receiving our products, services, or software under applicable laws.
Pretty much every forum, Reddit, and social media network goes away in that case, including HN; the only way any of it works is because they have it both ways.
Alternately, there would be open forums and corporate forums, and they would be very different. But tech companies wouldn't own speech, and they would be liable, since they can police their platforms, as they have demonstrated.
Do you also object to HN moderation? Why should dang be allowed to ban people but not twitter? Do you think dang should accept legal liability for everything people post on HN?
It's good to ask those questions, of course small companies are affected too.
But, for example, if Dang selectively removed posts such that the remaining posts slandered someone in a way that none of the individual posters intended, I believe Dang would be the only one in the chain liable for the resulting slander, not the posters. But under current law, Dang has protection under 230, so no one could get charged for the resulting slander.
I only object that they get the rights of both platform and publisher. They are free to moderate however they wish, but once they do, they should be liable for the content on their platform. No other publisher is exempt from that liability.
You are correct, publishers don't have a legal requirement to be unbiased, but publishers can be sued. Service Providers cannot be sued for what appears on their service[1]. The argument is that "biased" decisions on what to publish makes them publishers and not service providers.
1) ok, in the US anyone can get sued and there seem to be some specific things people can sue and win on, but that is outside the scope of 230.
This has been my understanding for many years. And so it's ironic how increased moderation to increase marketability brings immunity of service providers under 230 into question.
It's almost like 230 has been a bait and switch ploy. With 230, mega social media corps developed, with nothing else to protect them against liability.
But instead, we could have had decentralized systems that made liability impossible. And if stuff like this goes forward, maybe we can. Someday, anyway.
> This has been my understanding for many years. And so it's ironic how increased moderation to increase marketability brings immunity of service providers under 230 into question.
Especially since allowing providers to increase (automated and best-effort, but not comprehensive human) moderation without incurring general liability for content was the explicit and overt justification for Section 230, because the kind of complete human editorial control expected of publishers in traditional media was viewed as preventing scalable systems on the internet (where traditional media had other scaling limits that made content liability far from the limiting factor).
> But instead, we could have had decentralized systems that made liability impossible.
Decentralized systems probably wouldn't have made liability impossible, just ineffective at its objectives, because no matter how many operators were ruined by liability, the content would still thrive and you'd never hit enough operators.
Do you think many users will be willing to adjust to the complexity and unpredictability of that environment compared to what they experience now? Just because it's mostly way better in terms of issues of freedom and power?
> I don't tend to think tinkering with intermediaries' incentives about content is the thing that will get us there, because there are so many other practical advantages that people have perceived in the more centralized services.
I do understand why loss of §230 protection is dangerous, both directly and as precedent. Threatened intermediaries would just do whatever needed to avoid liability. And incentives for alternatives would likely remain inadequate.
Also, I'm not advocating any "tinkering". I'm just not optimistic about relying long term on legal protection. Even for the US, it's been fragile, and not anywhere near effective enough.
I was just being wistful, speculating that, without §230 protection, we'd have ended up with a system that didn't need such protections. With intermediaries that were totally isolated from content, with absolute anonymity for users. That is what I was imagining in the mid 90s.
I can imagine more or less centralized services that could handle content which was otherwise end-to-end encrypted and anonymized. And could make money doing it, as VPN providers get paid by Orchid users via an Ethereum-based currency.
Users would have the tools to filter what they see. But nothing could be removed, and nobody could be prosecuted (or persecuted).
How we might get there from current social media, I have no clue.
So sites would not incur liability unless they are directly selling access to the published content? I could agree with that, assuming there were any merit to the concept of "illegal content" in the first place. Which, of course, would run directly counter to the 1st Amendment, freedom of speech, and proportionality.
Can anyone explain why William Barr seems so intent on trying to change technology in such major ways as this? Does he not understand the implications of the things he proposes? Or worse, does he understand the implications and proposes them nonetheless? I just don't get this guy's motivation.
As far as I can tell, revoking section 230 would just result in people putting up fake content themselves and then suing the platform they posted to. Is there a reason why this wouldn't be possible?
Also, I see a lot of people focusing on major platforms, but why wouldn't such changes also impact tiny sites? In particular, it seems that anyone casually hosting their own site (not something they focus on often), will be forced to remove all user generated content or quit their day jobs to manage their site - am I misinterpreting the implications here?
Because Barr is literally a fascist with a long history of violating civil rights. He wants to rule unquestioned, unopposed, and unaccountable. He doesn't care so long as he can torture into compliance.
My initial reaction to reading about Barr makes me want to agree with you. However, I really haven't looked into him too much - tbh I only really know about him from popping up in HN articles (where he is always cast in a negative light). Any chance you have some links relevant to his history of violating civil rights, etc? Would love to be able to read through them to form a stronger opinion myself.
The ACLU is one source here, listing many of his actions that were outright struck down, including sticking Haitian asylum seekers in Gitmo indefinitely. And that is restricted to the obscenities that the courts have called out, rather than the ones accepted as the norm.
My personal philosophy is that civil rights are not up for debate or a vote. Normalizing such debates is itself a form of damage that mere politeness can never justify.
> As far as I can tell, revoking section 230 would just result in people putting up fake content themselves and then suing the platform they posted to. Is there a reason why this wouldn't be possible?
Maybe not themselves, but they could get someone else to. Or any competing site could post stuff that would increase legal liability. It would be the end of forums.
> Barr seems so intent on trying to change technology in such major ways as this
He's not saying that. He's asking for a discussion on how the laws can be adapted for the current world. In the video of his speech, he specifically says they haven't made up their mind on how it should change. He's simply bringing up the points on how the current law is a bit outdated for the current world.
I appreciate someone coming in with a different viewpoint. I admit that I haven't actually watched him talk, and only read 'choice' quotes from him in articles. I will have to take the time to watch him speak and google around a bit. At this point in time, my opinion of Barr is based more on gut feelings than anything else.
Be careful of how media frames things. Be especially skeptical when media does "choice/selective quotes" with the "..." ellipsis tactic. Often when I see an ellipsis, I go looking for more context, and it ends up saying something quite different from what the media claims.
Suppose you, and your allies, have publicly insisted that major tech companies are biased against Conservatives by deleting their posts, banning them, or whatever. You need to be seen to be taking some action to fix things somehow.
I personally doubt they'll change anything. Republicans have talked about the idea of the media being biased against them for decades, from newspapers to Hollywood. Major legal changes have been fairly rare.
Barr doesn't care about the "implications" for the tech companies or their employees. And he's not trying to "change technology" as you put it.
He's just trying to stop these companies from playing dirty, as they have been for a long time.
He's concerned with making services such as Youtube, FB and Reddit face up to the fact that they are publishers, and not really forums because they're so riddled with AI and fishy algorithms.
I have no idea what exact problem Barr is trying to solve. And if I run a web forum/mailing list/etc., am I now liable for anything users say on these services?
It seems like he's unhappy that facebook/google/et al. are shaping (or trying to shape) a narrative... I mean, he's not wrong. But everyone is: businesses, politicians, cia, hacker news.
Opening up people to easier liability for running a web forum, just means fewer will be able to provide this type of service; not to mention, this favors those with lots of money and time to spend on a lawsuit of such nature e.g. the gov't and large businesses... hmmm, maybe that's the point, only the gov't and monied interests should shape the narrative.
As you stated, everyone is already shaping the narrative. But right now nearly all of the power to do so is highly centralized at a very short list of companies. You already mentioned some tech companies, but I'd certainly also add a bunch of classical media/news orgs like e.g. FOX and Disney.
We could now discuss whether democracy can work with so much centralized power. But the result we'll see is those powers fighting each other. It might be more obvious than before thanks to our current polarized times and such easy-to-identify targets.
I don't think it's a good precedent to make companies liable for the posts of users. I do think it's reasonable to examine the ways Google and Facebook profit off of extremist views and surface extremist views algorithmically.
As soon as Google and Facebook moved to having an opinionated queue of content (Youtube's suggested videos and Facebooks timeline) based on things like engagement, I could see the argument made that they have both ceased being mere conduits of information and became publishers themselves.
The question I have for people who advocate for "platform liability" is, at what size should they become liable for user generated content? Facebook & Google definitely seem big enough to most people, and maybe Twitter & Reddit. But what about Y Combinator?
Liability should not be black-and-white. HN has a process for flagging and burying content, and they employ moderators. If some kind of libel is buried in a greyed-out user comment or a flagged post, then no harm. If the libel sits on the front page in the title of a post for two days, there's a real issue there.
It's interesting that you mention HN, because it's basically not a problem here.
That kind of goes to my question though. It seems that people are suggesting that by offering a process to flag & bury content and by employing moderators, HN becomes a publisher and therefore should be liable for our posts.
I could easily post a libelous but interesting comment here that I'm sure that no moderator or no HN user would recognize as libelous, so it isn't going to get buried.
I think a couple big concepts are being conflated. There are a couple really important questions:
1. Should Google, Facebook, etc. be responsible for user generated content hosted on their websites (i.e., should they not be treated as “public square”)?
2. Should the government have any hand in telling any company or any person what they can or cannot say, as long as they are not making threats or publishing illegal materials?
I am of the personal opinion that most (all?) of the major tech companies have engaged in censorship and even politically driven enforcement of their content policies, and therefore should have lost their “public square” status a long time ago, making them responsible for illegal content posted by their users.
Pertaining the second question, there simply is no question; the government does not and should never have any authority here, because the Constitution protects free speech, regardless of what kind of Ministry of Truth they would like to implement.
I think question 1 should really be: should Google, Facebook, etc, be responsible for the user generated content that they algorithmically focus presentation on? It’s one thing to have, say, an early 2000’s forum where users search for and consume information. It’s another to ignore timelines and present content that will likely generate the most engagement. The latter to me is editorialization of the content.
Will every post need to be pre-moderated to ensure that nothing objectionable is published? I wonder how this would affect sites like Hacker News and Reddit, or any forum sites really.
I don't think the big tech companies should get blanket common carrier protections because they aren't common carriers. They have knowledge and control over content, and therefore should be responsible for it to some degree.
But that doesn't mean they should have no protections or that they should be treated like a book publisher. There can be some reasonable processes and limits of liability in place.
That all sounds nice but what does it mean? What knowledge do they have that common carriers don’t? Responsible for what to what degree? What is a reasonable process and what does that look like?
These kinds of generalities and hand-waving don't really get us very far.
Are you suggesting that there should be a size requirement before a site has to pre-moderate their UGC? What metric would you use? Revenue? Taxable Income? Impressions? Volume of UGC? # of employees?
If there's a blanket repeal of liability protection for UGC, it's going to have a much larger impact on smaller forums.
> escape punishment for harboring misinformation and extremist content
It's so bananas that statements like this are glossed over and unqualified. This is not a solved problem, and it's not even being treated like it's a problem at all.
This perception that there are a set of correct facts and incorrect facts is just so meaningless. What does it even mean to be true? $Person is on $Video saying $Statement. True or false? Well, it depends. It ALWAYS depends. Are you asking if $Video.Words = $Statement.Words? Almost never. You are not investigating $Video.Soundwaves and $Person.VocalCords; you are making a case for $Person.Beliefs. What if $Person.Beliefs @ $Video.TimeStamp != $Person.Beliefs @ $Today? Is it true but meaningless, or are you trying to imply conclusions contextually - but guess what, different people interpret the same context differently!
An example suitable for HN is talking about security. Is your company secure? You can't answer the question because _the question is bad_. The answer to security is ALWAYS "it depends." Are you talking about physically secure against a wandering drunk trying to pee on your server, or physically secure against a disgruntled employee building a killdozer and driving through your building? Are you talking about secure from some kid who finds LOIC and tries to DoS you, or from a long-term campaign by a nation-state APT? The discussion _requires_ framing, and so does discussing "misinformation and extremist content".
I agree with you. But it is encouraging that there are adults in the room:
> Kate Klonick, a law professor at St. John’s University in New York, urged caution.
> “This is a massive norm-setting period,” she said, with any alterations to one of the internet’s key legal frameworks likely to draw unexpected consequences. “It’s hard to know exactly what the ramifications might be.”
> this perception that there are a set of correct facts and incorrect facts is just so meaningless
Yes, there are facts. In your example, if I say "person X said Y" and have a video of them saying it, I am stating a fact. I am not dealing in beliefs and I don't give a fuck whether or not the person still supports the statements they made or not. I am saying that at some point in time T, person X said Y and this fact is on record.
You are correct, there are literal truths. Some person factually used some set of words in a particular order. The part that people care about is how we interpret those words. Something being a fact does not make the implication drawn from it correct.
In an extreme example, if someone says "There are extremists in the world, and I am not going to be one of them", I can quote that as "There are extremists in the world, and I am ... one of them", which is factually true, that is a portion of what was said. The interpretation is the polar opposite, but it is factually true.
Likewise, there are facts that are misleading, but true. "The Earth is at the center of our solar system." It's a perfectly valid statement; the motion of the planets can be modeled with the Earth at a fixed point and the other planets moving around it. It's messy, and the movements look extremely erratic, but it's perfectly valid to model it that way.
There are also facts that are true, given a set of circumstances. "I weigh 7 pounds". It's not true on Earth, but in the correct gravitational field, I do weigh 7 pounds. Determining whether that statement is true depends on the implication; was I implying that it is true on Earth or not? That depends on the context of that statement, and on how you interpret the context.
> In an extreme example, if someone says "There are extremists in the world, and I am not going to be one of them", I can quote that as "There are extremists in the world, and I am ... one of them", which is factually true, that is a portion of what was said. The interpretation is the polar opposite, but it is factually true.
Not so extreme considering this kind of editing is basically what led to Count Dankula's court case and conviction.
> Why is it wrong to say explicitly offensive things
Because they are offensive
> and why should someone be legally punished for it
Because that is what legal systems are for. Punishing people who do things we, as a society, find objectionable
> Who gets to decide what is and isn't offensive and to what degree?
That is a rather general question, but generally speaking, the law makers and the legal system decide things like that.
Now, mind you, personally, I support a rather radical version of freedom of expression. I think almost every kind of censorship is ultimately detrimental to society, but I am not naive enough to ask "but who gets to decide" or "but why can't I".
It wasn't satire; sharing and reporting on it removed the joke context, leading some people to infer satire and others to infer he actually held nazi beliefs.
The original video started with him saying something like "my girlfriend thinks her pug is the cutest thing in the world, so I'm going to turn it into the worst thing I can think of".
On the extreme end, a video can be totally fake. Slightly less extreme, a video might be "fake" in that it's an actor playing a role, and the words they're saying belong to the character and not the actor. More commonly, a video might be taken out of context, so that the apparent meaning of a statement in isolation is the opposite of what it originally meant. (Compare "I hate group X" to "I think people who say 'I hate group X' are jerks.") Modern politics is effectively a distributed indexing system for exactly these most ambiguous cases, coupled with large media apparatuses for broadcasting them.
And these are the problems with the simplest sort of fact, "A said B." That's not even the majority of misinformation out there. We usually have to contend with even worse ambiguities, like whether A "caused" B, or whether A "supported" B, or whether A "criticized" B.
In everyday life of course, these problems are solvable. We find ways to punish people for acting in bad faith, and the incentives mostly line up. But that is very much not how things work in politics or in law. People have large incentives to seek out ambiguous cases. These edge cases become the typical case.
Sure. And either you can prove it or not. But that is not what I am arguing against. The way things seem to be working today, is that you say "person X cures cancer" and I say "person X eats babies", neither one of us has any proof whatsoever, but that is OK because nobody ever asks for proof because "hey, there is no such thing as facts anymore"
That's fair. It's definitely possible to go too far with this. I guess what I want to argue is more like this: The problem here is sufficiently hard that, even though we can make definitive judgments about facts and misinformation in some cases, the value of doing so in practice will be low. But of course that's a pretty vague blanket statement that I'm making without any concrete support, and it would probably be better to look at the pros and cons of some specific idea about what the rules should be.
There is still the challenge of the context of "person X said Y" Were they saying it sarcastically? Were they quoting someone else? Does Y include the content said before or after? That can meaningfully change the "fact"
> Yes, there are facts. In your example, if I say "person X said Y" and have a video of them saying it, I am stating a fact.
Assuming you actually do this (which I don't believe you do, but will accept for argument's sake), it is entirely meaningless. Mouth-sounds are not important in and of themselves; you hear the mouth-sounds and interpret them to try and guess how that person will act in the future. You are building a predictive model of that person.
Let's say there is a video of a politician (clumsily) saying "I am a rapist hater. I will increase the punishment to try and dissuade these awful crimes and make all our lives safer." You argue it is a true fact that the politician is on video saying "I am a rapist." My point is that "true" in this sense is nonsense, and furthermore "true" in any absolute sense is always nonsense.
If your head is a list of quotes people have said that simply don't feed into your impression of them, then you are in a minority so small it's effectively 0 on a global scale.
> Yes, there are facts. In your example, if I say "person X said Y" and have a video of them saying it, I am stating a fact. I am not dealing in beliefs and I don't give a fuck whether or not the person still supports the statements they made or not. I am saying that at some point in time T, person X said Y and this fact is on record.
There are clearly facts. But people don't spend a lot of time arguing over whether "person X said Y". Rather, we argue over the meaning of Y and the context in which it was said.
Political discourse has almost nothing to do with facts. That's just an observation, not necessarily a criticism.
You might be saying "they said this", but what your audience reads is "this is true".
To take a recent example:
"A renowned firefighter says that bushfires have mostly been caused by arsonists, exacerbated by conservationists protesting harm reduction burns, resulting in an exceptional fire season this year."
"A renowned firefighter says that climate change has reduced the safe period to perform harm reduction burns, and has contributed to drought conditions, resulting in an exceptional fire season this year."
These are both facts. But which one you report on will give your audience an entirely differently picture of the situation.
> The discussion _requires_ framing, and so does discussing "misinformation and extremist content"
The framing is usually implied instead of spelled out precisely. At the same time, I don't see you arguing against this (probably shitty) implied framing and instead just pretending there is none. This makes it hard for me to take seriously, since it looks so similar to the "there is no perfect security so why even try" line of arguing.
And yes, my alarm bells also went off by AG Barr talking about "misinformation and extremist content", even though I actually do think we should carefully observe whether current online platforms / communication changes our societies and if in a favorable way.
> since it looks so similar to the "there is no perfect security so why even try" line of arguing.
What does "misinformation and extremist content" mean? Because that is the first point. Remember that the GOP called out large companies for "silencing the right" when they enacted policies that reduced misinformation and extremist content in the past. "Misinformation and extremist content" is thus itself an ill-defined thing, as different people consider different things to fall under it.
The framing usually used is that "if the big companies are liable, the content won't be there," which history has shown is only true insofar as "if the big companies are liable, user-made content won't be there," because huge, restrictive filters need to be put in place to avoid the liability.
> even though I actually do think we should carefully observe whether current online platforms / communication changes our societies and if in a favorable way
Feel free to talk about it, you are not trying to enact policy. AG Barr is involved in creating policy and thus should be careful about exactly what he suggests rather than invoking the same liability boogieman that hasn't served us in the past.
> this perception that there are a set of correct facts and incorrect facts is just so meaningless.
While there are large grey areas in the middle, where subjective opinion, context, and fuzzy word definitions make truth values complicated, there are also some very clear black-and-white areas where blatantly false concrete claims are being made and spread that have no basis in reality. I think there are ways to regulate the pushing of blatant disinformation while leaving the greyer borders up to individual discretion and discernment.
Perhaps. But who gets to decide which side of the line any given statement is on? Imagine, for example, that the person appointed to decide that is an anti-vaxxer. Or that they just left a gig at Monsanto/Bayer, and feel like all claims that Roundup/glyphosate might cause some harm to someone are blatantly false. Or the person appointed was a Democratic activist. Or a Republican activist. Pick your poison. And now this person is given the power to decide which claims are blatantly false, as opposed to grey.
While there are claims that are black and white, and claims that are grey, empowering someone to enforce that clearly-false claims cannot be published is going to bite you, because that person is going to have their own perspectives, biases, and (probably) agenda. It takes something close to a saint to not inflict their own agenda, and saints are in short supply.
> Perhaps. But who gets to decide which side of the line any given statement is on?
Umm... we have an existing legal system that already does this. It certainly isn't perfect, but your argument seems to boil down to: we don't have a perfect legal system, so we shouldn't have any laws.
I don't see any reason why we can't have laws that make it illegal to run around and hand out pamphlets that deliberately misinform people about if / when they can vote.
> Roundup/glyphosate might cause some harm to someone
You seem to be using an inherently grey claim to make your point. Of course glyphosate might cause harm to someone, there is almost literally nothing that has no possibility to cause harm to someone. This is exactly the sort of claim whose truth value depends on the context and meaning with which you interpret it.
This is different from a claim like "I have a bachelor's degree from UC Davis" or "You cannot vote without a drivers license"
That is not at all what my argument boils down to. My argument boils down to this: Having some anointed authority that gets to decide such things without having to follow the due process of the court system is a dangerous thing.
The point about glyphosate et al is this: Sure, those are definitely grey. But the authority could claim that they are black and white at his own discretion. If he decides against you, you then have to go to court, with the default decision being against you, in order to be able to be heard.
Several posts upthread, you said: "I think there are ways to regulate the pushing of blatant disinformation while leaving the greyer borders up to individual discretion and discernment." So, who gets to regulate? That's your authority that is "other than the courts".
Sure, there's no black or white answer to what is misinformation or extremist. But regulation and the courts aren't black and white either. We regulate the content of broadcast media just fine without hand-wringing about the grey areas.
Compared to internet UGC, the flow of new content into broadcast media is minuscule. According to a random Google search, three hundred hours of new video are added to YouTube each minute. It's unlikely that much new broadcast media is created each hour. And YouTube is just one of many sites. It wouldn't surprise me at all to find that UGC is created at a rate of more than 1000x that of broadcast media. That difference in scale, and the difference in economic value per unit of content, makes a solved problem in one medium a tricky challenge in the other.
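A rough back-of-the-envelope comparison (just a sketch: the 300-hours-per-minute figure is the one cited above, and the broadcast channel count is a made-up assumption):

    # Rough scale comparison: YouTube uploads vs. broadcast output.
    # 300 hours/minute is the figure cited above; the channel count is
    # an assumption for illustration only.
    youtube_hours_per_day = 300 * 60 * 24            # 432,000 hours of new video per day
    assumed_broadcast_channels = 1_000               # hypothetical number of channels
    broadcast_hours_per_day = assumed_broadcast_channels * 24   # 24,000 hours, even if all run 24/7
    print(youtube_hours_per_day / broadcast_hours_per_day)      # 18.0 -- YouTube alone outpaces them ~18x

Even with generous assumptions for broadcast, one UGC site dwarfs it before you count the rest of the internet.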
While I'm sympathetic to your point, sometimes appeals to complexity like this are asinine. There are plenty of complex problems which yield, to at least a degree, to effort.
Historically, cultures have modulated themselves in a variety of ways, including censorship of objectionable speech. Even the United States has maintained, in the past, sweeping norms which controlled what kind of rhetoric entered the public sphere. No one would suggest that these norms were always well applied, coherent, or good. But it's equally challenging to make the case that because of that ambiguity we should simply do nothing at all.
The ability of companies like Youtube or Facebook to shift culture is probably real and thus probably demands a response, imperfect as it may be.
I think the problem of determining whether a company is secure, given a particular attacker, is one that needs to be worked on.
Unfortunately I haven't made any progress in discovering a solution that would accurately determine if a company is secure or not.
Given that an org's security team, or even the world (in case of zero days), may not be aware of vulnerabilities that exist in a company's infrastructure, how do you solve this problem?
If anyone wants to chat about this, I'm at veeralpatel4@gmail.com
It's clearly a complex legal issue, but is there any logical reason why the internet should be treated differently from print and broadcast media in that regard?
The only reasons I've heard are related to how -current- internet content providers' business models work. Those companies' business models are heavily enabled by the CDA itself, so isn't it sort of begging the question?
Huh? Are you saying Google and Facebook don't exercise editorial control? They definitely do, and the whole point of Section 230 is to give these companies editorial control without the legal liability that goes with it.
If I'm publishing a newspaper, I decide which articles run and which ones don't. I decide on every article. I decide on every editorial. I decide on every letter to the editor. So, I'm responsible for every word.
Is Facebook responsible for every word of every Facebook post? Have they decided whether to publish it or not? No. (Yes, they do have certain filters that try to exclude certain things, but that's all. A newspaper would be different if it published every letter to the editor it received, so long as it didn't contain the words "child porn", for example.)
I should have realized this in my first reply, but your argument is exactly what I mean by begging the question.
Facebook and Google occupy a legal grey area between publisher and common carrier that was literally created by the CDA. So your argument is like saying, "Vaping has to be legal because how else is Juul going to stay in business?", or "Loot boxes have to be legal because they're so important to EA's and Activision's business model." These companies only exist because of a certain legal environment, so how can their existence be used to justify that same environment? Are Google and Facebook now considered inalienable human rights used to derive laws?
Maybe Google and Facebook would be able to survive a change to the CDA, or maybe not. Who cares? The internet ran just fine before the CDA and the technology hasn't really changed much at all. Heck, the two of us are conversing right now without Google and Facebook involved.
Everyone gets so bent out of shape these days about Google and Facebook but no one is willing to even imagine an internet without them. Present company excluded, of course -- at this point I'm just ranting.
> A newspaper would be different if it published every letter to the editor it received, so long as it didn't contain the words "child porn", for example.
This is not true. It has nothing to do with the choices of the editor and everything to do with the medium used. The CDA carves out a specific immunity for internet companies that does not apply to print and broadcast media and that's what Barr is putting on the table here.
I was thinking more about it and the Juul/EA comparisons aren't quite right. A better comparison would be saying in 2006, "of course Glass-Steagall should have been repealed, because how else would these huge banks be possible?" That's another case where the business model was truly enabled by a legislation change like Google's and Facebook's models were enabled by the CDA.
My favorite thing about the "publisher" vs. "platform" rabbit hole people of a certain political persuasion seem to be burrowing through as a not-so-veiled threat towards service providers that "censor" posts consisting of pictures of Michelle Obama photoshopped to look like a gorilla is the delusional alternate-reality plane of existence on which they seem to reside where they think that going through with the threat will mean that their preferred content will be more likely to be hosted.
I think we need to stop trying to fit these things into old laws that weren't written with them in mind.
Twitter isn't a telephone company or a newspaper. I think for the most part they should have the liability protection that a telephone company has. But users do want moderation. They often want to restrict what they see to posts by people in their own echo chamber. They want the ability to flag things as spam or abuse. They want to be able to block people. They want posts taken down if enough people complain about them. And to extend that further, they often won't mind if there's a system in place to automatically do the above without their prior action.
The problem ends up being bias, even if it's just perceived, and not real. If a certain group thinks "the algorithms" are suppressing their speech, then the algorithms are either bad, or aren't transparent enough to prove that they're unbiased.
At the end of the day, people believe that these companies have an agenda that they push by shaping discussion in certain ways. Whether true or not, the best way to combat that is complete transparency, or just no filtering or reordering at all.
It is easy to ascribe bad motives to a person of a different party affiliation and to assume this is just selective application of the law to advance political goals.
However, there is another way to look at this different from my own political leanings. As little effort as Democrats put into anti-trust prosecutions, Republicans (of the past 30 years) have been anti-anti-trust. In the late 90s when the DOJ had Microsoft on the rack, nominee Bush said he'd stop the antitrust effort. In fact even though MS had been found in violation of antitrust laws, then President Bush stopped the effort to break up MS and instead they were told to make relatively minor changes in their behavior.
In Sweden they are probably responsible for moderating user content. The Swedish law is called BBS-lagen, the bulletin board system law. Yes, the law is a bit old, but it regulates content published by users and makes the host of the content liable to some extent for the data published on its platform.
> “No longer are tech companies the underdog upstarts. They have become titans,” Barr said at a public meeting held by the Justice Department to examine the future of Section 230 of the Communications Decency Act.
> “Given this changing technological landscape, valid questions have been raised about whether Section 230’s broad immunity is necessary at least in its current form,” he said.
...all I can think of is “well, here comes the state-sponsored moat.”
If they weaken these protections, the big four will just hire a few more entire buildings of minimum wage content moderators (like most of them already have running) and it’s curtains for small entrants.
It makes me really sad to see the US thinking about shooting its only real growth industry in the foot.
Edit:
> while a few Democratic leaders have said the law allows the services to escape punishment for harboring misinformation and extremist content.
It’s also terrifying to think that parts of our government want to explicitly punish people for hosting legal content that they don’t like to read.
My big Q is what problem they are trying to solve here. As a not entirely impartial observer, it seems like what's actually happening is that Barr wants a way to punish tech companies for declining to broadcast certain far right voices. Like, does anyone actually believe the world is going to be a better place because these companies have to defend themselves against an infinite stream of proxy libel and tort lawsuits? Is user generated content viable at all in a world where the hosting provider takes on liability? Could this site exist in a world where these rules are implemented fairly?
Maybe we should just get rid of liability for authors and publishers, then, also?
It's fair to ask how big of a problem libel is. It's hardly enforced now, anyway. False accusations fly, with no consequences. Reporters at major outlets repeat accusations with no evidence ("sources say that ... did ...").
The reason to connect liabilities to publishers is that publishers have the ability to respond to those incentives. This is not like UGC websites. UGC websites can't practically respond to content liability proactively in their current economic model, since the marginal unit of content has such low value but imposes such high risk.
Therefore, if we tried to connect liability to the UGC business, the semi-democratized form it currently has would have to die. Instead, you'd have a situation much more like the broadcast model where producers and content would need to be vetted beforehand. Small-time producers without much economic value would not be permitted onto UGC sites because the risk of liability would outweigh the benefits of hosting.
(I don't see YouTube going in the direction of 8chan. It would be too damaging to the brand. So the idea that there should be no moderation is right out the window.)
That's the reason why these provisions exist: without them, the businesses cannot exist. Now if you think it's a problem that teens and mommy bloggers and small time streamers can upload home movies to YouTube, you might applaud this result. But personally I don't think there's a problem with that segment existing on UGC sites, so I think it's a terrible idea to connect liability to UGC sites.
User-generated content can be viable if the users host/publish it themselves. That's usually the response when tech companies boot pages or accounts (i.e. private companies exercising their rights).
Well, it’s not just far right stuff they’re censoring in bulk, so in a weird way I can see where he’s coming from.
For example, YouTube has been taking down legal instructional gunsmithing videos (one of my hobbies) because guns. Other things that scare people but aren’t associated with a US political axis position, like the lockpicking lawyer, stay up.
They are absolutely editorializing (see also: Apple’s squirt gun emoji, which I would find unconscionable even if I weren’t a firearms enthusiast because I am an accurate communications enthusiast first and foremost) and their behavior needs to change before it hurts our society even more.
My friend, for example, lost a meme page he ran on Facebook with hundreds of thousands of readers for posting a male nipple (the man was wearing makeup). The censorship is not just political. They are burning books and destroying value: art, teaching, literature. It’s not okay.
I just don’t think this is the right approach to it.
I’m talking about censorship of legal content. These companies serve as sort of a web host, but also an application service provider.
I have friends and acquaintances who have worked hard to build audiences, and then these platforms took that away from them via censorship. Of course, you can make the argument that it was really Facebook’s audience the whole time, but that ignores the content being generated by my friends which drove the whole thing.
Useful instructional videos and webpages I liked watching/reading and wanted to revisit had been deleted in the intervening time.
It’s destroying value, because these sites function as content-neutral hosting... until suddenly they don’t one day as the censorship hammer comes down. I have been telling people to start hosting on their own domains instead. This is the only way we can disintermediate our audience we work so hard to build.
The gun emoji one is more subtle. Emoji are, recall, plain text. Changing the rendering of plain text to alter its meaning is really, really bad. Here’s an example scenario as to why such ambiguity is harmful:
Imagine you’re a parent. It’s summer, and it’s hot. You have a squirt gun battle with your child. The next morning, you take them to school. (You’re a parent in the southern hemisphere, probably.)
An hour later you get a text from them: johnny brought a (gun emoji) to school
The rendering of this image is platform-specific. Their message is either harmless context-referencing fun, or cause for the most major of alarm. Which is it?
Depending on the sender’s device, they could be indicating a toy gun or a real one. Depending on the receiver’s device, it could render the same or differently.
Regardless of how you feel about weapons, surely you can recognize that introducing that communications ambiguity introduces a safety issue because it just impaired communications.
Now realize that they did so as a political statement.
That’s not okay, any more than changing the rendering of any other letter of plain text. I really, really do not want my text handling and rendering systems editorializing to me. Ever.
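To make the ambiguity concrete (illustrative only; the glyph you see depends entirely on the vendor's font, and this is just a sketch in Python):

    # The "gun" emoji is a single codepoint, U+1F52B (PISTOL). The bytes sent
    # are identical on every platform; only the receiver's font decides whether
    # it is drawn as a revolver or a toy water pistol.
    msg = "johnny brought a \U0001F52B to school"
    print(msg)                       # rendering depends on the viewer's platform
    print(hex(ord("\U0001F52B")))    # 0x1f52b -- same codepoint regardless of rendering

The sender and receiver exchange exactly the same data; the meaning gets changed downstream by whoever drew the glyph.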
In case people think that your "text message with gun emoji" hypothetical is too contrived, it is worth remembering that software systems making assumptions about how to render characters, even with the most innocent of intentions, can occasionally result in disastrously unintended consequences:
I am not a gun enthusiast like the grandparent commenter. However, the toy gun emoji is an act of editorialization and politicization. Apple (and some others) took an emoji that had a particular intent and changed it to something else, to better suit their ideological views.
The harm is that it disallows citizens from expressing themselves precisely and accurately. It makes it harder to send communication involving guns (for example reacting to a comment with a gun emoji) and it sends guns to a less visible medium (full text). This change is a political stance on guns and therefore editorialization, since it diminishes the opinions of those who back the 2nd amendment. Likewise, taking a hotly debated politicized topic like transgender people, and adding gender neutral emojis, is also editorialization. And likewise, Apple fighting back to prevent a rifle emoji (https://www.theguardian.com/technology/2016/jun/20/apple-rif...) is editorialization.
Censorship on Facebook and YouTube also goes beyond simply removing content.
If the almighty algorithm decides your stuff shouldn't be seen, it won't appear in people's feeds, drastically reducing your reach. That's another huge problem.
FAANG will need to decide if they're platforms or publishers. They currently moderate the communities, albeit selectively, while enjoying the protections granted by being a platform. This can lead to abuse of power where only select viewpoints are moderated out because unaccountable corporate leadership says so.
It's correct to be thinking about this, notwithstanding the fact that I place little faith in the federal government to produce the correct outcome.
Where did people unearth this "platforms" vs. "publishers" argument from? It looks like a political tool someone with a dark agenda dug out of some dark swampy corner of politics...
Everything is both!
Of course you'll have more and more proprietary algos and black-box ML/AI systems filtering content. And of course the content can't be manually reviewed (because SIZE), and you can't have the platform be responsible for it (what does it even mean to "be responsible for content"?!).
It's sad there's not much we can do to regulate away some of the toxic info (like the "platform vs publisher" meme currently floating around...), but here it's actually a case where doing nothing and stomaching some of the bad consequences is 1000% better than doing something! Because that "something," whatever it may be, will limit fundamental freedoms simply by existing in this space!
Let’s be clear. The “conservative” voices that have been moderated out were removed not because of the PC police, but for obvious reasons that violate ToS (spreading racism, inciting violence, etc). Look over [1] and show me this consistent “abuse of power”.
I don’t work for twitter so I can’t speak for them. Trump is still on twitter saying all kinds of horrible things, and I can only presume many others are as well. I’m not sure what triggers crossing a line, but I don’t agree with this accusation of some great anti-conservative conspiracy. I see this often in reaction to Alex Jones, who spreads all kinds of lies that have led to violence.
Note it’s not like there are humans evaluating each and every tweet, so it’s going to be inconsistently applied.
I also find it interesting that since posting on HN someone is trying to reset my Facebook account. Talk about censorship...
Google are also taking down educational gunsmithing and hacking videos, and remixes of music videos and music that people have made—fair use.
Reddit has been censoring their less mainstream-friendly BDSM porn subreddits. To appear on most of the site your subreddit has to be whitelisted, now.
It’s not just politics. They are defining broad guidelines of what categories of (legal) things you are allowed to publish or not.
I imagine Netflix considers themselves a publisher, Amazon is the most interesting test case imo- with both their content and physical goods marketplace.
I think they have to move to purely mechanical moderation. It's the only way. Keyword and user reporting. Everything else is a mess politically (and that will end up being a political mess).
That has never worked, and never will work. Keywords are meaningless without understanding the surrounding context. User reporting fails when activist users engage in brigading to flag posts that they don't like even when those posts don't violate any laws or terms of service.
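A toy example of why bare keyword matching misfires (a hypothetical filter; the sample phrases echo the "I hate group X" example earlier in the thread):

    # Naive keyword filter: flags any post containing a banned phrase,
    # with no notion of the surrounding context.
    BANNED_PHRASES = ["i hate group x"]

    def is_flagged(post: str) -> bool:
        text = post.lower()
        return any(phrase in text for phrase in BANNED_PHRASES)

    print(is_flagged("I hate group X"))                              # True -- the intended catch
    print(is_flagged("People who say 'I hate group X' are jerks."))  # True -- false positive: the post condemns the phrase

Anything smarter than this needs context, and context is exactly what gets contested politically.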
> sad to see the US thinking about shooting its only real growth industry in the foot.
It appears you are not from the US. Can you elaborate more on your perspective on why you think this to be the case? I am genuinely curious. (Just for full disclosure, the top 10 industries experiencing large growth at the moment are mostly related to construction or engineering in the traditional sense, not much hyper growth coming from web apps these days)
I was born in the US, and I spend about half each year there.
I think it is tradition in the US that whenever there is a new market or opportunity, to both celebrate the early entrants (in the case of tech, the persistent “two founders in a suburban garage” meme) and then also ignore the fact that it’s routine to make it as difficult as possible to replicate the conditions that lead to new companies or new people having wealth creation opportunities. I’ve heard it described as “pulling the ladder up behind you”.
A lot of people play the game as if it’s zero-sum.
It could also just be that I’m somewhat bitter because I have about five great ideas for companies that would make the world markedly better and more useful, and they have all recently been made either de facto or de jure illegal.
I hate that we’re almost always stuck with large incumbents who rent-seek based on their position, or seek for the government to make it harder/illegal for others to use the same general wealth-creation methods that they did.
Witnessing that level of selfishness hurts my soul, especially leveled against those who are most able and willing to build new businesses, cool products and services, and jobs.
When we used to run BBSes, we were repeatedly warned by lawyers and courts that if we start actively manage the content of other people's posts, we become publishers and our legal protection vanishes.
Why is this not the case for large social media orgs that do exactly that?
Look... if platforms become responsible for content published on them, it is the end of free speech. Period. You want THIS?!
The point would be to limit/regulate targeting: either (a) they're a no-login, no-user-personalization place and do no targeting, so everyone gets a random sample from the same content (I'd really prefer this!), or (b) it needs to be very clear what kind of targeting is allowed... and the line gets very blurry here; amplifying hate speech for clicks and eyeballs can probably pay well, and there need to be ways to solve this problem...
Of course they should, because they moderate content. (you can't have it both ways... you either moderate or don't... but if you do moderate, you are responsible for what you let through)
>with any alterations to one of the internet’s key legal frameworks likely to draw unexpected consequences. “It’s hard to know exactly what the ramifications might be.”
Since there is no direct bridge to the digital money, power and influence, analog types will wreck the whole thing trying to implement legislation to give them any kind of foothold on all of that easy profit.
The lack of influence/sway will eventually drive the traditional powers to contrive the shortest-term solutions to destabilize the ecosystem. Its more than a "war of words" at play.
Should US AG Barr be liable for (or bound by) comments/tweets by President Trump?
This isn't as tight a parallel as I would like. But when I make a post on HN, say, it's my words and my opinion, and does not represent the opinion of HN (even though they moderate). I don't speak for HN; they don't speak for me.
In the same way, when Trump sends his tweets-of-the-day, that doesn't speak for AG Barr or the DOJ (despite Trump's idea that he is the chief law-enforcement official).
As I said, that isn't quite as tight as I would like it to be. But it's something that Barr should be able to understand at both an intellectual and an emotional level.
I tend to believe that the only path forward is for these "global platforms" to become more sharded, allowing smaller, more focused communities to thrive and self-moderate.
Platforms like Reddit, Discord, etc have "tiers" of moderation whereby community leaders handle the day-to-day moderation of individual content posted within the community, yet there is still Big Company Inc. at the top capable of moderating entire communities (you can't create a subreddit focused on school shootings, stuff like that). These platforms have problems; there are problems intrinsic to any situation where Speech and Social Interaction is involved. But their problems are far less in both magnitude and quantity than the global platforms.
It seems to me that holding any organization or moderator liable for what people post on their platform would have a Supreme Court-level case on their hands concerning the first amendment. Who would win, I don't know, I'm not a lawyer, but that feels like the ground we're treading on.
Tech, aviation, and agriculture are some of the areas where Americans are the world leaders by far, and yet the American government is totally set on hurting these very industries ("we'll break up the evil Google") and so on.
The Section 230 saga just shows how dangerous it is for the government to interfere in industry.
The CDA was passed when people were scared of the internet and looked to government to protect them from its evils. Section 230 was added to save "the little guy" from becoming collateral damage of this legislation.
Fast forward 30 years and these "little guys" have grown into the scary forces that everyone wants the government to protect them from!
Imagine if ordinary people had been allowed to sue Google and Facebook over this time. There's good reason to think that no one would have been able to monetize the internet in such a way as Google, Facebook, etc. if not for Section 230.
I don't think anyone in Congress is interested in repealing Section 230 but I'm glad people in Washington are at least talking about it.
It says there is no need for an argument because there is nothing to justify it.
"Should it be legal for the government to kill and harvest the organs of underperforming schoolchildren?".
"No. Full stop." Is a stance that there is nothing to even argue.
In Sweden they most probably are, under a law called BBS-lagen, the bulletin board system law, which makes the provider of the content liable for it to some extent.
Should Facebook, Google shield users from the legal consequences of posting illegal posts?
If we were using e.g. Ted Nelson's Xanadu (instead of the WWW) every post and link would have provenance information and it would be technologically feasible to make the original source of a given piece of illegal content liable for the legal consequences of publishing it, as well as each and every person/entity that promulgated it across the network.
As it is now, these platforms omit or delete provenance information, making it technically impossible to moderate at scale.
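As a rough sketch of what per-item provenance could look like under a Xanadu-style model (the record layout and field names here are hypothetical, not anything Nelson specified):

    # Hypothetical provenance record: every piece of content carries its original
    # author plus the chain of accounts/services that re-shared it, so responsibility
    # could in principle be traced back along the chain.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProvenanceRecord:
        content_id: str
        original_author: str                                   # the party who first published the content
        shared_by: List[str] = field(default_factory=list)     # every entity that promulgated it

    record = ProvenanceRecord("post-123", "alice@example.org")
    record.shared_by.append("mirror.example.net")              # each re-host appends itself to the chain

Whether platforms would ever preserve rather than strip that metadata is, of course, the whole question.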
The protections designed for phone companies, etc., make perfect sense: the phone company is just facilitating communication in a content-neutral way. Phone companies should not be responsible for knowing or caring what content is shared, even if it's some kind of slander or treasonous plot being discussed.
But does that apply to web platforms that aren't content-neutral? I think probably not. There is such a huge volume of communication that they should have some protections built in, but not blanket protection.
> Phone companies should not be responsible for knowing or caring what content is shared, even if it's some kind of slander or treasonous plot being discussed.
What if they influenced the content that each side hears, e.g. add strategic gaps where they leave out key noises, to make the phone call go on for the maximum amount of time (because they charge by second)?
Conservatives want platforms to moderate in a politically neutral way. Passing a law requiring such would be unconstitutional as it would violate the free speech of those companies. Making section 230 conditional upon political neutrality might not be unconstitutional. No Internet platform would ever risk operating without section 230 protections, so they would essentially be forced into political neutrality. So the same effect would be achieved.
Nobody is seriously considering simply removing section 230; that would be devastating to the economy and to free speech both. Any such assertions are no more than sword rattling and idle threats.
Neither is anybody seriously talking about ceasing all moderation entirely. Platforms would become flooded with spam, among other things making them virtually unusable.
Where this all gets very complicated very fast, IMHO, is in how you define political neutrality. And I'll stop here because that's much too long of a discussion to have in a HN comment.
This is such a disingenuous framing from the AG as well from media outlets who keep misrepresenting section 230.
That law isn't about protecting Facebook or Google it's about ensuring that anyone can express themselves online without needing a highly paid lawyer and a protracted trial to do so.
It also isn't about publisher vs. platform, section 230 protects the Times from being sued for comments on their website same as for any bigger or smaller operation.
It's tragic how the powers that be in this country are trying to insert a lawyer into every transaction like it's a jobs program, and the infuriating part is that they are trying to convince people that it's for their own benefit.
The cynic in me says this is intentional. The more control over facts there are from any source the more limits there are in general to the spread of information and general discourse. From the governments perspective this is probably desired.
> That law isn't about protecting Facebook or Google it's about ensuring that anyone can express themselves online without needing a highly paid lawyer and a protracted trial to do so.
If you post something defamatory on Facebook, you can be sued, but under Section 230 Facebook cannot. I happen to think that's a good arrangement, but it's definitely about "protecting Facebook or Google," and not about protecting the individuals "express[ing] themselves online."
If Facebook were liable they wouldn't allow the posting or even the possibility of posting in the first place, the law protects individual rights by keeping the liability onus on the author as opposed to the "publisher".
I don't know how Barr expects to have a civil discussion about any topic in the midst of what he did with the Roger Stone prosecution, and in the midst of this presidency.
The public and his own Justice Department cannot have a reasonable discussion with him when his behavior and actions up to this point have almost all appeared to be for one purpose: to help the President and his supporters in criminal matters.
The question we all find ourselves asking is: "So how is this going to benefit the President at everyone else's expense?" and even if it doesn't benefit him, it colors the entire discussion in a bad light.
His department is recommending a multi-year sentence for one of the president's closest allies, while declining to even prosecute some of the president's adversaries (Comey, McCabe) for a similar offense. Not sure exactly what you want from Barr here.
There's such a thing as professional compartmentalization. If you don't like a colleague's/vendor's/customer's/government's behavior, you can (but not necessarily should) quarrel about that and that only. When you start to act as if a slight (however large you perceive it to be) is enough to blockade all relations in everything you do together, the initiator looks weak. A decision to halt all relations has far-reaching impacts: if you are a business, it almost always affects your employees and could easily affect your customers; if you are a government, it affects both countries' citizens and possibly others as well; if it is between colleagues, it affects everyone below you.
In the case of Barr or Trump, we are talking about effectively shuttering government progress/modernization. What does this serve? We don't get these months/years back; the beat of progress marches on regardless. It's unreasonable to think that we are just a few years away from a sweeping blue tide of progressives (or a red tide of conservatives under Obama) that will have the whole of Congress on their side to make huge reforms, or whatever it is that would make the USG quickly modernize. Our system of governance stalls with this sort of behavior, and it only really serves entrenched interests, if anyone.
His actions call into question all of his professional judgment; that's not compartmentalizable.
Considering the gravity of this issue, reasonable people can conclude that we'd be better off "wasting" these years and months and returning to the idea later, when someone with more judicial independence is in the position.
So don't end the discussion. What I said is still true, the discussion is colored in a bad light because of his behavior.
In that light, it is impossible to think about the issue objectively without legitimately worrying about what Barr in particular is going to do with it. It's not hard to imagine how he could abuse new powers over Facebook and Google to benefit Trump in the 2020 election.
If he's already selectively applying criminal justice against opponents and not allies, do you think he'll apply fair and objective justice against Facebook and Google?
The sentiment expressed by many in this thread would flip 180 degrees if Zuck, e.g., one day woke up and decided he doesn't like commies (which would be a very reasonable and amply justified opinion, in my view), and had his underlings at Facebook censor the entirety of Bernie Sanders' presidential campaign from the network.
My position on the issue is simple: if a site owner censors/throttles/shadowbans/detrends/etc. _any_ legal speech, they're a publisher, and they should be liable for the stuff that remains on their site. Don't want that? Be a carrier and don't censor legal speech. Nothing could be easier.
I share the exact same sentiment as you, but I don't know if I'm being fair about how ugly an unmoderated forum can be.
For example, spamming advertisements in comment sections is completely legal speech. So would be typing in gibberish and hitting enter a thousand times. But both of these things would ruin the point of the forum.
Even if I compare Hacker News to Reddit, the former is consistently high quality while the latter is 95% garbage, in my opinion. Why? Probably because Hacker News is far more highly curated.
At the same time, I feel like YouTube has crossed the line with idea suppression. YouTube gives the impression that it's an open, non-biased platform that links you to content you're interested in. But there are several egregious examples where popular videos with near-mainstream "conservative" viewpoints are suppressed into oblivion (e.g. the video appears on the 12th page of results even when you search for its title verbatim and it has millions more views than every other "relevant" result).
But, just because I can give examples of things that cross the line (suppressing popular conservative videos) vs. things that don't (suppressing random gibberish, suppressing bot-created videos)... I don't think I could clearly articulate any rules to say exactly where that line is in a way that is scalable. YouTube created YouTube... I might just have to defer to their moderation policies while hoping another competitor comes along to challenge them.
Well, we did make robocalling illegal, right? We could do the same for ad spam quite easily. IMO 99% of YouTube issues would be resolved by just not showing comments by default. I.e. you're welcome to read and write comments if you like, but you have to click a button first to show the comment section, and don't have to see them otherwise. Some sites already do this.
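For what it's worth, the "hidden until requested" pattern is trivial to build; here is a rough sketch in TypeScript (hypothetical function names, not YouTube's actual markup or API):

```typescript
// Sketch only: comments are fetched and rendered after an explicit click.
function setupCollapsedComments(
  container: HTMLElement,
  loadComments: () => Promise<string>, // assumed to return pre-sanitized HTML
): void {
  const button = document.createElement("button");
  button.textContent = "Show comments";
  container.appendChild(button);

  button.addEventListener(
    "click",
    async () => {
      button.disabled = true;
      container.innerHTML = await loadComments(); // replaces the button with the comments
    },
    { once: true },
  );
}
```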
The Trump administration complaining about "harboring misinformation". The ironing [sic] is delicious [1].
There is no universal objective truth. Specifically, there are things that reasonable people can disagree about, and the same set of facts can be used to argue different positions. This fact is abused by the mentally challenged to argue ridiculous positions (e.g. anti-vaxxers, the Moon landings are fake, that sort of thing).
Likewise, as seen here, one side will argue those who disagree are engaging in misinformation (and in the Trump administration's case, from the President down there are multiple claims per day that are demonstrably false, such that no one can really keep up). The agenda is to silence the opposition and undermine confidence in any sort of news.
ISPs were given safe harbor from liability for traffic on their network, for good reason. They just need to comply with certain standards. Tech companies really are no different and to argue otherwise would set an incredibly dangerous precedent (IMHO).
Yes absolutely. They exercise editorial control even if they attempt to disguise it behind “algorithms”. Everything posted on Facebook should be treated as if it was a newspaper article, for all legal purposes.
So any Internet forum which has to moderate content (because there is illegal content for which the law doesn't provide immunity) is a publisher now and has to be able to fully vet everything everyone posts?
Yeah, easy yes from me too. "Oh but we won't be able to make a business out of exploiting user data and exposing hundreds of low-paid workers to psychologically damaging material anymore" right, yes, exactly, that's the point.
If they're treated like any other publisher, probably having that many workers manually vetting everything won't be viable. Which is fine, because it shouldn't be. It confuses me how every time we have one of those posts about how horrible that work is for the people doing it, most posters kinda throw up their hands and go "well whatcha gonna do" when the obvious answer is... simply not do it? If your business requires that, the easy answer to how to not cause that harm is to not have the business.
If the survival of their company depended on it, they would absolutely figure out a way to pre-moderate content, and almost certainly that would include significantly more low-paid moderators.
They had something like $18 billion in profit last year; you don't think they could afford to hire an army of moderators?
I reckon they can afford 150,000-200,000 more moderators, moderator-managers, workers on and managers of tools for same, etc. before that profit margin starts getting mighty thin ($18 billion spread over 200,000 extra heads is roughly $90,000 per head, fully loaded). Is that enough? I don't know, maybe it is. How long before profit hits zero would investing in Facebook start to be considered a poor use of capital, since that's the actual tipping point? Also not sure.
Then maybe you should propose and support a law that addresses that very point, instead of taking a hammer to it and completely breaking every Internet forum out there, especially those that won't have the money and size to deal with this.
Rather, it seems there may be room for a law that carves out an exception for those, instead of also including in the exception a bunch of things that clearly are publishing.
Meanwhile, what about mailing lists? "But those aren't (necessarily) public! What if we want to put our proceedings on a web site?" Well, luckily there's precedent for that, and the word you're looking for, rather than "put", is "publish". You can absolutely do that, but, naturally, it's publishing, just as if you were to publish a private physical mailing list.
Either physical publishing needs to get the same protections certain kinds of web publishing have now, or we need to rethink what we've done with web publishing. Unless there's reason to believe we don't need those laws around traditional publishing anymore, the latter seems like the better idea to me.
Since Facebook and Google shape the information that is seen using a proprietary algorithm, they have become publishers. Perhaps if their algorithms were open and available, they might have an argument in their defense.
Until then, it is entirely possible they are shaping a narrative based on whatever model they want.
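As a purely illustrative sketch (invented weights and field names, nothing taken from either company's real systems), this is all it takes for an opaque scoring model to tilt what surfaces in a feed:

```typescript
// Toy feed ranking: the weights are the editorial decision, and they are
// invisible to users. None of this reflects any real platform's model.
interface Post {
  id: string;
  likes: number;
  comments: number;
  ageHours: number;
  topicBoost: number; // a quiet preference for or against certain topics
}

const weights = { likes: 1.0, comments: 2.0, recency: 5.0, topic: 10.0 };

function score(p: Post): number {
  return (
    weights.likes * Math.log1p(p.likes) +
    weights.comments * Math.log1p(p.comments) +
    weights.recency / (1 + p.ageHours) +
    weights.topic * p.topicBoost
  );
}

function rank(posts: Post[]): Post[] {
  return [...posts].sort((a, b) => score(b) - score(a));
}
```

Nudging the topic weight up or down reshuffles the entire feed without any single post being removed, which is why openness of the model matters to the argument above.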
I don't buy the argument made by Barr that the scale of the platform reaches a point that it therefore requires regulation. This seems to be a simple money grab where large tech companies need to tithe to lawmakers.
They want to shape the content and do whatever curation or editorializing they like via human or algorithmic means, yet be seen as an open forum of user-generated content. A have-it-and-eat-it-too scenario.
Not to mention adding paid content that blends in so well it is indistinguishable from user submissions. This further complicates the intentions of the platform.
Yes, they most definitely want to have their cake and eat it too, but there are lots of good arguments to make for that, arguments that have nothing to do with sustaining the business model of multi-billion-dollar Internet companies.
If these companies are treated as a "public square" then the first amendment ought to apply. It's disappointing to see enlightened ideas like free speech being taken apart by these large corporations to push what seems to be a political agenda.
Recent example - a female Nascar driver shares a selfie with the President of the United States and Twitter's algorithms flag it as sensitive content. When algorithms make mistakes that lead to race based discrimination, it's treated extremely seriously. When this sort of thing happens it seems like everyone shakes their head and chuckles "oh those silly algorithms". Outcomes that marginalize folks based on political views are dangerous for your country. The shoe will be on the other foot someday.
I'm making the argument that the authority to moderate discussions is being misused and you are replying as if I am suggesting removal of this authority.
Employees produce content/products/sales/projects for the Company.
Social media users create the content that gives value to social networks; thus social media users are, in a way, employees of the Company. The Company has the requirement of limiting, and being liable for, the content that exists on its platform.