> Private companies can't (mechanically, not legally) determine who has a moral right to speak.
Of course not, no one is claiming they are the ultimate arbiters of morality.
But they do have the right to decide who can use their platform (as long as they don’t discriminate against protected groups). The broader public can then judge them positively or negatively for these decisions.
The thing is, are these companies platforms like a phone company, or are they publishers? Social media companies have argued in the past that they can't be held liable for things posted to their platforms, and have tried to position themselves as neutral platforms. When they start to become the arbiters of what is and is not to be posted, they are no longer neutral platforms like the phone company. I do not recall a time when a phone company cut someone's service because it found their conversation distasteful or controversial.
That said, I don't really know what these users were actually banned for saying. It could have been pretty bad, and although I might not agree with what they said, I hope that people are free to express their thoughts and ideas even when I find them personally offensive.
>Social media companies have argued that they cant be held liable for things posted to their platforms in the past and have tried to position themselves at neutral platforms. When they start to become the arbitrators of what is and is not to be posted they are no longer neutral platforms like the phone company.
Social media companies don't become more liable just because they moderate. They all already moderate. There's no sudden legal line crossed when moderation goes beyond spam and extends to things like bigotry.
I think anyone amplifying messages at large scale in a one-to-many manner, between people who aren't equally engaged in a conversation together, should be considered to start accruing responsibility for the content they help amplify, in a way that phone companies largely don't. I think social media companies have largely shirked that responsibility by framing it as a free speech issue and letting anything go.
It is a gray area, and social media platforms sit somewhere in between being a common carrier and being a publisher. You're right that there is no hard legal line, but the more they decide what is and is not allowed, the farther they move from being a common carrier.
> Social media companies don't become more liable just because they moderate.
It appears that those links describe the conditions before passage of the Communications Decency Act of 1996. That was all superseded by Section 230 of the CDA.
My limited understanding is that Section 230 of the Communications Decency Act (which is apparently one of the most important laws for this topic), passed in 1996, provides very broad protections to web platforms:
1) They can't be held liable for user-generated content, e.g. Facebook can't be sued for a defamatory statement that I make in a post on their platform.
A newspaper that authors and publishes an article making a similar defamatory statement could be held liable. I believe that Facebook could be held liable if the company itself authored and published the defamatory statement, instead of merely distributing my defamatory statement.
2) They can moderate user-generated content visible on their platform as they see fit, without trying to be "neutral" and without losing their liability protections (item 1 above).
Apparently, before this law, internet companies were worried about being held liable for what users said if they did any moderation (and some companies were sued for this).
This longer video (33 mins) from Legal Eagle is nice as well: https://www.youtube.com/watch?v=eUWIi-Ppe5k. It's been a few weeks since I watched it so hopefully I didn't miss too many important details.
Section 230 protection should not exist. When it was enacted, nothing like Facebook, YouTube, Twitter, etc. existed, and InfoSeek and AltaVista were the leading search engines...