So long as their speech remains legal, social media sites should not be in the business of censoring elected officials. That is the job of voters and Congress.
It’s too easy to walk down a slippery slope where suddenly these sites have to regulate in accordance with whatever is the popular opinion of the day.
>social media sites should not be in the business of censoring elected officials. That is the job of voters and Congress.
What about Paul Nehlen, a political candidate who (before he was permabanned) frequently railed on Twitter against the "Jewish media" and mused about the "JQ", which stands for Jewish Question? Who called Jewish critics "shekels-for-hire" and used the phrase "The goyim know!" as an insinuation of their involvement in a supposed conspiracy? Who said he wouldn't mind "leading a million Robert Bowers [the Pittsburgh synagogue shooter] to the promised land"?
Is that something Twitter is within their rights to ban for, in your view? If so, would that change if he were elected?
Social media is not a public square where people can plant their soapbox; it is a set of privately owned servers where people can persist small amounts of data that comply with a set of rules. You break the rules, you get kicked off the private property.
> Social media is not a public square where people can plant their soapbox
Except there isn't really an alternative to Twitter - they have monopolised that "short catchy text stream" market. Other services, like Gab, exist - but when services like Cloudflare readily pull support for less mainstream alternatives, it's hard to say that if you don't like Twitter you can go somewhere else.
For better or worse, Facebook and Twitter consumed blogs, and there are no real alternatives.
That's like saying television networks are required to give you a show for your free speech. Social networks are private companies built for profit. For better or worse they should be able to censor anything that's not good for their business. Hell, you can't even post a nipple on Facebook.
Facebook and Twitter are ostensibly platforms. The analogy would be if a TV network rented out studios and cameras but prevented you from hiring them, even though you had the money, because they didn't like you, or what you were going to film, or had filmed, or might film.
If they were a TV channel or programme then they'd have no bother refusing you. And that's the nub, are they a platform or are they a publisher? If they're the former and you allow them to discriminate then what's your argument against other businesses discriminating? Can't see that ending well.
If, however, we accept that they're publishers - which in my view they are, as they edit what can be seen in several ways - then by all means let them discriminate but they must follow the rules of other publishing media companies.
TV channels refuse advertising based on politics or community standards. That's pretty similar - provide your own content, and rent airtime, but get rejected because of what you say.
That's true, but let's take it a step further - if the TV station published a list of rules and your content did not cross the line but you were denied, would that be okay?
Would retroactive application of rules be fine, or new rules created and then retroactively applied?
Would it be okay if they refused simply because of who you were? Or what you did elsewhere? These kind of questions seem to be more applicable if we're comparing to Twitter et al.
I wasn't ready to give an opinion! I was just stating the circumstances :)
But, if I'm going to answer I think legal issues are grey and it can be tough to accept as a programmer dealing in binary. There aren't many absolute rules and sometimes a rule works well at one scale but not another.
Part of the reason we should avoid monopolies is so people and governments can leave companies alone and those who disagree with their rules can go elsewhere because they have options. But this is in complete opposition to products and platforms that flourish because of network effects. By its very nature, Twitter wants to be the only game in town. I think that changes things and opens them up to scrutiny because we have fewer realistic options.
So as a non-answer, if we had many viable social networks or TV channels to choose from I'd be happy with them arbitrarily deciding who gets to play in their playground for any or no reason (as long as it's only in that one place, and there's no secret cabal or cartel). Protected reasons and classes aside. Keeping in mind, many times rules aren't rules, they're flagged as guidelines and subject to change for subjective reasons.
But I don't think that really reflects what is going on in modern social media networks. They are similar but distinct, each is used for different types of speech, and the key players dominate their niche. Users can choose to some degree what they prefer to receive. Networks also choose for the users via their feed algorithms.
So they are already taking on the role of arbiter and have been for some time. That sounds more like a publisher to me, as you say. They don't change the content of each post - but they do change the collection of content you get presented, just like a magazine editor rejecting articles and putting together this month's issue. Except every single article is tagged opinion.
On top of that I think we are just running up against another "too big to fail" situation. They're our only platform to communicate this way, so we don't like the idea of them suppressing speech, and speech can be inflammatory, inciteful, or libelous. That's not really incompatible with free speech in other areas, and I think once a platform reaches a certain size we should treat it as a public arena. This would be consistent with not allowing them to remove posts for arbitrary reasons, but allow them to do it if they think it falls under one of those categories.
Might be missing your point, but a studio for rent does deny paying customers if it wants. Conference facilities might deny particular events. TV networks deny ads that are otherwise legal.
But I don't think it's needed in the argument. As you say, the interesting distinction is around platform versus publisher and any legal protections you gain from posing as the former.
There's only so far a metaphor can be pushed, unfortunately (or fortunately?), and I completely agree, the real point is the distinction and whether it applies.
The rule is section 230 of the Communications Decency Act. This via the EFF[1]:
> Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do.
In other words, without that protection they'd be liable for what is published on their site, whether by them or by a third party.
Platform is not a legal term, it's a colloquial distinction from publisher, which in this case refers to either the "interactive computer service" (e.g. Twitter) or the third party (e.g. the tweeter). Interactive computer service isn't as catchy.
> Can you give an example of such a rule and what it would look like if it were applied to twitter?
My first guess would be a slew of defamation cases, probably by touchy celebrities, just as with other publishers like tabloid newspapers.
> My first guess would be a slew of defamation cases, probably by touchy celebrities, just as with other publishers like tabloid newspapers.
Ok but clearly that is an impossible standard for a website where users are able to post content. In effect, it seems you're suggesting Twitter must choose between moderating their platform and being shut down. Would you say that fairly describes your position or am I misinterpreting you?
> clearly that is an impossible standard for a website where users are able to post content
I'm able to post comments on many websites that are counted as publishers - try any online newspaper. I was able to post on countless forums long before Twitter appeared and still can.
> it seems you're suggesting Twitter must choose between moderating their platform and being shut down
That is technically known as a false dilemma.
- They can remove shadowbanning
- They can remove algorithmic manipulation, of all and any kinds not instituted by the user
- They can adhere to the rules they've set up, no retroactive action, no action for things off of the "platform", no new rules that just happen to affect their political opponents
- They can refrain from reframing content
- They can decide to become a smaller company (maybe even a more profitable one at last; focus can help, niches help, size isn't everything)
- They can hire more moderators
- They can make the algorithmic tools they use available to users; if you don't want to read "deplorable" content then why not give you the power to make that choice, and others the power to continue to read it?
- They can make the rules clearer, tighter, more accessible.
My Lord, there are so many choices available and they're not all mutually exclusive, and this isn't even an exhaustive list.
> I'm able to post comments on many websites that are counted as publishers - try any online newspaper. I was able to post on countless forums long before Twitter appeared and still can.
Right... Because they have section 230 protection. Forums and newspaper websites are not "counted as publishers" with respect to comments you make on the site exactly the same way twitter isn't "counted as a publisher" for your tweets.
> They can remove shadowbanning - They can remove algorithmic manipulation, of all and any kinds not instituted by the user - They can adhere to the rules they've set up, no retroactive action...
Why is it a false dilemma? You've offered up an arbitrary list of product and business suggestions based on your own opinions about how twitter should be run, but nothing you've said seems to contradict the conclusion that twitter should lose the legal protection that allows them to keep the site open if they moderate the site.
Let's use a clear example. Do you think twitter should be shut down because of decisions like hiding Trump's tweet?
> Right... Because they have section 230 protection.
That's a good point. You're still begging the question with regards to an "impossible standard" so… I'm going to shrug until you come up with your own evidence for that.
> Why is it a false dilemma?
You (repeatedly) give two options when there are more. "We must preserve the status quo or we die!" is not a compelling argument to anyone with an ounce of imagination.
> arbitrary list
No, they're not "arbitrary", and I'm beginning to lose my patience with you.
> based on your own opinions
This is my account, I write my own opinions using it.
> that allows them to keep the site open
Begging the question. The site could be kept open by following my "arbitrary list" because they would then retain protection even under a narrower interpretation of the law. Hence, not arbitrary.
> Let's use a clear example. Do you think twitter should be shut down because of decisions like hiding Trump's tweet?
I don't think Twitter should be shut down or would be shut down, regardless of whether they retained protection, and loaded questions that are entirely facile are where I draw the line.
> You're still begging the question with regards to an "impossible standard" so… I'm going to shrug until you come up with your own evidence for that.
If twitter becomes legally responsible for anything posted by the millions of users that publish content to the site then it's obviously impossible for them to keep the site running, the logic is very clear.
> You (repeatedly) give two options when there are more
No, there are only two, either the site has section 230 protection or it doesn't, there is no in-between state.
> We must preserve the status quo or we die!
An impressive strawman for someone with such an obsession for formal fallacy labels.
> The site could be kept open by following my "arbitrary list" because they would then retain protection even under a narrower interpretation of the law. Hence, not arbitrary.
Your list of business suggestions is just ideas you made up; they have no legal meaning, hence arbitrary. Business decisions like "shadowbanning", "retroactive action", "reframing content" and even explicit partisan bias are 100% legal, and Twitter is within their rights to operate their business in such a fashion.
> I don't think Twitter should be shut down or would be shut down
Yet in your own words:
> The site could be kept open by following my "arbitrary list"
So in other words, the site shouldn't be kept open if they don't follow your legally meaningless suggestions.
> and loaded questions that are entirely facile are where I draw the line.
lol whatever, if you're so intellectually dishonest that you won't admit to the implied conclusions of your own argument then I'm wasting my time anyway.
To be clear, what most people pushing this position want is for Twitter to be so afraid of ruinous lawsuits that they are afraid to ban people whom the rest of us find deplorable.
No, but what would be wrong with a company being afraid of ruinous lawsuits if those lawsuits had merit?
You or I should be afraid of ruinous lawsuits because even one, one without any merit, can cost us a lot. We do not have the resources of Twitter, we do not have a permanent legal staff, we do not have a pit of money, we do not have wealthy backers, we do not have the ear of powerful people. They can fight a suit as far as it can go and actually create precedent in higher courts that you or I could never afford to reach. They can even face down a government lawsuit.
If lawsuits were spurious they'd soon put a stop to them.
If they weren't immune from lawsuits for things as simple as libel by their users, there would be a thousand meritorious lawsuits per hour. It would be impossible to run a site with user-generated content. This is the absurd conclusion that nobody is willing to rule for. Once again, since you haven't been very illuminating over several comments: what is the desired end result?
> Once again, since you haven't been very illuminating over several comments
If you're unable to maintain a respectful conversation then perhaps Twitter is a better place for you to spend your time.
> If they weren't immune from lawsuits for things as simple as libel by their users, there would be a thousand meritorious lawsuits per hour
They could and would be immune if they did not editorialise. That's the whole point.
> What is the desired end result.
I don't have a desired end result because I'm not planning some utopian outcome. Let people be free to express themselves unencumbered and without meddling for overt political outcomes or some form of misplaced paternalism, that's it.
If that's not illuminating enough for you then you have my permission to reply to someone else.
>To be clear, what most people pushing this position want is for Twitter to be so afraid of ruinous lawsuits that they are afraid to ban people whom the rest of us find deplorable.
and this absolutely IS your position, albeit you would say for more nuanced moral reasons.
From my perspective it's a win-win. On the one hand, I get to see deplorables banned from Twitter; on the other, this ought to be great motivation for decentralized tools, which are the only thing that can possibly be actually censorship-proof.
I fully believe that your Facebook and twitter ought to be running on a $100 box on your desk where nobody can censor it with a click. Trying to fix after the fact the situation of having your ability to communicate politically depend on the courtesy of another is a losing proposition.
Nobody is going to apply the CDA in the way that the president wants, especially not before November, and the will to protect conservative voices will disintegrate once they have lost the presidency and the Senate as well.
If you care about truly open communication donate money to people making decentralized tools instead of waiting for the injustice department to do anything useful.
I agree with you in spirit, but the Fairness Doctrine in the United States was a variation of this, where the FCC required that approximately equal time and attention be paid to all sides of a major issue. Ronald Reagan removed this in the 80s, and helped popularize conservative talk radio. The justification was that the limited bandwidth of television made it in the public interest to regulate its influence. The internet is much higher bandwidth, so the same argument doesn't hold water.
EDIT: What qualified as "equal" and "all sides of a major issue" were up to the discretion of the FCC, so enforcement was fairly ad hoc
The justification for the Fairness Doctrine was that the airwaves were a public resource and that anyone using that resource was required to use it responsibly and in the interest of the public.
Right...specifically, a limited public resource. The simple version goes like:
Broadcast spectrum is a limited resource. Said resource is owned by the public but since it is limited, you need a license to make use of it. In order to retain a license, you need to operate for the public's interest, convenience, and necessity.
It's also the basic reasoning behind such things as content restrictions and the system by which you can complain about something that is broadcast. If enough people complain about something you broadcast, it can be argued that you are not serving the public's interest and so be fined or lose your license to broadcast.
It's why cable programming is more restricted by a network's desire to avoid pushback from advertisers or cable carriers due to complaints (rather than anything under the jurisdiction of the FCC). They aren't required to serve the public interest in the same way, but market forces apply some of the same pressures.
In this case, I see social media platforms as being more like cable networks than broadcast networks. You won't see specific government content restrictions on most (legal) content hosted on these services because they don't require a license. However, they still face backlash if they piss off enough customers and/or advertisers.
A great example of why major companies should not regulate people's lives is WeChat in China. Some ethnic minorities are banned from using it, which is almost the same as not having a credit card in the US. You can kind of get around without it, but not really.
Many tech companies are simply utilities now. They should not be allowed to refuse service in the same way as electric companies are not allowed to refuse you service - except in extreme circumstances.
That was indeed the status quo on broadcast television for many years:
> The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses to both present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced. The FCC eliminated the policy in 1987 and removed the rule that implemented the policy from the Federal Register in August 2011.
In fairness, this was more of a spoof that was canceled due to customer dissatisfaction and a sense that it was in poor taste. It was also on a satellite network, so it was subject neither to public broadcasting guidelines nor to other OTA broadcast restrictions.
I could totally imagine a Monty Python sketch depicting Hitler in an absurd/comedic situation being rebroadcast on US public TV without issue. I think the previous comment was referring to a hypothetical pro-Nazi program.
> That's like saying television networks are required to give you a show for your free speech.
We do that, and have since 1976. Any cable provider in a city with more than 3500 residents is required to put up 4 public access channels and studios and allow the public to use them and broadcast on them.
> Social networks are private companies built for profit.
To what extent do current laws provide exclusions for them and their business model? Do we deserve something in return for that?
> Hell, you can't even post a nipple on Facebook.
Precedent exists here as well. Free public speech does not include broadcasting of materials of a prurient interest at any time. This is separate from the "seven dirty words", which are generally excluded outside of the 'watershed', typically 10pm to 6am in most cities.
I'm fairly libertarian, so I'm uncomfortable with the question, but there has to be some objective method of determining the extent to which "internet domains" are to be considered "leased from the public" the way we do broadcast licenses.
We've already seen certain websites with particularly vile, but legally protected, points of view have their domains taken away from them. If speech can be so easily marginalized, doesn't the public have some interest in claiming domain over some of these seemingly "private" systems?
Cable TV is required. The KKK ran a show on a NYC public access channel back in the 90’s.
Of course I could go cut the data lines running through the right-of-way on my property. I don’t agree with how these big data companies are censoring, so maybe I will censor them.
Why does there have to be an alternative? They can give speeches, send letters and emails, go on television, post to other social media platforms, yell in a park, release an album, etc. You have the right to freedom of speech, not the right to freedom of mediums of speech.
If this is your opinion, then why not go even further?
If you don't support lowering taxes for Walmart, along with all the other grocery stores, then why shouldn't they be allowed to permanently ban you from every place to buy food, forever?
I’m not sure I follow this line of thought. If all the grocery stores did this, then I agree society would certainly take action: build better, more open grocery stores. Surely the same principle applies here. A business like Twitter can choose to block a group of ideologies. If there is enough want on the part of people to propagate those ideologies, they’ll build a better forum, and if those ideas are successful, that forum will be too. Competitive, capitalistic behavior handles this.
One of these is clearly not like the other, in multiple ways. You're at the very least misrepresenting why these people are being banned.
Tax policy is not anywhere close to the tweets at issue, which often amount to terroristic threatening. If you started verbally abusing other customers in a grocery store, perhaps even implying you'll murder them, then yes, you'll be banned.
To the rest of your comment, are you seriously comparing tweets to food? You have a fundamental right to food, not to have your opinions transmitted by others.
> You have a fundamental right to food, not to have your opinions transmitted by others
At least in the US, it is the opposite.
There isn't a constitutional amendment protecting your rights regarding food. But there is a constitutional right related to your rights regarding speech.
Many people think that these rights regarding speech are pretty darn important. Possibly even the most important rights that a person can have.
So I would absolutely say that they are comparable, and that people's rights regarding transmitting speech are possibly even more important than any other rights out there.
It's a known issue that unrestricted commerce or indeed unrestricted ANYTHING is unacceptable, because it's trivial to imagine situations wherein an actor or actors can misuse the situation.
We cannot and shouldn't contemplate a solution to every feasible problem, because time is finite, and a solution that in theory produces the best outcome for problems a..z, where h..z are hypothetical and a..g are actual, may in practice be a worse solution for a..g.
The analogy is ridiculous on its face. It will never become actual. In such a situation we would trivially resolve it by passing a law that food stores can't blacklist groups of people while retaining the right to blacklist individuals for actual misbehavior. This would be somewhat limiting on the freedom of small shopkeepers to, say, throw out nazis, but it would break the back of the "big food for lower taxes" coalition. Since no such coalition exists, this solution is, as described above, worse for the set of problems we actually have, in order to solve a problem that is only hypothetical.
It would be more productive to discuss the ACTUAL issue by addressing it directly instead of trying to reason by analogy. It's not like it's a topic people on this site have trouble understanding. Going off into the weeds like this just wastes time.
First off, how would Walmart and every single grocery store know my voting record? Beyond that though, a monopolistic collusion to manipulate voting would likely be investigated as anti-trust actions.
Finally, as far as I'm concerned, they are welcome to ban me for my voting pattern. Within a couple weeks I would open up a new grocery store, marketed as not caring what your political beliefs are. The thing is, in a free market (or pseudo-free market like we have) you only succeed with buy in. Once you alienate your customers, you fail.
> Walmart and every single grocery store know my voting record
Well, they could just look up your social media accounts to see if you've made any comments they don't like.
Sure, that wouldn't catch everyone, but it would certainly find some, and cause a chilling effect.
> anti-trust actions.
Ahh, no it wouldn't though! Antitrust is about competition. It would not be a violation to collude for non-price-related, non-competition reasons.
> I would open up a new grocery store
Even if you happen to be a multi-millionaire capable of doing so, most people aren't capable of doing this. Many people might also starve before you get around to doing it.
Also, there are many towns where such competition is not really possible, because there are only a few stores in the area.
> you fail.
You are misunderstanding how long it can take for the market to react. Market power is real, and established entities cannot be overthrown overnight.
I believe they've done it twice now: once for the Daily Stormer [0], once for 8chan [1]. I'm fairly impressed that they host criticism of those decisions on their own website [2].
You can still make a blog! If everyone is technically able to access your content but no one wants to (for reasons other than fearing govt. intervention), that is a very different problem than a freedom of speech problem.
Metcalfe's Law [0] holds that the value of a network service scales with the square of its number of nodes. This tilts the table in favor of winner-take-all ecosystems; moreover, in terms of both political soft power and cultural influence, this often creates a pressure for users to participate, not quite under duress, but let's say unenthusiastically, lest they be left out of the conversation.
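For concreteness, a minimal sketch of that quadratic growth (my own illustration, not something from the comment above):
    # Metcalfe's Law values a network by its possible pairwise connections,
    # n * (n - 1) / 2, which grows with the square of the node count n.
    def metcalfe_value(n: int) -> int:
        return n * (n - 1) // 2
    for n in (10, 100, 1000):
        print(n, metcalfe_value(n))
    # 10 -> 45, 100 -> 4950, 1000 -> 499500: a network 100x larger is roughly
    # 10,000x more "valuable" under this model - the winner-take-all pressure.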
There isn't an alternative to Twitter that will get you as many views as quickly. Are your rights decreased if fewer people hear you because they aren't interested enough in your message to come to www.mydomain.com?
> but when services like Cloudflare readily pull support for less mainstream alternatives, its hard to say that if you don't like Twitter you can go somewhere else
But Cloudflare isn't a monopoly.
Botnet operators have no trouble finding crime-friendly DDoS protected hosting, the only reason the Nazis struggle with this is lack of experience.
It's shortsighted how beholden some people think the world is to Twitter and other social media platforms that happen to be in vogue at the moment.
Changing fads tend to drift the masses to a new platform for communication every decade or two. Personally I barely use Twitter or Facebook. I much prefer forums like this one or StackOverflow, which do a better job of encouraging quality contributions.
Trump or the White House could conceivably come out with their own app as an alternative means to directly reach followers and avoid censorship. If the people who want to hear him aren't engaged enough to tap a few buttons to install it, I have trouble mustering much sympathy.
I'm an avid supporter of free speech, but I think people who place their faith in a small number of private companies to provide that are naive.
Ideally all the speech, including every kind of speech you find disgusting, should be left as is, without censorship.
As long as it's not anonymous, you can see who is who (even under screen names), and value other speech and actions of these people accordingly. For instance, it would be easy to let anyone know that Paul Nehlen is a racist, by pointing at these posts.
As a service owner, you could collapse and mark certain posts so that content most people find offensive won't jump out at them. Twitter does this.
Unfortunately, this is not always possible. First, there are legal limitations (hate speech, libel, etc). Second, the public outrage: people actively want to destroy things they find offensive and wrong, and put pressure on the service's owners. This includes its employees and shareholders.
Most people only tolerate free speech as long as it fits the confines of the Overton window. You have to have balls of steel and really good sources of money to explicitly and publicly tolerate anything beyond. Common carrier protections help here, too.
Well, there's a conversation about what "free speech" means that's very different from the conversation about what the First Amendment is. When the Constitution was ratified, they were somewhat synonymous, in that there was a public square that everyone walked past and you could stand there and you certainly could spout your "Jewish Question" logorrhea without government suppression.
Now, the "commons" is controlled largely by private corporations, so while the government couldn't legally say "You cannot utter this speech in Times Square, go utter it in an abandoned piece of farmland in Kansas," private companies can boot you from their platform until you're basically required to set up the entire infrastructure of an ISP to have any access to the Internet, and zero access to the public conversation.
> What about Paul Nehlen, a political candidate...
If that despicable moron somehow got elected, I think there would be a public interest in keeping his statements available to the public. If for no other reason than to maintain the public record of him making those statements as an elected official.
If President Trump told a journalist “when the looting starts, the shooting starts” and the journalist reported it, should social media censor that journalist? Of course not. Well in this case we don’t need the journalist because we can just see it ourselves. Social media can be, among many other things, a window into thoughts that public figures would not otherwise share without a significant filter in place. If elected officials actually want to live their private mental lives in that particular fishbowl, I sure as hell want to be able to keep an eye on them.
> Social media is a not a public square where people can plant their soapbox
Under California law there is an affirmative right to free speech even inside private shopping malls. So if your social media company is subject to California civil rights laws, the “private property” argument doesn’t hold up.
The case in question held that parties could collect signatures for their political activity on mall property, due to a provision in the CA state constitution, and more generally that a state constitution could offer more protections than federal law or the US constitution provides.
It found that it didn't represent a taking because passing out leaflets didn't substantially harm the mall.
It found that the political speech was unlikely to be confused with the mall's own, and that the mall could publicly disclaim any such association; thus it wasn't compelled speech.
We can logically assume both of the above points are congruent with Twitter, so we can conclude that if CA or any other state provides a similar protection it ought to be applicable. What it notably did NOT say is that such speech on someone else's property was generally federally protected, nor, if I understand correctly, did it disclaim that.
The next question is whether CA law makes banning people from your social platform an illegal infringement of citizens' rights. So let's follow the same case back to CA.
"The Agricultural Labor Relations Board opinion further observes that the power to regulate property is not static; rather it is capable of expansion to meet new conditions of modern life. Property rights must be "'redefined in response to a swelling demand that ownership be [23 Cal. 3d 907] responsible and responsive to the needs of the social whole. Property rights cannot be used as a shibboleth to cloak conduct which adversely affects the health, the safety, the morals, or the welfare of others.'"
In context it is talking about the worthiness of restricting the property rights of owners by forcing them to allow speech on their property.
In fact it calls out the fact that the increase in importance of places like malls makes the cause of allowing free speech there more worthy which is in line with the rise of importance of social media.
Seems pretty clear that there isn't much daylight between allowing people to petition people for an offensive cause in a mall vs on twitter at least in California but while I reluctantly agree with you I have a few concerns.
I'm concerned that in the short term our Orange dictator will end democracy as we know it and would prefer the matter of free speech to be litigated after we have averted the immediate threat and had the opportunity to decrease the power of the executive to decrease the chance of a similar situation. Given the time required for any court decision this seems inevitable.
The court it seems would have found it an infringement on the property owners rights if the activity made say commercial activity at the property impossible. What kind of activity on a platform would be regarded as a taking? Electronic properties are a lot different from physical ones in this regard.
- Is speech from a bot protected? Is it still protected if it pretends to be a bunch of fake people?
- Is using someone else's platform to distribute ads protected? Does it matter if they are political ads?
- Is slander protected speech? Do we get to remove slander that is obvious to us, or does it require a court to decide?
- Is direct incitement to illegal activity allowed?
- What about indirect incitement? Can I run ads for the purple haters club which aren't themselves directly offensive but promote a group that everyone knows is all about promoting the murder of purple people?
It seems like disclaimers on posts are the safest legal choice as they represent the owners own free speech rights. From the Supreme Court opinion
"appellants are free to publicly dissociate themselves from the views of the speakers"
It seems logical that other limits on your free speech on others' property will be established in the future to protect the integrity of the platform. It would be better if the limits in question could meet one standard under law.
Better than the alternative. Consider the r/nyc subreddit, where criticism of anything Jewish-related, no matter how minor, is usually instantly deleted. Whereas posts referring to black people as thugs and animals are allowed to stand. E.g. Google "site:reddit.com/r/nyc thugs"
Yeah criticism of Hasidic jews is usually allowed to stand, because (as a generalization) non-Hasidic jews don't like Hasidic jews and see everything they do as an embarrassment. Maybe Unorthodox will change the perception somewhat, since it humanizes them even though it portrays them negatively.
Why not let a fool be a fool? Why hide what someone is? All that does is create mystique about him.
"Social media is a not a public square where people can plant their soapbox, it is a set of privately owned servers where people can persist small amounts of data that comply with a set of rules. "
Certainly that is how it started.
But the de facto power of social media's censorship and dominance in discourse will eventually bring government to regulate social media so as to preserve free speech.
During the rise of Hitler somebody should have done something. I think putting a disclaimer on someone's speech is a pretty tame and justifiable answer to doing something.
I'm not saying the decision of which political candidates to censor should be cut and dry. Just that the decision of which elected officials to censor should be.
He wasn't elected because he had disagreeable opinions. If someone you disagree with is elected, then clearly their opinions are mainstream, if not agreeable.
> social media sites should not be in the business of censoring elected officials. That is the job of voters and Congress
It's explicitly not the job of Congress. That's the point of the First Amendment.
Regulating free speech is up to private enterprise, using private enforcement methods like de-platforming (old school: you can't spew your drivel inside my bar) and shaming.
Just because they legally can does not mean they should censor the official representatives of the voters. The bigger the audience, the more important it gets to show all points of view. The sun is the best disinfectant. The wrong ideas should be publicly challenged, otherwise they may silently spread unchecked and surface when it is too late to stop them. You do not need a majority to make quick and radical social changes.
That has proven not to be the case. Bringing extremist views into the mainstream by way of social media (and politics) has only strengthened their influence.
This was the real eye opener for me over the past 5 or 10 years. I used to be firmly in the "no censorship" camp, but it seems an alarming number of Americans (I'm American and can't speak for any other nation) are not capable of using critical thinking when it comes to political issues they've already made up their minds about. Close friends and family included.
I don't know the answer to this problem, but it seems our education system needs dramatic improvement.
So does not cracking down on hate speech and incitement to violence.
Imagine tut-tutting if the Rwandan government had cracked down on radio broadcasts calling Tutsi people cockroaches and encouraging Hutus to chop them down.
The Rwandan government, or at least individuals involved with the Rwandan government at the highest levels, were making the broadcasts.
If anything they would have cracked down on opposing broadcasts by claiming that, since Tutsis are the historically privileged beneficiaries of colonialism who were collectively responsible for the violent oppression of Hutus, any pro-Tutsi or anti-anti-Tutsi sentiment was unconscionable hate speech while any pro-Hutu or anti-Tutsi bigotry was not technically racism. The power to ban “hate speech” is the power to define “hate speech” and hateful people will use against you the very same weapons you propose to fight them with.
Not really. This is widely believed but attempts to rigorously study that end up finding no evidence for it.
Interesting that you picked Rwanda. The go-to example for the dangers of free speech is normally the Nazis. For instance, I'm thinking of a study that attempted to correlate growth in support for Hitler with the availability of his speeches on radio or via rallies. It found, IIRC, no correlation at all.
In fact Hitler was regularly censored, deplatformed and so on yet it didn't stop him, only perhaps slow him down a bit. On the other hand he spent vast efforts on suppressing, censoring, vote stuffing and de-platforming in various nasty ways anyone who spoke out against him - all managed entirely privately via his various organisations, whilst the weak Weimar Republic were unable to stop him.
Dictators are invariably big fans of suppressing free speech, because they know censorship works in favour of whoever is in power. The reason democracy and free speech are so tightly connected is because in a democracy the people in power aren't meant to be able to do anything to stay in power beyond winning the approval of their citizens.
That's not what the poster said. You think you heard "censorship" because you want to hear "censorship" because that's what you want to argue against. But please read their comment again. They literally said they don't know the answer.
Also what that poster did say--and I agree--is that people lack critical thinking skills and have not developed immunity to flat-out bullshit. There's a huge education pipeline problem and our society is at risk because its normal resistance would be to have voters that can think for themselves. Things like flat earth and moon landing conspiracy theories don't need to be "not allowed", they just wouldn't take hold if people could think through basic logic.
I think it’s a little more complicated than that. It’s not inherently about the rise of extremist views, it’s more about how the establishment grew complacent with the 20th century media environment only to be caught wrongfooted when the 21st century media environment exposed them as being full of shit. Once everyone realizes that the mainstream media and the political establishment are full of shit, where do we have to go from there except to the extremes?
If these views are so wrong, shouldn't it be evident to people? That some mush head labeled something "extreme" shouldn't be enough to ban it. What you are really saying is that you think you would be the best censor.
Do you see how you're loading and manipulating your words to serve your biased purposes? If an "extremist" view is popular with the "mainstream", it's definitionally not an extremist view. And has it occurred to you that a Republican might consider AOC or Bernie Sanders or Ruth Bader Ginsberg a "dangerous extremist" while a Democrat might consider Trump or Bill Barr or Mitch McConnell an extremist? Is it really that hard to imagine how other people not aligned with you politically might think?
I think the last few years have proven what you've said really really wrong. Bigger audiences for toxic speech (on both the left and the right) tend to amplify the ideas contained in that speech, and when those ideas are publicly challenged, which they always are, they become politicized and people become entrenched in a giant shouting match that doesn't accomplish anything. People with big audiences on the internet, especially elected officials, should be held to a higher standard than the average user to facilitate better conversations about important topics, not lower standards just because what they're saying is news worthy.
> There's literally no way you can read the 1st amendment and think it should apply to non government entities. It specifically mentions congress only.
It already applies to non government entities. Telephone companies can't deny you service because they don't like you or what you say. Your power and gas company can't deny you service because they don't like you or what you say.
Trump can't even block people on twitter ( a private company ).
The protections the telephone companies have are based on their desire to retain common carrier status, and not be considered curators or providers of the content they carry.
It's not that they can't censor; it's that if they do, their legal status changes in a way they do not wish it to.
These common carriers are private companies, not government-owned ones. They aren't owned by the government; they are owned by private shareholders. AT&T, Verizon, T-Mobile, etc. are private companies, i.e. non-government entities.
> Twitter and the vast majority of other companies are not.
I never claimed twitter and the vast majority of companies are. I only responded to the assertion that all "non government entities" can censor. I was providing proof that some non government entities cannot censor.
That's not how it works. They can't censor. It's why they don't. Can you show me anyone who has been #Canceled from their phone service? Ever wonder why nobody demanded Harvey Weinstein or Trump lose their phone numbers? Because they can't do it.
If AT&T wants to censor, they have to go find other business. They can't be in the telephone business and censor. It's a lawsuit and possible jail for the execs involved.
People are so desperate to justify censorship that they'll make up nonsense. I can't believe anyone is dumb enough to believe all the pro-censorship nonsense in my thread. The only explanation is bias once again forcing people to make up and accept nonsense.
His point was that the 1st amendment applies only to Congress. There are other laws that apply to other entities, but currently no law prevents Twitter from censoring.
I'm sure there are other nuances, but I'd be willing to say that it depends on whether their news feed is 'editorialized' or not (i.e., they choose what I see, or I choose what I see ...)
I don't think this is the case, unless the platform is run by the government. The First Amendment applies to restrictions on free speech by the government, and is not related to the actions of private enterprises.
> unless the platform is run by the government. The First Amendment applies to restrictions on free speech by the government, and is not related to the actions of private enterprises.
That's not quite accurate. There is precedence for regulating speech in private media for example, as in the case of broadcast networks. Broadcast TV networks are still restricted on speech/expression to this day, as are radio stations.
There's nothing to say that social networks can't also be brought under the regulation of the FCC and have their speech and terms of service dictated to them. This turns social media platforms into something more like a utility. All users must be treated equally or the platforms get massive fines. If I were attempting to get this through, I'd argue the major social media platforms are natural monopolies due to reinforcing network effects and the finite nature of user time (there can only reasonably be so many large social networks, and the market has already strongly demonstrated globally that that is a very small number). You can replace a natural monopoly with another (an independent Instagram, allowed to evolve as a threat, possibly could have killed Facebook for example and become the new monopoly), but you can't have 100 Facebooks that will all perfectly compete with eachother (some people will argue that in theory you could, but that would never happen in reality). For whatever niche / category / concept they target, social networks inherently trend toward either consolidation of users or death; it's why the Twitter clones were all killed off in the early days when people were still experimenting with that social concept. It's why there are not dozens of large scale YouTube or Reddit or even Stackoverflow clones (the rather brutal combination of network effects and finite user time wins in all cases and you end up with consolidation of users to just a few viable platforms). The market, largely left to sort things out on its own in the social media sphere, has provided a lot of proof that large social media platforms are natural monopolies.
The reason broadcast content was regulated was because broadcast frequencies are a finite, public resource. Internet platforms are neither of those.
Also, the FCC no longer regulates broadcast content. It publicly stated a number of years ago that it does not have the desire to do so. The stations police themselves in order to stay within their perceived community standards to prevent external regulation.
Today, unless a station does something really egregious and congresscritters get involved, there is effectively no content regulation.
In the mid-90's, I was part of a morning radio show that was #2 in a medium-sized East coast market. One day we said "shit" on the air three times to see if this was true. Nothing happened. We didn't hear word one from any listener, manager, or government agency.
It is accurate. Other laws can be created which apply to various other entities, such as corporations. But the 1st amendment specifically applies only to Congress.
You know what would help? More alternatives. What if Instagram, WhatsApp, YouTube, etc. were all still independent, or there was interoperability between Google Circles, Facebook and whatever else?
Watch how fast Facebook would adapt and be monitoring news and trying to conform to the public opinion if they felt they needed to compete.
The only reason we're in this position is a reduction in real competition in the market, and the only power they fear is that of democratically elected representatives.
Elizabeth Warren had this right. Break up the platforms, and force social networks to allow a download of your data in a JSON format and a move to a separate service. Easily. That is the ONLY regulation we need.
Social media platforms are the real monopolies. For the first time we have businesses that are provided for free and that are based on the concept of monetising their users. There is zero accountability, and it is the closest we will get to a natural monopoly.
It's interesting how they also collude with each other by defaming competitors and circularly maintaining this monopoly. A good example is the Apple/Google process for accepting new social media companies: it's nearly impossible, and held to a much higher standard than they hold themselves to.
I feel old writing this, but once a upon a time, phone-book companies (often the phone company, but not always) would deliver a phone book to every address with a phone. This was paid for almost-entirely by the advertising sold by the phone-book company.
If a company or person wasn't in the phone book, they pretty much didn't exist.
All of this has happened before, all of it will happen again.
This is the real answer. Removing the incentives that discourage interoperability and common standards would help. This probably means regulating spying-centric advertising and related mass-data-gathering out of existence.
Twitter should be a protocol with many providers and many clients, not a restrictive service from a single provider. Ditto most of the rest.
> force social networks to allow a download of your data in a JSON format and a move to a separate service.
In markets concerned with serving European users, GDPR already allows you to download your data. Facebook has allowed one to download all of one’s own messages, pictures, etc. for many years now.
The problem is that, in order to avoid infringing on other people’s privacy rights, downloaded data has to be anonymized: you can see your own messages as your own, but anything from another user will just be identified with a random identifier instead of their real name. This then makes it a challenge, when you want to import the data into a new service, to have all those messages associated with your friends’ accounts on the new service.
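To make the import problem concrete, here's a hypothetical sketch of the shape such an anonymized export might take (field names and the pseudonymous id are invented for illustration; this is not any service's actual export format):
    import json
    # Hypothetical sketch: your own data appears in the clear, but other
    # people only appear under opaque identifiers.
    export = {
        "owner": "my_account",
        "messages": [
            {"from": "my_account", "to": "friend_7f3a9c", "text": "see you at 8"},
            {"from": "friend_7f3a9c", "to": "my_account", "text": "ok!"},
        ],
    }
    print(json.dumps(export, indent=2))
    # "friend_7f3a9c" can't be matched to that person's account on another
    # service, which is exactly the re-association problem described above.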
For what it is worth, Apple, Facebook, Google, Microsoft and Twitter are all part of the data transfer project which is designed to do that [0]. It is still in development though and I cannot comment on how much each company is investing in it.
You're worried about "whatever is the popular opinion of the day" but want to leave that job to voters?
You're talking about censorship as if it's something civil institutions have responsibility for, but private institutions shouldn't be doing?
Your comment is concerned about slippery slopes, but doesn't consider the one we're already skiing down where we discourage institutions from taking responsibility for accuracy?
> we discourage institutions from taking responsibility for accuracy
Facts are often debatable; the set of facts that need to be accurate is also debatable; and you are suggesting that both of those debates be settled unilaterally by expecting (requiring?) the institution to "take responsibility for accuracy." Note that public pressure (read: popular opinion of the day) will almost certainly be the deciding factor in how those debates are settled by those institutions.
Are you suggesting that it is wrong for institutions to abstain from "bowing to the popular opinion of the day" in how they censor speech?
Where facts are debatable, the responsibility to accuracy would include a description of their basis and the limits of that basis.
Where there's less room for debate, responsibility includes an obligation to avoid "shape of the earth: views differ" approaches.
If individuals and institutions don't take up this kind of responsibility, then there's no such thing as discourse, only speech as indulgence.
And the value of popular opinion is directly related to which one of those is used in constructing it. Individual and institutional responsibility, however, will remain orthogonal to that.
Discourse is when individuals are willing to argue civilly with people who believe fundamentally different things, and are able to do so without concern for 'watching their words'. Whether they are engaging in an informed discussion or an uninformed one is between the people talking.
For anyone else to insert themselves into the conversation, public or not, and claim to have The Real Facts is arrogant. For an institution to attempt to referee the conversations it is privy to, prevents actual truth from being discovered. To require institutions to referee in that way? Is another act of arrogance: placing oneself above 'the other people who have to be kept in line'. To leave those institutions at the mercy of popular opinion, to referee as the mob sees fit, is insanity.
At least, that's the way I see it. Thank goodness we're still allowed to disagree on this.
There is no reason to give elected officials a free pass. It biases elections by giving them an even bigger microphone than the other people running. Apply the same standards to all.
Do you honestly not realize that this is the exact issue being debated? Whether the "standards" are being applied fairly and consistently irrespective of political affiliations? And whether the "standard" itself is loaded to censor certain relatively mainstream but not Silicon Valley viewpoints?
What you're saying reminds me of an anti-gay marriage argument I heard: "Hey, I'm straight and I don't have the right to marry a man. A gay guy doesn't have the right to marry a man either. Look at that: the same standard applies to all. Everything is fair!"
I'm not particularly aligned to either democrats or republicans, but I've looked at "fact-checks" by supposedly neutral parties and they're often absolute garbage. Not only is the "research" superficial, what is chosen to be fact-checked vs. what isn't fact-checked is transparently biased. The "same standard" does not appear to be applied to all.
Given the debatable nature of these questions, I think common sense would say it's better to let voters and the marketplace of ideas arbitrate rather than social media companies. And, as a general matter, I'm shocked people are still so into censoring in 2020. Isn't censoring books/ideas/speech something we laugh at, given how consistently wrong it tends to be? I'd say censoring is silly at best and evil at worst.
> I'm not particularly aligned to either democrats or republicans, but I've looked at "fact-checks" by supposedly neutral parties and they're often absolute garbage.
As usual, people approach a statement with their own biases. The issue is not between political affiliations, which the statement made no mention of. It is between the government official we have elected and everyone else. The office holder shouldn't have more rights than anyone else. If they want to make a statement, do it through the office.
Another bias of yours is assuming that I was for censorship. I argued only that the same standard be applied. That can mean no censorship too.
> And whether the "standard" itself is loaded to censor certain relatively mainstream but not Silicon Valley viewpoints?
The views expressed in this case were
"When the shooting starts, the looting starts"[0]. Which, while perhaps not Trump's intent, can reasonably read as encouragement (or incitement, or glorification) of violence toward people.
> the marketplace of ideas
Twitter is a part of the marketplace of ideas, is it not? Their actions are simply an action taken as part of the marketplace of ideas.
> What you're saying reminds me of an anti-gay marriage argument I heard.
To the unoppressed, equality looks like oppression. That's true whether the unoppressed is a straight man or a person who can threaten others with violence without consequence. So yes, the two situations are very similar.
> Which... can reasonably be read as encouragement (or incitement, or glorification) of violence toward people.
Do you not see how I could just as easily use your empty standard to ban tweets saying “A riot is the language of the unheard.” -- a quote I largely agree with? This quote explains rioting as a consequence of oppression. But if I were as easily triggered as you, I'd have it banned since it "can reasonably be read as encouragement (or incitement, or glorification) of [rioting]", a definitionally illegal act.
And frankly, your interpretation of the Trump tweet is not reasonable. The most reasonable, common-sense interpretation is one linking severe consequences to severe actions. Does the phrase "When you play with fire, you're going to get burned" glorify burning people?
> Twitter is a part of the marketplace of ideas, is it not?
In the standard notion of a marketplace, you let decentralized end-users gravitate toward things they like and avoid things they don't like. If Trump's speech is so offensive, then let END-USERS decide to block his content or barrage his comments with condemnation or vote him out of office. Instead, you're advocating for a major centralized "authority" to make the decisions about tweets that should and shouldn't be seen, whether or not they're popular with the end-users. Yes, if they disagree that strongly with the censorship, those end-users can leave the platform, but that's analogous to "well, if you don't like it, then you can just leave". If people like something, they'd rather fight to keep it good than let other people "ruin" it.
> ...whether the unoppressed is a straight man or a person who can threaten others with violence without consequence...
Mobs breaking windows, lighting things on fire, and stealing is violence. It exposes people to tremendous danger and tremendous injustice. Saying that violence begets violence should hardly be a controversial idea. I certainly don't think Trump's tweets are helpful in any way. I think they're incredibly tone deaf and unhelpful. But I'm not going to cry to Twitter or Facebook and demand it be blocked.
I'd ask that you respect the guidelines of HN and refrain from using needlessly charged language like "triggered". I'm not "triggered". I have views that differ from your own. Triggered has a specific meaning, either implying a traumatic response (as in "The loud noise triggered my PTSD") or being used to mean something like "offended". I, personally, am not "offended" by Trump's statement, at least not in the colloquial sense. I'm disappointed by it, sure, but not offended. He didn't insult me. Since we're discussing language and semantics, let's be precise, yes?
> "When you play with fire, you're going to get burned" glorify burning people?
Fires are not intelligent beings with free will. You cannot, by definition, encourage a fire to burn someone. So while shouting "yeah, burn him" at a fire would not be inciting violence, shouting "yeah, shoot him" would be.
I'll also note that context plays an important role in how we understand statements. For example, if I, an individual sitting at my computer, say "protestors deserve to be shot", while that's a rather crappy opinion for an individual to hold, I cannot take action on it. It isn't threatening. On the other hand, when a representative of the government says the same thing, we must take that same statement much more seriously because the government does have the ability to shoot protestors. The same thing applies if I make the same statement while standing with a firearm across the street from a group of people protesting. In context, the same statement can have very different connotations.
Further, Trump specifically has made no effort to differentiate between peaceful and violent protestors. As an example, he retweeted[0] an article[1] claiming that park police cleared peaceful protestors out of Lafayette Park. That article was based on a series of tweets that at the time concluded that park police didn't use tear gas and were unaware of the president's movements, but now include more context, that Secret Service agents may have used tear gas, and that the clearing of the area was ordered directly by AG Barr.
So we have a President who has the power to fire on people, and has used that authority to use weapons of war on peaceful, non-looting, non-violent demonstrators. And who actively tries to blur the line between peaceful and violent protest (as do police forces when they escalate at demonstrations).
So the statement has to be taken in that context.
> And frankly, your interpretation of the Trump tweet is not reasonable. The most reasonable, common-sense interpretation is one linking severe consequences to severe actions.
So let's dive into this. I'd agree, if I were to make the same statement sitting here in front of my screen, it would be mostly an observational take. A terribly worded observation, but an observation. However, I do not have the ability to order people to shoot protestors. Trump does (and in a sense he did: Trump's AG ordered police and soldiers to fire tear gas at peaceful protestors to clear the area for a photo op).
In other words, what is an observation when made by a person without the power to follow through becomes a threat when made by a person who can. And Trump does have the power to follow through on his threat.
> In the standard notion of a marketplace, you let decentralized end-users gravitate toward things they like and avoid things they don't like.
Indeed, and twitter is a participant in the marketplace. Twitter itself is not "the marketplace of ideas". There are other places to speak. The idea that every participant in the marketplace must themselves also be a marketplace is antithetical to the idea of a marketplace. You can argue that twitter or facebook has positioned themselves as a marketplace, but they've clearly never really done that. Both have always had moderation and speech policies. They were just speech policies you agreed with (for example: certain uses of certain words have always been banned on both platforms).
> If people like something, they'd rather fight to keep it good than let other people "ruin" it.
And you're welcome to do that. And others in the marketplace can listen, or not. Don't praise the marketplace of ideas on one line, and lament it the next when things don't go your way.
> Mobs breaking windows, lighting things on fire, and stealing is violence.
And I didn't say otherwise, but I think you've missed the point. I'll clarify:
The number of people who say that citizens on the streets who are acting violently should face no consequences is very limited. It's isolated mostly to anarchist and anarcho-marxist circles. People who commit violent crimes, indeed, deserve to face consequences.
But that goes both ways. Officers and forces who shoot unarmed, peaceful protestors should face consequences. There are hundreds of incidents of that happening in the past week.
If you have two groups, one who commits violence and is punished, and one who commits violence and is not, the one that isn't getting punished probably isn't oppressed, but the one getting punished might be. If you have two groups, one who commits violence and isn't punished, and one who commits no violence and is punished, well then something is very wrong. And one of those groups probably is oppressed. When the peaceful people are speaking out about being oppressed, and the response is to punish them, well the irony is palpable at that point.
So yes, a president who can threaten violence and face no consequences is not oppressed. While the citizens on the streets, citizens whom he is supposed to represent but is instead threatening with violence, they might just be oppressed.
> There is no reason to give elected officials a free pass.
> Apply the same standards to all.
I find these statements at odds. People can say what they want, and it may be accurate or inaccurate, truthful or misleading. Giving elected officials "a free pass" to do just that _is_ applying the same standard to all.
Elected officials have a microphone because of the political system and the endemic elevation of these regular, flawed, people to a status above the common man (same with police and a number of other power hierarchies in the USA).
A free pass on what exactly? This has gotten so crazy, I can’t believe people are protesting Facebook for not censoring a politician.
Here’s an idea... if you don’t like what somebody writes online, don’t read it.
Do people really think that censoring, or even annotating, the President’s words will change how people feel about them? Like someone is going to read Trump’s message, think “oh right, mail-in ballots could lead to voter fraud,” then read an annotation from CNN and conclude “oh never mind, Trump lied.” On the contrary, it will lead to further entrenchment of division as they retreat to the comfort of their pre-existing views.
Are people serious about this?? It seems obvious that the most likely outcome of censorship or editorializing is more entrenchment of views and pre-existing beliefs, not less. Nobody who is already listening to Trump is going to suddenly stop because Twitter or Facebook shows them a CNN article next to a “fact check” label.
It’s just so ridiculous, I can’t believe we are actually having this conversation. What happened to teaching people critical thinking and not to believe everything they read on the internet? And what the heck is “violent speech?” Just close the page if you don’t like it. My goodness.
I think it's a good thing to hold public figures accountable. Politicians, in particular, need to be held accountable due to the large amount of influence they have on our society. When Twitter, for example, places a warning on a Tweet to warn readers that it contains provably false statements, it holds the Tweeter accountable and that is enough reason to do it regardless of the changing minds argument.
As far as changing minds goes, I think there's a swath of voters who sit in the political middle and they don't fact check. The swath is large enough that they can influence elections. It's critical to make sure they don't get influenced by politicians who make up facts on the spot.
A friend of mine and I commiserated last week about the difficulty of debating with people on political issues. We noticed that people can make up "facts" quicker than we can fact check. You would think that the onus would be on the person making an argument to provide proof. But that standard of argument is long gone. The burden of proof seems to always fall on the person who says, "No, I don't think that's right."
Herein lies the rub. It’s extremely rare that a politician is tweeting about something so black-and-white that the tweet can be provably true or false. There is almost always a gray area between fact and opinion. In fact, people tend to vote for politicians because they agree with their interpretation and prioritization of a set of facts. So I think you’ll find the vast majority of political tweets reside in this gray area, because otherwise they wouldn’t be political in the first place.
And if “provably false” is the standard for editorializing, then Twitter picked a terrible example to set as the precedent. The tweet they “fact checked” was Trump making a prediction about the future. Namely, he was suggesting that mail-in ballots could lead to increased voter fraud. Not only is this an opinion, but it’s a projection about something that has not happened yet. By definition, it cannot be provably false.
Weird, I can think of at least one politician who constantly tweets, and most of it's easily proven false with the smallest modicum of research. It's almost like what you are saying has no basis in reality.
You're right. "No factual basis" might be a better standard. The claims Trump made had no factual basis. Twitter's addendum highlighted that fact. Nothing more.
I can't believe that people are criticizing an organization for exercising its right to free speech. Like this wasn't even a case of removal of content. It was literally taking a sign and sticking another sign below it. It's something that the government would be allowed to do to a private citizen. It's that far away from censorship. And yet.
I think the person you're responding to is talking about an asymmetry of rules - current politicians can say anything, everyone else has to follow rules. In that situation, current politicians can lie about their opponents, but opponents cannot lie back.
Actually it seems like social media companies are choosing who can't lie and who can. Normal users and even most politicians lie on social media every day; who cares. The problem I see is social media companies targeting specific leaders and what they see as "lies" while ignoring others.
And most importantly, who decides what is a lie?
Either take 230 away from the company and no-one can lie on the platform, or stop censoring/altering users and allow everyone to say what's in their legal right to say.
There's a meme about nothing being true on the internet, why do we need a Ministry of Truth now?
Because it will allow for some form of competition: they will be forced to make decisions, leaving space for other players to make different decisions.
IMO revoking Section 230 would benefit incumbents like FB and Twitter who have already made billion-dollar investments into content moderation, and hurt the small guys who suddenly become liable for every user comment they publish.
Can’t you imagine Mark and Jack telling Republican senators: “We wanted to enable free speech, we told you so, but you took away the protections around that so we had no choice but to turn the moderation dial to 11.”
You seem to work at FB so you might have some more insight than I do; if so, please share. On the other side, I see new players arriving and proving to be competitive because FB will be forced to take a position; if people don't like that position, they won't interact with the platform. Suddenly there's a market for a different platform.
I’m just a video rendering code monkey, I have no particular insight into this issue. (FWIW I believe Zuckerberg is genuine in his position, but I also think he’d change it rapidly if the legal environment tilted the other way and it became a genuine business threat.)
I wonder about those different platforms. They already exist — Gab simply isn’t that popular. Revoking Section 230 would expose Gab to a “Thiel vs Gawker” situation: a billionaire could sue them into extinction. That doesn’t seem desirable at all.
I'm not a fan of Gab, but it could easily be not popular because it's offering a worse version of the current form of FB.
If FB's status changed, Gab could theoretically become more popular. Not all new platform/publishers would be the equivalent of a Gawker and thus raise the ire of a billionaire. There are plenty of independent journals, magazines, blogging platforms, etc that don't get sued out of existence.
> You seem to work at FB so you might have some more insight than I do, if so please share.
Please don’t make accusations of astroturfing. It’s rude and against the HN guidelines. Why? Because the majority of the time, it’s not true. I don’t work at or have any connection to Microsoft, but a week ago, I was accused of astroturfing for them because I did not agree that them open sourcing stuff is EEE.
> Here’s an idea... if you don’t like what somebody writes online, don’t read it.
The problem isn't that people "don't like" what elected officials say. It's that people often take what elected officials say as truth without even considering that it might be false (they must be Smart and Good if they're in office, right?)
Fact checking and censoring prominent figures isn't going to affect people who already have strongly held beliefs. But it will make a difference to people who are impressionable or vulnerable or don't know any better.
that is exactly what Twitter has done. They didn't block Trump's tweet, they added a box basically saying "be wary of this statement, you should fact check it"
Is it true that public education is being destroyed? I'm having trouble finding metrics by which it has declined. In terms of budget, 4.1% of US GDP is government expenditure on education. For comparison: Japan is 2.9%, Canada 4.3%, UK 4.3%. This number hasn't changed much. Over the past 20 years it has fluctuated between 4.1% and 4.7% of GDP.[1] There might be a slight downward trend, but if there is it has followed most of the other developed countries.
The US is a rich country. If anything, using percentage of GDP understates just how much we spend on education. For primary & secondary education (everything before college), there are only four countries in the world that spend more per student than the US: Luxembourg, Switzerland, Austria, and Norway. For post-secondary education (college and grad school), the US spends more per student than any other country.[2]
If you look at other metrics such as the percentage of adults with college degrees, or the percentage of adults who graduate high school, the US has never done better.[3][4] If you look at PISA scores, we're very close to the OECD average and that number hasn't changed much over time.[5] (Like most developed countries, absolute scores have slowly decreased over time. It's not clear why this is happening.)
If people have been working for decades to destroy public education, they're doing a very bad job of it.
> Like most developed countries, absolute scores have slowly decreased over time. It's not clear why this is happening.
I teach in college. I was born in the early sixties. My daughter teaches math in high school. Recently I showed her my math books from my old high school. Her response: No way that my students could follow this.
Where I teach we have to give extra classes in basic math (I'm in STEM).
We're definitely sliding. Reading seems to be sliding even worse.
By the way, I don't think there is much of a correlation between how much a country spends on education and the quality of it.
Annotating is fine as long as the user can choose what annotations they see and how those influence filtering. This idea that the stupid little people need the protection of the enlightened ones doing the censoring is profoundly un-american and must die.
Facebook/Twitter/et al created the systems which focus people's attention and incentivize the production of divisive poor quality information in the first place - why shouldn't they take responsibility for fixing their mistakes?
Cambridge Analytica and meddling in the elections by Russian trolls is what happened.
Also: a quite effective assault on climate action using misinformation, violent riots, a movement campaigning for civil war, ridiculous medical advice by the US president leading to deaths during a pandemic, etc.
This all happened and permanently changed the debate. Free speech is poisoned these days and we can't simply ignore that.
> This all happened and permanently changed the debate. Free speech is poisoned these days and we can't simply ignore that.
Even if that is the case, it shouldn’t mean we give up on free speech and allow it to be effectively eliminated “so it can be saved”. Free speech with some censorship is just censorship.
A robust public debate is how ideas are fairly brought out and given an opportunity to thrive or die. Having Twitter or a government or “fact checker” or any other person/organization in the middle adding their outsized influence to the picture brings the whole debate out of balance. Why is Jack’s voice worth 1000x more than yours? Because he’s more “enlightened”?
We're getting onto our own slippery territory here.
If I let someone into my living room, and that person makes political statements I don't like, I'm free to ask them to leave and not come back, or tell other people in my living room that I think the statements are factually incorrect.
If Facebook lets someone onto their website, and that person makes political statements Facebook doesn't like, why can't Facebook ask them to leave and not come back, or tell other people on Facebook that Facebook thinks the statements are factually incorrect?
Where do you draw the line between my living room, where I'm free to regulate who my visitors are and what those visitors can say, and Facebook?
Everyone has more of a right to a public space (like a town square) than a private living room or a private website (hadn't occurred to me that being publicly traded may have implications actually).
As I've written in another recent comment, the problem is when Facebook or other networks become quasi-official sources for public communication. If the primary means by which the government thinks I will find out about public health orders, mandatory business shutdowns, etc. becomes social media, I'm very concerned about the messy mix of "public space" and "private space" that Facebook will become.
I fear social media IS the primary place that a lot of people get their news. That is a huge problem because of all the propaganda, fake news, knee jerk reactions and everything else that is spread there.
Agreed - I think it's a distinct problem, though. Crowdsourcing news where the crowd is ignorant and uneducated is harder to fix than what I'm getting at, because the only ethical way to do it is to convince people en masse to be better about identifying credible sources and thinking critically.
But I'm actually more concerned about a state where overnight the government can ban normal day-to-day activities, and if you're on a 30 day ban from Facebook you don't know about it until they're arresting people for surfing. Or, in my town's case, where they've stopped notifying residents of construction-related closures with written notice to each house days ahead of time, and have just started putting it on NextDoor.
I feel your point here is important. Facebook has a lot of power. To the point where I feel they owe the public more than they would if they were a small entity with a narrow reach.
There are two principles here: 1) it's a problem if you have enough power to force the weak to cover the expenses of your failures and 2) if you have a lot of power you also have the ability to defend yourself from assholes and malicious actors.
So, for point 1, if facebook decides that they want to dump trash into residential areas, they could silence everyone on facebook who complains. They could also call up their friends at twitter et al and convince them to do the same in exchange for whatever massive corporations like these days. Because of this possibility, I want facebook's freedom to curtail freedom of speech on their private platform to be extremely limited.
For point 2, if facebook becomes inundated with assholes or (heaven forbid) malicious trolls who make relentless fun of the font that facebook uses for posts and every post that everyone sees is just complaints about fonts, then facebook has the capability to create a special asshole facebook website and marketing campaign to convince the assholes to go somewhere else. They have the ability to switch out the font. They can fund research into powerful AI techniques that can group all the font complaints together so that only people who want to see that crap will see that and everyone else can have a good experience. Because of facebook's ability to deal with malicious (or social incompetence that is indistinguishable from) behavior, I want facebook's freedom to curtail freedom of speech on their private platform to be extremely limited.
On the other hand, if we're talking about an individual (or otherwise small and powerless organization) running a personal blog (or your living room), they have extremely limited ability to force other people to deal with their platform. And they also have extremely limited ability to deal with malicious behavior if they didn't otherwise have the right to curtail freedom of speech. In these cases, I want them to have greater rights, because the likelihood of them being able to use these rights to oppress others is minimal AND their need to defend themselves is greater.
Not a parking lot, but a lot of suburban communities are built around large shopping malls, operated by companies like Westfield. It’s not exactly conducive to protests, as the malls are private property and there’s no significant pedestrian traffic anywhere else.
Such arrangements are not conducive to the political process, and that should worry everyone. If something can be used against someone it can also be used against you.
If we're continuing the parking lot analogy, it's like there's an almost unending grid of parking lot after parking lot, and you can travel to any of them instantaneously and for free. Some parking lots have a lot of people, so you'd rather go there if you want to be heard, but nothing is stopping you from setting up shop in an empty lot, and nothing is stopping people from visiting your lot.
Now, given that major social networks advertise pretty much everywhere, this might not be an entirely fair analogy. Some parking lots have ads for them posted in every other parking lot. But it's a point worth making that access is relatively egalitarian on the internet.
Thanks for the clarification. You're right, access to the internet at large is more or less egalitarian; having your voice heard (and not hidden), is not though, and therein lies the problem.
If a tree falls in the forest, and no one is there to hear it, did it even make a noise?
Also relevant is this very interesting research experiment conducted ~5 years ago on social media manipulation, in which the term "Censorship 2.0" was coined.
That's true, we used to think of it in terms of access, but I suppose now that anyone can figuratively start their own news network, the issue is who gatekeeps the directories rather than the access.
The network effects are somewhat irrelevant and GP's comment is correct. If you do not use a retailer or service provider's business in a manner that they like, they are normally free to discontinue your service and ask you to leave. The same principle should apply to Facebook as they are a service provider like any other which has not (to my knowledge) explicitly been designated as having special status in legislation.
You can't decide Facebook is subject to a different set of laws or has fewer rights simply because people have arbitrarily decided to communicate on its platform more than another platform. Every private entity should be afforded the same rights to regulate their private property or services unless there is legislation specifically saying otherwise.
If you or any other person do not like the way Facebook is regulating usage of their platform, you are free to go start up your own social media platform with a different ToS (and many people do).
>You can't decide Facebook is subject to a different set of laws or has fewer rights
Oh but we can, that's what democracy and law makers are for, and if you've been keeping up with the news at all for the past few years, you'll know that's where we are increasingly likely to be heading.
If you’re arguing that they shouldn’t do something, that’s different from arguing that they can’t do something. I’m a free speech advocate, but the First Amendment does not apply to private corporations except in very rare circumstances.
First, the rules are different in commerce than they are for private property.
Second, some entities are so big they constitute a public good, even if they are not recognized as such. If Google Search started to openly promote one side of politics and not the other, the regulations would hit hard and fast.
Third, this idea of 'populism' is a little warped. People agree or disagree with the nature of the protests to varying extent. Don't assume that the press and those writing letters to FB, or definitely FB employees themselves, are in any way representative of anything.
Fourth, it's in the company's interest to try to remain 'neutral', in whatever terms that might mean.
Finally - it's in everyone's interest for said platforms to have consistent, objectively applied, and hopefully independently monitored criteria and operations for managing what they take down and not. Zuck should not be involved in any of the day to day operations, and should not be commenting on any specific situation or post.
But Facebook does regulate the feeds of their users by employing algorithms to surface content that they think the user will engage with, and so get bucks for the eyeball.
This distinguishes them from libraries, the only other similar platform for unmediated content. Libraries treat all information as equal and curate it as such. Facebook does not treat information equally, which means it is already moderating content.
Facebook already censors content that politicians deem unsavoury, so why should one political message get a free pass while another is removed?
Libraries have a featured books section, or many of them do. Someone decides what is featured. If they are pro X they'll feature pro X books. Similarly they have limited space. Someone decides which books to take in and which books to throw out. Libraries have also banned books.
Sorry to be a little pedantic, but libraries do not treat all information as equal. There are far, far fewer books written by nazis in the library and most of them are by former high ranking members of the 3rd Reich so students and historians can research the Second World War and The Holocaust.
Even access is limited in some libraries for some information like technical libraries and books describing the manufacturing or design of dangerous materials.
I am not sure librarians are the ones who decide that a particular subject is socially or technologically dangerous.
The point still stands, the information that libraries provide is organised to an open standard defined by information science, not an opaque algorithm that is subject to the whims of its creators.
"Facebook employees should take some measure of pride in that, he said: "I would urge people not to look at the moral impact of what we do just through the lens of harm and mitigation. That's clearly a huge part of what we have to do - I'm not downplaying that. ... But it's also about the upside, and the good, and giving people a voice who wouldn't have previously been able to get into the news and talk about stuff. And having painful things be visible. I think that matters, too."
Employees I spoke with did not seem particularly moved by these answers. "Everyone's grateful we have a chance to address these things directly with him," one told me. "At the same time, no one thinks he gave a single real answer."
Another said Zuckerberg appeared "really scared" on the call. "I think he fears his employees turning on him," the employee said. "At least that's what I got from facial expressions and tone."
At the same time, another employee told me that Zuckerberg's decision was supported by the majority of the company, but that people who agreed with it were afraid to speak out for fear of appearing insensitive. (An employee who spoke on the call echoed this point.)"
So much for "free speech" within Facebook, the company. In this case, "popular opinion" cannot be shared openly.
Free speech is hard when you watch George Floyd get murdered, anti-racism makes a huge leap toward censoring Trump (and labeling is a form of censorship), and then you’re left justifying why Trump’s horrible message should be left untouched.
It’s a scary thing to get labeled as a racist because you defend free speech. I think it illustrates why we should shy away from censorship as much as possible.
People here saying that social media companies should be considered a utility once they reach a certain size are very on point.
I have zero doubts that the anti-racists would happily advocate for Trump to be cut off from electricity/internet/sewage/water.
In some ways I agree with them, but there are healthy methods of activism and unhealthy methods of activism. We shouldn’t be encouraging companies with billions of users to weaponize the unhealthy forms.
I am interested to see how many people who want Trump’s post labeled have read about the history of censorship in Europe and China.
Christianity used censorship a long time ago, and what were the primary principles they believed in? Do unto others as you would have them do unto you, love other people, don’t do drugs or commit adultery. Then when you look into the cracks you can see the crusades they went on and all the horrible things that occurred from it.
Which, as you started to say, is the slippery slope fallacy.
I don't see what the problem is with embedding some factchecker work beneath content (NOT "censoring" it, which is a ridiculous statement since I haven't seen this at all) that is making claims unsupported by literally a single Google of the claim, with evidence provided by reputable sources. Bad information is harmful, and the vast majority of social media consumers are lazy and reactive and not very good critical thinkers. This wouldn't even be necessary if any sort of critical thinking was taught before college, but it isn't. (And even that might not help, according to some other claims, which again necessitates that the people whose voices are heard give people good information.)
Do you think people have the right to convince other people of something that would be harmful to them, like a charlatan or snake-oil salesman? Ethically, I don't think so, and I don't think it's a "slippery slope" to anything that is an overall bad.
Plus, private companies can do whatever the fuck they want. If they want their content to be factual, they have the right to enforce that according to their criteria. Anyone is welcome to create a competing site that allows any and all harmful bullshit to flourish.
The US president is the single most powerful person in the world. If he says "Hey $literally_any_media_outlet, I'd like to make a statement, want to swing by the rose garden in 20?" The media will fall over themselves to do so. I don't think you could come up with an example of someone who would be less harmed by being banned from social media than the potus.
Even if for some reason you don't buy that, social media could easily justify a ban by differentiating between his personal and official government accounts.
Both facebook and twitter give the president something more than traditional media: retweets, engagement, reaction, analysis etc.
The whole 'the only good democrat is a dead democrat' and glorifying violence would have been much harder to do with traditional potus press statements.
Twitter and Facebook allow him to use these new tools, despite his tendency to incite violence. They help to enable him. And it's their choice. And they thus make his words much more effective.
You're worried about free speech - but what happens when an elected official calls for violence against a group of people? What happens when those calls for violence are answered and people die?
There is a reason yelling fire in a crowded theater is not protected speech, and companies should be in the business of censoring speech that incites violence.
Sorry but you are incorrect. Platforms have protections while they act as a free and open forum. This protection ceases to exist the moment you start fiddling with the content. Obviously, Terms of Service are there to filter the garbage out but any double standard can end up with your Platform credentials revoked and your business down the drain.
Your comment doesn't even slightly reflect the law. I don't know why people keep spreading this.
Section 230(c):
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Furthermore section 230 (c)(2) states:
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
So to be clear, it's not that the law is unclear on this, it's very clear on this: The publisher has no responsibility to consider whether speech is constitutionally protected at all. Not only is there no qualification that these platforms need to meet in terms of moderation, but the law actively states that the platform can moderate speech.
Yes, and the debate is whether or not the companies are acting in bad faith and abusing this 1996 law by selectively altering their users' posts and presenting their information, thus acting like a publisher.
Yes, this is the current state, but the question is whether it's morally right and whether we should update it to reflect the current situation of the social media behemoths; this law was designed for small bulletin boards at the time.
Yes, I believe the discussion is to update Section 230 to make those distinctions as the law was created in 1996 to protect bulletin boards, not the social media giants we have today.
It's a discussion we should all want to have to preserve our freedom of speech from being infringed on by these (very very few) social media companies.
I don't think anyone is arguing they don't currently have these powers, it's whether they should and whether they've been abusing them.
> It's a discussion we should all want to have to preserve our freedom of speech from being infringed on by these (very very few) social media companies.
Why? Twitter is not infringing on people's freedom of speech. Trump's comments are proof of that because he's continued to have a platform on the site. People keep saying that they're being oppressed by the site they're using to say they're oppressed. No one is being oppressed by Twitter.
You take away 230(c) from Twitter and one of two things will happen:
- they'll cease to exist because they will become legally liable for all content their users post
- they'll automate moderation much more aggressively and "censor" 10x more than they do currently.
Also, it'll make it harder for competition to exist because the government will increase the burden on creating a site that hosts user generated content. It will be a contraction of speech online.
Really, if you're not happy with the moves these companies are making, you should be advocating for more competition so you can switch to an alternative.
I can want more competition as well as more accountability for Twitter when they choose to target specific tweets/accounts.
Facebook has for the most part stayed out of it and they are being attacked for not providing a Ministry of Truth like Twitter does.
So the little bit of competition that Twitter has is being coerced to do the same.
If Twitter continues down the route it's going and continues to publish its thoughts and link them to others' thoughts, then I think they should have their 230 protections revoked and that will probably shutter them. Good.
Hacker News should be all about breaking up monopolies and regulating companies, what gives? Ah yeah, the elections.
You're arguing that the US government should be able to force companies to publish content they don't want on their sites.
> Ah yeah, the elections.
I'm not American, so I have no horse in the elections, apart from wanting twitter to continue to exist.
Twitter is a private company; it should be able to moderate its platform as it sees fit. If it sees content that it feels is dangerous, they should be completely within their right to remove it from their site. Preventing them from being able to do that is the government restricting twitter's speech.
If you don't like how twitter moderates its platform, you should have alternatives to go to that moderate differently. That's basically the whole premise of subreddits and, well, a "free and open market", rather than the government forcing companies to publish content they don't want on their sites.
Yeah, at best it's an outdated law. At worst it's an excuse for companies with immense wealth and power to enforce their political will on millions if not billions of people. I would be fine if there was some remote form of competition at scale with these platforms, but no one can make an honest argument that these are not dominant players who are quickly pulling up the ladder behind them. We need to update our laws to reflect modern reality
> Platforms have protections while they act as a free and open forum. This protection ceases to exist the time you start fiddling with the content
This is a popular belief among people who feel some viewpoints are being "censored" but it has no basis in law or reality. My cynical perspective is that if enough people spread this lie, it will become true in the court of public opinion, which is what the people being "censored" are hoping for. (the scare quotes are because "censorship" is a scary word used in place of "moderation" or "community management" or "spam removal")
Platforms are 100% allowed to take down material that they find objectionable for whatever reason.
Those protections afford immunity to civil claims related to copyright and infringing content. It does not apply to matters of decency. Many online platforms have booted their bigots and hate speech without any consequences to their ability to continue hosting content.
Please do not conflate civil torts with criminal hate speech
Not only have they booted the bigots and hate speech, it's only because they booted them that they flourished. That's also the reason alternative "public spaces" without that kind of moderation exist... think 4chan, 8chan, etc. It doesn't matter where any of us would draw the line; what matters is that the line is there for everyone. When Twitter bans an experimental account that was just reposting Trump's tweets as its own, they should also ban Trump. If they ban other people for the same choice of words but not him, it leaves a bad taste. Facebook is on another level: in Myanmar, Facebook was used as a tool to organize an ethnic cleansing. "Free speech" is the modern "we just followed orders" of tech companies.
There are lots of options between full on censoring and not doing a thing. Displaying that certain prominent posts aren’t accurate is a good start, much like what Twitter did.
I was responding to "Displaying that certain prominent posts aren’t accurate is a good start, much like what Twitter did" not being quite enough in this case.
Yes, exactly. Sunlight is the best disinfectant. We should not allow speech to be banned, because then we do not know what people think anymore. If we suppress free expression we may find that really bad ideas are silently spreading without being challenged. And it may end up really badly for all of us when people holding these ideas suddenly surface. You do not need a majority to make quick and radical changes.
Social media sites should also feel free to apply the same content standards (no calls for violence or fake news) to all users regardless of their political status. Politicians aren’t entitled to use Facebook or twitter. They can start their own social media site where racism and false statements are allowed
>social media sites should not be the business of censoring elected officials.
It would help to make things clear if FB, in their terms of service, specifically and explicitly excluded public officials from limitations on hate speech, incitement to violence, and other speech otherwise prohibited on the platform, and marked that speech accordingly.
Or let an official explicitly check such an exception in his/her profile, or maybe at the individual post level, a la carte style with separate checkboxes for hate speech, racism, etc., and mark such a post as made under that exception: "This post is published under the hate speech, [comma separated iterator over the checked checkboxes' values] exception requested by this public official."
Read section 2 of their Community Standards about Dangerous Organizations and Individuals. Then read the definition of a terrorist. They really do have exceptions for this.
Right, so the people that are registered with the US political Party ANP (American Nazi Party) Or many other parties you have never heard of could then check the box, and be free to say whatever they want. Or is Facebook going to police it only for a few popular political parties in the US, and not censor which political parties get speech?
I meant public officials. As there is at least some public interest in giving them platform, and there is at least some minimal check/scrutiny for them - their voters.
Would it help? I would expect that to just make things worse, as the right to freer speech on Facebook becomes an explicit asset with unequal allocation. An informal policy seems much less dangerous.
The First Amendment protects citizens from censorship by the government. It doesn't protect the government (or citizens) from censorship by private business. This is why having unregulated private platforms be the primary forum for political discourse is a threat to democracy.
If Facebook or any other social media site wants to do that, they should be classified as a common carrier and be regulated by the FCC.
"One reason common carriers are licensed is to ensure they provide indiscriminate public access and protect the privacy of the general public."
Facebook is not only in the business of publishing people's speech, it is also in the business of curating people's information input. They are only defending their obligations for the first of the businesses, not the second.
* If someone criticized newspapers for yellow journalism and inciting violence with sensationalism, would it seem so weird to push the newspapers to regulate the content posted on themselves?
* Is the way that people view large public figures on social media all that different than how they did with newspapers?
I'm not sure if these questions imply anything about how social media sites should deal with regular small-audience / low-influence users, but I think any platform sharing influential posts to a large audience has similar responsibilities to a newspaper.
Ultimately, the comparison between government censorship and social networks' decision to spread somebody's message falls flat because of the different baseline:
If governments do nothing, there is no censorship. Governments have to decide to take action for there to be censorship.
If social networks do nothing, nobody gets to spread their message through them. Social networks have to take action (write software, run servers) for messages to be spread.
This difference changes the calculus of moral and ethical responsibility.
Sure, why not? It is illegal for police to enforce laws on certain public officials in certain circumstances. IIRC members of congress can't be stopped for a traffic violation on the way to a vote, or something like that.
The ancient Romans didn't allow prosecution of some public officials while in office for a similar reason. Their courts could too easily be abused as a political tool.
It's not unreasonable to leave public officials unmoderated while they're in office.
The real problem here is that we're letting private companies make the decision. We should pass a law that requires unrestricted free speech on any platform of a certain size. And then give users optional moderation tools. Users could toggle a button that says "Show me fact-checking alongside controversial topics" but it would all have to be opt-in.
There's a huge misinformation problem with these platforms. Presenting believable false information at this scale isn't something that used to be possible, but now it is.
It could lead to censorship, which would be terrible, and we should let people post whatever is legal. A way around this would be to combat the misinformation by making sure the reader has quick access to the facts like Twitter's fact checking.
Believable false information has always spread and always will.
I grew up in a rural city and was taught to believe that jet fuel doesn’t melt steel beams, aliens were real and the government was hiding them, farms were being oppressed by liberal government in favor of factory farms, ghosts were real, cellphones cause cancer, vaccines cause autism.
This shit was always here; the difference is that people who were completely ignorant of it spreading now know about it.
The problem is that social media has made it possible for anyone to reach an audience of billions with a click of a button. This so radically changes the dynamics of how we communicate with each other that some new rules are going to be needed.
It's amusing that "slippery slope" is originally the name of a fallacy, but has now become a simple statement of supposed fact (i.e. "if we do something, we won't be able to, over time, stop ourselves from doing even more of that thing").
> So long as their speech remains legal, social media sites should not be the business of censoring elected officials.
With all due respect, I disagree entirely, and elected officials censor each other as a matter of doing business. Social media sites should be designated as journals and subject to the same ethics as journalists. I think the novelty of harvesting personal data (as in, "Oh, you mean someone cares about my favorite color? Neat!") can be tempered by holding users to the same standard as journalists.
Social media sites are just journals. I am not going to allow Tinker Bell to sprinkle her magic pixie dust over Facebook and Twitter to give them some "special" status that allows them to avoid their journalistic responsibilities. They are in the journalism industry; they should lead, not beg for special consideration.
Slippery slope is a fallacy. You realize that these sites are choosing, on their own prerogative, to censor; if it becomes an especially burdensome part of their business, they are free to stop.
It should be decided by the market. If that can't happen because there is an oligopoly on social media, maybe we should look at antitrust suits breaking up large players.
While I agree that censoring speech is something that shouldn't be done, I think calls for physical violence are an exception. There's no doubt that some of Trump's tweets in recent days fall into this category. They don't need to be deleted entirely, as I also see the importance in preserving statements from public officials, but they can be hidden under a clear warning like what Twitter has done.
BTW is threatening violence against fellow citizens even legal?
If our elected officials use your platform to share lies and hate you cannot morally ask people to be silently complicit.
Our elected officials have their own electronic resources. Every senator and congress person has a website. Trump has a website. Anyone with $100 can have a website.
Let them distribute hate and lies on their own site.
There's legal recourse for this (see Chaplinsky v. New Hampshire, among the many other SCOTUS rulings curbing free speech in special cases). Social media companies are not the arbiter, the courts are.
Fighting words only apply to face-to-face speech and even then it's somewhat doubtful. If you want further analysis, look at some of Ken White's writings on it on Popehat. Incitement as defined by the Brandenburg test is more likely to apply.
I don't expect FB to be an arbiter, but in a more general sense, to prevent harm done using the facility. Like a transport company being held liable if someone is using them to mule drugs, or banks for money laundering.
You can make a coherent argument that they shouldn't, but that would require much heavier censorship on Facebook's part than is present today. How many Facebook posts glorifying the looting were left up uncontroversially?
Facebook banned me for 3 days for commenting on a previous ban where I had said "white people suck" and "white people are savages", both for pictures contrasting white people protesting vs PoC and First Nation people protesting.
Disclaimer: I am very white. I was not promoting hate.
You miss the point. How hard is to find hate speech which isn't censored?
Out of interest I searched for "alt right" on Facebook, and it took about a minute to find a post that said "Social media has made too many of you comfortable with disrespecting people and not getting punched in the mouth for it."
The post is dated Feb 9, so it's been there a while. It's not unusual for that group.
And so on. It's one thing to have community standards, it's another to prove you're applying them fairly.
I see no evidence that Facebook is interested in doing that.
I think that's a separate discussion altogether. I'm sure there are instances of hateful speech that should've been removed but nevertheless weren't or haven't been. That doesn't mean that the instances that were removed shouldn't have been.
I think the context backs up my position. There was no justification for calling an entire race of people savages. The image made its own point and didn't need to be coupled with hate speech. That definitely promotes hate and should've been removed, and you should've been (temporarily) banned, just as you were.
Everyone who promotes hate speech thinks they're just being misunderstood. You're not the first.
You called them savages. In addition to making wild accusations against me in your latest edit of your previous comment, you are now being disingenuous about what you wrote in order to make yourself seem more innocent.
Yeah, I did. Please connect the dots from that comment in that specific context to where actual hate comes into play, and more importantly, where that supposed hate translates into a pathway to discrimination, violence, or other ramification of real hate speech.
I'm not trying to be innocent. I make mistakes all the time. If I do an edit, I append and keep the original content intact (save for typos) because I try my damndest to own my words.
My accusations against you are my guesses about you. You are very comfortable in judging me, so I'm not sure why you feel so hurt when I share my tentative assessment of you.
One more datapoint: that within the private group that the original comments were made, the feedback I got on this issue was aligned with my perspective. Something tells me that in your social circle it would never happen.
> One more datapoint: that within the private group that the original comments were made, the feedback I got on this issue was aligned with my perspective. Something tells me that in your social circle it would never happen.
Your one data point is pretty funny. I bet within any racist group, hate against another race would also get positive feedback.
It's a play on the fact that native americans have traditionally been referred to as savages, and this was my commentary on "the pot calling the kettle black".
Replying to myself because the edit window closed.
I posted this scenario (the background of my original post, as well as the response here) to FB to ask for feedback. I specifically asked for criticism if they saw it (not just for echochamber validation).
All 10 respondents thus far saw no issue with what I said, with the most telling comment to when I asked again:
Q: Do you think I was promoting hate speech in the given contexts?
A: possibly, In the eyes of someone threatened by your opinion.
I stand by my words. I will correct them when the correction is valid, however, I've not seen that here in this dialog.
> Zuckerberg’s decision was supported by the majority of the company, but that people who agreed with it were afraid to speak out for fear of appearing insensitive
It's unfortunate, but from private conversations I've had, I've seen it happen a lot at companies like this.
The worst part is those I speak to have very modest views, good intentions, and have put thought into it. But they're terrified of ever speaking up at the workplace for fear of what those that are most radicalized might try in response.
It's difficult to see this article from that perspective given the clear mendacity and sleight of hand at play from Zuckerberg.
It's not like a Facebook feed is the deterministic result of a given user's subscriptions to friends and businesses, as if Facebook were a kind of user-friendly clone of a FOSS mailing list. Facebook engineers the contents of the feed in a way that is typically inscrutable to the user, and often harmful-- driving division and filter bubbles in order to keep up engagement. It's democratic in exactly the same way that Las Vegas would be democratic if you removed every single regulation on the gambling industry there.
So I'm perfectly happy to concede that even in a system designed with dark patterns to eat as much users' attention as possible-- well past the scant value they receive from using that system-- it'd be even worse to censor the POTUS account. But I highly doubt that's the kind of reasoned defense you're talking about. Any Facebook employee uttering that knows the next question coming is why they work for a company that employs so many dark patterns to generate so much anger and misinformation among its userbase.
In some ways this seems endemic to all organizations with >1000 or so people. It's crazy the extent to which white nationalist orgs have infiltrated Law Enforcement/the US military.
That being said, in many ways companies/organizations are simply biased samples of the cultural milieu, so I'm not sure there is an easy company-level policy for changing this. Given the Banjo/Clearview.ai KKK/alt-right stuff, the fear of radicalized tech employees is a matter of real concern, but it's definitely a hard problem.
I'm agreeing with the parent comment about the chilling effect that radical political groups can have on moderate dissent. White Nationalism's relationship with Law Enforcement/military is pretty well documented, and is very clearly under the umbrella of radical political movements. But if that's causing confusion, I used that as an explicit example of the phenomena the parent comment was describing.
I am curious what you mean by "well, that's the opposite of the case here." What, in your opinion, is the opposite of White Nationalism on the radical spectrum?
I want to clarify that this is in NO way verifiable.
There were no company-wide polls taken about this.
The most vocal position so far has been large-scale dissatisfaction with this decision, but there is no way to actually know whether that is just a 'vocal' minority.
So unless this person they 'surveyed' did some sort of independent analysis sampling a significant portion of the company, they are full of shit.
To be clear, I am not advocating for either position in this post (though I do have an opinion), but this is just a bald-faced lie.
It's not entirely clear how they came to that conclusion without any data to back it up. How could they know? Was there an internal survey or something?
Or perhaps Facebook employees generally lean right? Any FB employees here care to comment?
It’s impossible to tell, especially working remotely. It’s not like you can ask these kinds of questions over Workplace chat unless you’re close with other team members.
But the perspective that there are people who agree with Zuck’s decision and aren’t speaking up is valid. I’m one of them despite generally agreeing with BLM _and_ being a person of color.
You have people changing their profile pics to BLM icons which makes it ever so apparent that a non-negligible percent (20%-30% maybe) of co-workers in each group chat support the cause.
During Q&As, 95% of the comments are from BLM supporters making quite blunt remarks about Zuck’s decision making.
Director+ levels are also vocally against Zuck’s decision (although they accept that they can’t change his mind). I’m not willing to torpedo my career progression here, and imagine others feel the same.
So yeah, I’ll be keeping my mouth shut and silently nodding along with the rest while Zuck gets rocks thrown at him.
Just out of curiosity, what keeps you from speaking out (besides a strong feeling of groupthink)?
Have you actually experienced firings / ostracism from people at the company? While I do feel like your points would be contrarian (and may get shouted at), I'm wondering what the actual repercussions would be.
Most of what I've seen (with regard to dissenters) has been relatively spirited debates and strong digital dissension, but no strong tangible effects.
I'm asking this in good faith as someone who sits on the other side of the fence on this issue (though like with most things am not 100% assured of my decision).
Baltimore's homicide clearance rate also dropped from 40% to somewhere in the 20s because the breakdown in police/community relations made people stop snitching.
There's a similar spike in St. Louis, timed to the Michael Brown killing.
The national murder rate spiked 25% in the last two years of Obama's presidency, after decades of decline. There isn't any alternative explanation.
Well, I don't think they did this, but it's an interesting technique for this sort of scenario. Unfortunately, I can't find a wiki article to explain it (I believe it's what statisticians call the randomized response technique).
Anyway, I think the idea was someone needed statistics on how many people were stealing things after some sort of event, but obviously they couldn't just ask people because who would admit to breaking the law.
So what you do is ask the question "Are you stealing?" and then have the person flip a coin out of your view. If the coin is heads they answer honestly, but if the coin is tails they always answer yes. This way the person in question has plausible deniability (I didn't steal, I said that I did because the coin came up tails). Then once you have the data you just subtract out the forced "yes" answers and rescale (since half of the answers are coin-forced, the true rate is roughly twice the observed "yes" rate minus one), and you end up with realistic data (or so the statisticians say).
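For what it's worth, the arithmetic behind that last step is a simple inversion: if the true "yes" rate is p, the chance of hearing "yes" is 0.5*p + 0.5, so p = 2*(observed "yes" rate) - 1. A rough sketch of the idea in Python (my own toy simulation with made-up numbers, not anything any company actually ran):

    import random

    def randomized_response_survey(true_rate, n=100_000, seed=0):
        # Simulate the coin-flip protocol described above.
        rng = random.Random(seed)
        yes_answers = 0
        for _ in range(n):
            actually_yes = rng.random() < true_rate  # the honest answer
            heads = rng.random() < 0.5               # fair coin, flipped out of view
            # Heads: answer honestly. Tails: always answer "yes".
            yes_answers += actually_yes if heads else True
        observed = yes_answers / n
        # P(answer "yes") = 0.5 * true_rate + 0.5, so invert it:
        return observed, 2 * observed - 1

    observed, estimate = randomized_response_survey(true_rate=0.15)
    print(observed, estimate)  # roughly 0.575 observed, ~0.15 estimated

Each respondent keeps deniability because any individual "yes" may just be the coin, yet the aggregate still recovers the true rate.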
Even if it wasn't the majority that felt that way, it's interesting to me that they felt "afraid to speak out for fear of appearing insensitive". The situation is so polarizing and dramatic that we can't hope to find an optimal solution.
I think it's a mistake to assume that anyone who doesn't want to fact-check/remove Trump's posts has to lean right politically. There's a lot of other reasons they can have.
What if all of Trump's worst posts were removed, so that every potential voter reading his feed comes away with a less accurate view of what he actually supports and decides to vote for him, when they would not have had his posts not been selectively filtered? Just one of many possibilities.
I think the president should be able to speak to the people, and half the people want to hear it. Those who don't want to hear it shouldn't block it from the others. It's been jiggled into such a drama, a headache over nothing. Zuck was right about transcending the feelings and going by principles.
I wouldn't say it is dangerous, more like it is obvious. I very rarely agree with everything my company does and yet I don't feel put off by that. The world of total conformity is the really scary thing; we will cease to be human by then.
The danger is that what people think and what people are willing to say are systematically diverging. If true, that means that anyone who's trying to observe what people think and make decisions based on it will consistently get things wrong.
That is kinda temporary; in normal companies mistakes bring accountability and executives get fired. In the end, you can fall into the "design by committee" trap.
The problem here might be the monopoly: it doesn't matter how bad the decision is, there will be no repercussions.
I’m at FB and the nuance between supporting BLM/racial equality and preserving free speech has been lost. If you don’t think Zuck should’ve taken down Trump’s ramblings then you’ll likely be seen quite negatively by people all across the org chart, potentially including your direct and indirect superiors.
Would people who spoke out negatively potentially receive reprisals, even if not explicitly so? I'd imagine they'd lose opportunities, have their career affected and be treated differently.
In that case, I wouldn't be so sure about your numbers because people will lie about positions if they think they'll be negatively impacted by a "wrong" answer.
This is one of the reasons why the 2016 election polls were so wrong. When polled, a certain number of Trump supporters would lie and say they supported Clinton.
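To put a toy number on how much that kind of misreporting can skew a topline figure (these values are mine, made up purely for illustration, not actual polling data):

    def observed_support(true_support, lie_rate):
        # Supporters who misreport their preference get counted for the other side.
        return true_support * (1 - lie_rate)

    # Hypothetical: 48% true support, 5% of those supporters claim the opposite.
    print(observed_support(0.48, 0.05))  # 0.456 -> the poll reads about 2.4 points low

The same logic is why anonymous or randomized surveys (like the coin-flip scheme mentioned elsewhere in this thread) are needed to get an honest read.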
Most of the folks I've spoken to don't agree with Zuck's position, which I find disappointing and unsurprising.
Facebook and to a lesser extent other big social media is in an impossible position.
In my opinion, this is right on target: "Over time, in general we tend to add more policies to restrict things more and more," he said. "If every time there's something that's controversial your instinct is, okay let's restrict a lot, then you do end up restricting a lot of things that I think will be eventually good for everyone."
I don't mean to understate or under-appreciate the damage and impact various posts can and do have. This one post by Trump will likely cause tangible harm.
There are absolutely no unambiguously good and correct decisions here. Facebook and the others are in completely uncharted waters.
This is just Facebook acting in bad faith, and glad to see many employees can see right through the charade.
Facebook cares about its bottom line only. Restricting things decreases its income. They don't want to verify ads (my dad was just scammed the other day; I have since gotten my mom off of Facebook). They don't want to lose users, and that includes nazis, white supremacists, Trump supporters, far-left radicals, etc. They don't care about hackers and foreign agents influencing elections.
If Facebook wishes to be a public forum then it needs to either become a utility or it needs to lose its profit incentive. Otherwise it is a private platform and should bear the consequence of the misinformation and hate it spreads.
Also, there is no way to atone and come clean of the sin once you've been accused of it loudly enough. The accusation will stick and can ruin your entire remaining life. It's often not worth the risk if your views are moderate and it's not a life-or-death question.
The thought police will publicly shame you in every way possible if you don't score enough "woke points" with your latest post. We've seen conservative voices exiled from SV, and I suspect moderate or simply imperfect liberal voices would be treated similarly. As a pretty far-left liberal in an extended social group of extremist liberals, I've learned / am learning to just stay off social media and say nothing (easier to do this by deleting your fb/ig/whatever). Nothing you say or do can ever please these people, and they are very quick to destroy people when they feel slighted.
I've heard this for years, the conservative victim complex truly is one of the country's greatest examples of social engineering. No evidence of any of this stuff - it's just a boogeyman.
Most jobs in the world aren't going to require their employees to represent a company that is presenting itself as the arbiter or defining authority of free speech. Hell, in most jobs it matters very little who you vote for or what you believe, even in the government. If these employees are uncomfortable with the position their employer is putting them in, they may want to reevaluate their life choices.
If there's an internal culture of fear at Facebook then why should this information be trusted? Seems like there's usually more fear when disagreeing with corporate policy, not agreeing with it.
This is an entirely unsurprising result of eliminating reasonable discussion and rendering it professionally untenable to support the president.
Welcome to the life of a conservative in tech. I'm pretty sure there are a lot of us, but I have no way of knowing for sure, because it's clear that speaking about our political beliefs can lead to negative repercussions in our career. Good luck getting a job or investment after you've publicly posted anything in line with Trump's agenda (which, btw, about 60 million people seem to at least partly agree with).
The result is that the only conservatives remaining in public view are the loony ones who can afford to speak out without risking their job, don't care about their corporate reputation, or have to post from pseudonyms. People on the left see this and conclude that everyone on the right is an extremist, further assuring them that their leftist views are "on the right side of history."
There should be a term for "liberal privilege" when you're able to post your political views under your real name.
I do not think we should make people who spread hate feel comfortable spreading that hate. The president is unequivocally spreading hate and inciting violence. He should not be able to do that when doing that is against the TOS of the site he is doing it on. Said another way, why does he get to incite violence on twitter, yet I do not?
There is a distinct difference between expressing a political view, such as that we should have the government spend less money, and inciting random citizens to commit violent acts against a perceived enemy.
> I feel like this is a misreading of the situation.
I was referring to the quote from the article "that people who agreed with [Zuckerberg's decision] were afraid to speak out for fear of appearing insensitive," so I don't think it's a misreading of the situation. On the contrary, I'm expressing my lack of surprise, given my personal experience as a conservative in tech with a 98% liberal peer group. I am unable to talk freely about my political beliefs in the same way that, say, Biden or Bernie supporters are. I would be labeled a racist simply for agreeing with any republican point of view. Whereas I routinely see friends on Facebook getting hundreds of likes on statuses that call republican politicians "vile" or "scum."
As an aside, what's particularly frustrating is how many people assume or expect that I hold the same views as them when in casual conversation. Someone will bring up Trump as if it's a foregone conclusion that he's a racist sociopath and anyone who supports him is one too, and I just have to nod along and laugh with them, or find a way to change the subject, lest I be "outed" as some sort of violent extremist for supporting the president.
> The president is unequivocally spreading hate and inciting violence
Unequivocally? I think you could find a lot of people who disagree with this assessment of his words. Who's to decide what is "unequivocally" an incitement of violence?
> There is a distinct difference between expressing a political view, such as that we should have the government spend less money, and inciting random citizens to commit violent acts against a perceived enemy.
This is a strawman, because I haven't seen Trump "inciting random citizens to commit violent acts." Regardless, of course there is a difference between political speech and inciting violence. But who is to be the arbiter of it? Because from what I've seen, certain people on the left can twist almost anything outside of their agenda to fit their definition of "violence." As a contrived example, some on the left would be "offended" by the suggestion that "a country is not a country without a strong border," equating it to "locking kids in cages." Is it violence to advocate for strong borders and criminal consequences for illegal immigration? Or is it political speech? What about when the president says it?
> Trump as if it's a foregone conclusion that he's a racist sociopath
You are totally correct. I do think this. I have ample evidence that backs up my case as well. I think the quote "when they start looting, we start shooting", which was hidden by Twitter for inciting violence, is enough for me.
> This is a strawman, because I haven't seen Trump "inciting random citizens to commit violent acts."
I don't think we can have a real conversation because we disagree about the basic facts on the ground.
Trump, from my position, is clearly, repeatedly and blatantly pushing people to violence using Twitter. In fact, even Twitter thinks so.
Umm... except that’s not what he said though. If it was, I would completely agree with you that it is an incitement (more accurately, a proclamation) of violence. But what he actually said was:
"when the looting starts, the shooting starts"
And, per Wikipedia:
"He said that he was not aware of the phrase's 'racially-charged history'. He added that he didn't know where the phrase had originated, and that his intent in using it was to say 'when there's looting, people get shot and they die.'"
So his statement is not an incitement, but a prediction. You may believe that Trump is lying about his intention, but that's a different debate.
So at worst we have someone who is intentionally inciting violence.
At best we have a leader of our country too irresponsible to do due diligence on his own posts to the entire free world. He has literally infinite resources at his disposal to communicate effectively about this.
Being a hateful bigot and/or an ignoramus should both be unacceptable for the leader of the US. Furthermore, this isn't the first time he's said or done hateful/bigoted/ignorant things or lied, so you'll excuse people if they don't give him the "benefit of the doubt".
When you have a 'bully pulpit' as powerful and far reaching as Facebook now provides to the president then it is totally reasonable to want to hold them accountable (given you disagree with their stance).
I don't exactly understand what is objectionable about protesting the decision making of corporate entities given the immense power they hold?
This is like saying you shouldn't boycott BP for their oil spill, you should just complain to your congressperson. I don't understand why you can't complain on both fronts?
> I'm pretty sure there are a lot of us, but I have no way of knowing for sure
Just created a new account just to tell you we're at least 2 conservatives in tech. Pretty sure I'll get angry and abandon this account very soon again because of the liberal bias here, and their adoration for the downvote button, but whatever, anything for a conservative colleague :)
> There should be a term for "liberal privilege" when you're able to post your political views under your real name.
They can post under their real name, doesn't mean it's not gonna cost them someday for a future job or who knows what, karma works in complex ways.
I don’t support Trump and am not even conservative but look how I got downvoted when I pointed out that someone misquoted him. Sorry, you want to claim that conservatives ignore facts and then proceed to downvote me when I literally reference Trump’s verbatim quote from Twitter to correct your misquoted version?
I almost think Trump keeps winning elections out of a sense of schadenfreude. No one likes the guy, but when you constantly get downvoted and people attempt to silence any minor disagreement from the party line, it sure makes you want to vote for him out of spite (I won’t though).
>I almost think Trump keeps winning elections out of a sense of schadenfreude.
I definitely think a lot of his appeal has to do with his ability to rile up the left and then demonstrate their ineptitude at stopping him. He also doesn't seem to have much in the way of strong principles on subjects; instead, he does pretty much exactly what he thinks his supporters will like.
His corruption is the trade-off one makes for electing a politician like him. He'll do whatever it takes to make his constituents happy, and in return they give him a blank check.
For all the lofty scientific and neutral ideals that rational/atheistic/left/liberal people chest-thump about, they should be able to judge an idea by its content and not by the person.
But since we know that no one is that scientific, just post your views under a pseudonym, unless you plan to run for political office.
The paradox of democracy is that it makes it seem reasonable to say "60 million people agree with me, so my opinion cannot be that bad", even if that opinion means ruining the lives of the other 60 million.
It reminds me of a joke by the late singer-comedian-punk activist, “Freak” Antoni: “Let’s eat shit: millions of flies just cannot be wrong!”
This said, I honestly have no answer to the current predicament. We have populist parties collecting substantial victories simply by claiming they are not allowed to say what they are saying, and then use those victories to actually restrict the speech they don’t like. It’s a bit like rappers complaining that radios won’t play them and they are discriminated against, while breaking all sorts of airtime records and flaunting their bling.
Even you - you have a government you agree with, doing stuff you agree with, but somehow... you’re an oppressed victim. How much of this is due to the vagaries of your electoral system (you are effectively in a minority, but still you rule over the majority), how much is rehashing the good old practices of ancient Christianity (persecuted for 2000 years, while ruling the world for most of those same 2000 years), how much is modern weaponization of the democratic conundrum, Weimar-style? And how much is just an excess of visibility of relatively small and intolerant fringes on all sides?
I don’t know, and I honestly cannot see a way to make you happy and progress society at the same time.
I'm against Antifa and am afraid to admit this in PDX, for my own safety. Left wing politics is becoming just as fascist as right wing politics used to be.
Zuckerberg has a history of this type of behavior; in the past he sucked up to Xi Jinping by asking him to name his child (wtf...). He obviously doesn't care about politics nearly as much as he does about courting influence and money from politicians in power. The embarrassing part is his attempts to justify it without outright saying "Look, we have to keep Republicans happy because they have the power to hurt Facebook or stop giving us money, and I'm not OK with that."
Dorsey and even Bezos are more principled than he is.
If Facebook and Zuckerberg did not personally profit from this acidic and destructive kind of free speech, they would fix it pronto; it's as simple as that, and it seems fruitless to me to spend a lot of words counter-arguing them without considering this. Karl Popper would be going nuts by now were he alive today, I'm afraid.
Unfortunately, I do not think it is a coincidence that the choices Facebook makes are the ones that allow the most content — the fuel for the Facebook engine — to remain in the system. I do not think it is a coincidence that Facebook’s choices align with the ones that require the least amount of resources, and the choices that outsource important aspects to third parties. I do not think it is a coincidence that Facebook’s choices appease those in power who have made misinformation, blatant racism, and inciting violence part of their platform. Facebook says, and may even believe, that it is on the side of free speech. In fact, it has put itself on the side of profit and cowardice.
This seems to me like the obvious answer. Zuckerberg has no problem censoring nudity, because nudity threatens his advertising revenue, so there it raises no concerns and he doesn't have to pretend to care about "free speech". They have a lot of "community standards" they don't mind enforcing with censorship ... but everything those standards ban threatens advertising dollars.
Kara Swisher has said it again and again. The best thing for a company to do business-wise is to ban Trump off their platform after the events of the last week.
I’m twisting her words because she heavily disagrees with Facebook, and I want to respect that, but this is a non-argument if you actually take a business perspective on it.
I've been looking for it as well and haven't been able to find it. I wouldn't be surprised if journalists are keeping it close because FB puts a watermark in streams to catch leakers, so it's not being made available in order to keep the leaker anonymous.
I like very much the approach the author took to writing this piece. He does a great job of putting the situation in context (ie: his "the optimists and the pessimists" positioning).
I don't think the policy will ever be black and white like that in terms of free speech. If that is the case no social media company will have any terms and conditions on their site, other than don't spam us. All of them will have the same free speech limitations that the country has.
For example, would Facebook be OK with it if the next Democratic president says "we will stone Carl if this post gets 10k upvotes"? Lines are always drawn, and they are always in the sand. It's just that FB decided this is where they want to draw that line. Everyone will have their own opinion on whether it's good or not.
Megacorp gonna megacorp. There are 655 billion reasons why Facebook will continue to act in the interests of its sole reason for existence as a money-making enterprise, as it has for a decade. All the hemming and hawing and hand-wringing and agonizing they portray...ignore it. Just watch the actions. Actions speak louder than words, Mark.
I like how o.p. edited the title of the original article from ‘Nine things we learned from leaked audio of Mark Zuckerberg facing his employees’. Good move. The ‘x thing you never knew’ format has become the hollow candy of internet-acquired knowledge.
"Free speech" is a very nuanced term. You can't shout "fire" in a crowd unless there's a fire; and you can't force me to distribute anything.
Traditional media (TV, Newspapers, Radio, Publishers) always use discretion about what they print and broadcast. A TV station deciding not to broadcast something, or a publisher declining to publish something, is not censorship.
IMO, Social Media is not like a generic web hosting company. They can use a lot more discretion in what they publish, how they publish it, who they show it to, and what context they put it in. Quite honestly, I believe Facebook needs to be more proactive in curtailing misinformation. Calling a lack of discretion over what they promote in feeds "free speech" is a ruse to distract people from their poor credibility.
Nothing's blocking Trump from creating his own website where he can say whatever he wants.
I remember seeing that "yelling fire in a crowded theater" is technically untrue as it is usually presented. Can anyone with more knowledge elaborate?
Basically, the analogy of shouting "fire" in a theater was meant to illustrate the "clear and present danger" doctrine - if an act of speech could lead to clear and present danger which there were laws against, then the speech was not protected. What's false about this analogy is that the "clear and present danger" standard isn't the standard that speech is measured against today. Instead, speech is restricted according to the "imminent lawless action" doctrine, by which speech that could lead directly to people breaking the law is not protected. This was decided in Brandenburg v. Ohio[2], and the central tenet of the ruling is:
"Advocacy of force or criminal activity does not receive First Amendment protections if (1) the advocacy is directed to inciting or producing imminent lawless action, and (2) is likely to incite or produce such action."
Thus, standing on a street corner and praising the merits of rioting and damaging property might be protected speech if it was very unlikely for anyone to take those actions, but if you did it during a restless protest march the protections of free speech may not apply.
It's worth noting that the phrase "shouting fire in a crowded theater" comes from a Supreme Court case in which the government imprisoned socialists for handing out flyers urging people not to submit to the draft during WWI.[1]
It's one of the most infamous decisions in the history of the Supreme Court, at one of the darkest hours in the history of the First Amendment. I think most people would be surprised to learn how far the Wilson administration went to suppress free speech during WWI. I had no idea until I read "The Great Influenza" (about the 1918 pandemic) recently.
As an aside: the Espionage Act comes from that time, and is the law that was used to jail opponents of the draft back then. It's also the law being used to charge Assange today.
- It was a non-binding side comment (dicta), not particularly grounded in existing law, in the decision in which it was presented.
- It was presented in a decision that has since been overturned in effect, and which is notorious for allowing government prosecution of core political speech "because socialism".
The free-for-all of social media censorship won't last very long. It has become mainstream to hate the internet and push for the end of platforms, effectively turning Twitter/Facebook/Google into publishers. By doing that they automatically lose the ability to claim to be platforms.
As a publisher it is much harder to guarantee that things said on the platform won't end up in the arrest of your employees. Zuckerberg is trying to find a middle ground where he does not push an agenda but at the same time can keep his advertisers by removing things based on a clear policy (showing violence, pornography, etc.).
Don't forget that the media as a whole does NOT have that protection: if a company publishes defamation, it will be severely punished. We should think really hard before asking social media companies to breach that vote of confidence; there is a real chance Twitter will be the first giant to fall, regardless of whether Biden wins the election.
The idea that there is any law saying "platforms" can't censor speech or else they become publishers and take full responsibility for content is completely false, and has no basis in law whatsoever.
Go actually READ Section 230, not what people repeat about it.
It is becoming easier and easier to argue that these platforms are actually publishers. They are all making more editorial decisions on what posts to show and hide, where to show what, and they all have rules about what content is allowed and what's not. Those sound like the activities of a publisher to me.
This is a good take and one I also think is likely. Maybe Zuckerberg saw what happened to Google.
Google essentially bet the farm on a Clinton win (Schmidt was directly helping the campaign). Fast forward to today, Google is under dozens of anti-trust investigations and has been iced out of all Washington influence.
Facebook thinks of itself as a state and fundamentally misunderstands its relationship to the traditional "big L" liberal democracies.
The first amendment[0] specifically restrains Congress, the law-making body, from abridging free speech. This is because, without the force of government behind it, the concept of censorship is effectively meaningless.
In a constitutional sense, it is impossible for any actor other than the government to violate a person's first amendment rights.
Facebook, as a corporation, is not capable of violating first amendment rights. Even Section 230 of the Communications Decency Act [1] explicitly enables Facebook to moderate as it sees fit. Facebook could decide that any post that contains the word "avocado" would be deleted on the spot. Or they could decide that only political posts that support Vermin Supreme [2] are allowed.
This would be completely legal.
The same way you are allowed to delete spam and comments by Nazis on your personal blog. You are also not liable if someone posts a libelous comment on your blog, just like Facebook isn't liable.
Similarly, if you decide that you are a-ok with rabid neo-nazis commenting on your personal blog, well, then, sure, that's legal, but it also says a hell of a lot about you and your personal beliefs. There's a word for people that sympathize with nazis...
You could run the smallest blog or the largest, world spanning social network and delete Nazi content legally in the US.
So, when Mark Zuckerberg says he is defending free speech, he is using misdirection. He is fully aware that the legal, constitutional term "free speech" doesn't apply to Facebook.
He wants you to avoid thinking that the people who run Facebook are morally responsible for the legal, Section 230-protected moderation that they choose to engage in, the same way you would be morally responsible for actively allowing Nazis to comment on your blog.
It's so simple, a six panel stick figure comic can explain it. [3]
Zuck just doesn't want to show Trump the door. Why? Because he makes a lot of money from sowing division and hate. [4]
As Barry Schnitt, former spokesperson for Facebook, put it [5]:
"Unfortunately, I do not think it is a coincidence that the choices Facebook makes are the ones that allow the most content — the fuel for the Facebook engine — to remain in the system. I do not think it is a coincidence that Facebook’s choices align with the ones that require the least amount of resources, and the choices that outsource important aspects to third parties. I do not think it is a coincidence that Facebook’s choices appease those in power who have made misinformation, blatant racism, and inciting violence part of their platform. Facebook says, and may even believe, that it is on the side of free speech. In fact, it has put itself on the side of profit and cowardice."
It is clear that the ruling class wants widespread censorship of the internet to happen; this is why they spend millions on hiring shills to support such causes. I haven't seen a single person in real life who supports censorship of ideas.
How is this even a question? Fictional narratives and real-life actions are incomparable.
> Should we ban tweets linking to bail funds for arrested protesters since that's "glorifying" breaking the law?
If they were encouraging violence, yes. They weren't encouraging violence, they were encouraging civil disobedience.
Destruction of property and assault on individuals are incomparable.
> Should we ban commercials for fast food and high sugar products since they glorify unhealthy lifestyles and are a "public health risk"?
Well, we've banned commercials for cigarettes. Lots of countries ban commercials for alcohol. But I would assume you would be against this too, and here I will actually agree with you.
This one is a spectrum. If an adult human being wants to make the conscious informed choice to assault their own body (with drugs, or alcohol, or sugar), we should let them.
What we shouldn't do is target commercials at children that are not fully formed enough to make their own decisions. Which is what most of restrictions about fast food and high sugar products are focused on.
As I say elsewhere, I could just as easily use your empty standard to ban tweets saying "A riot is the language of the unheard." This quote sympathetically explains rioting as a consequence of oppression, and it's a quote I largely agree with. And yet people as easily triggered as you may seek to have MLK banned, since the quote serves as justification/apologetics/glorification of rioting, a definitionally violent and illegal act.
And if you want to play these games about particulars, replace war movie with war documentary. Are you satisfied now? Shall we ban war documentaries which may glorify violence?
Documentaries by definition should be DOCUMENTING reality, not glorifying it.
War documentaries that glorify violence are propaganda.
We shouldn't ban them, but we should contain them the way we contain all propaganda: study its means to an end, and contain it.
I'm pro rioting. I don't know why you think I'm triggered. Rioting isn't promoting violence. There is also a question of instigation.
Person A: Punches person B in the face
Person B: Punches person A back in return.
These actions are not equivalent. A instigated. B defended. A is being violent. B is not.
Person B: Destroys person A's property
Person A: Punches person B in the face
These actions are not equivalent. Person B did not perpetrate violence. They instigated, but A did not defend; A escalated from property damage to personal harm. B is not being violent. A is.
So you’re just trolling, is that right? You’re claiming that you’re deeply upset about a vague Trump tweet and need it to be banned to protect your innocent mind, but you’re openly advocating literal random violence. Ok, comrade.
Great, so I'll come and burn your house down. I've got nothing to fear, since you obviously won't try to physically stop me. That'd be bad. And you're not bad, are you?
A younger me would be right there with you arguing these hard-line principles. An older me has seen 20 years of absolute madness spreading across people's minds and an outbreak of sickness and obesity. An even older me fears this whole thing is coming down because that old assumption that most people are like me--smart! educated! harmless! peace-loving! egalitarian! color-blind!--is flat-out false. Human society is infested with psychopaths who are given free rein to do whatever they want as long as it flies under the radar of whoever is making and enforcing the rules. And some of these psychopaths are busy building tribes by feeding them absolute bullshit and raising angry mobs.
Does this mean censorship and the big ban hammer for everything we don't like? You're making that leap.
As for myopic, let's turn that back around. How myopic has it been that we haven't been serious about addressing the hate and violence in our societies and have let it gestate, fester, even mutate into an uncontrollable angry mob? How are them free speech principles tasting when 40% of the population is so far gone they'll never be sobering up for a nice quiet little chat about intellectual issues? Have you ever had an in-depth discussion with someone who doesn't believe in evolution, who thinks the Earth is probably 6000 years old, and who will flat out tell you that they cannot be convinced otherwise because of their religion? That's some weapons-grade irrationality, right there. You should think hard about what to do with a society filled with people so effectively programmed, should it ever come to the point where people are programmed with a more virulent ROM.
Society is sick right now. Very, very sick. Enjoy your free speech principles while they last, because it won't be your fellow HNers taking that right away, it's gonna be the angry mob who doesn't actually give a shit. You fundamentally do not want to see what is going on. The fascists are feeding their base crazy pills right in front of your very eyes. Bleach, for fuck's sake. Hydroxychloroquine, NO COLLUSION. 19,000 lies. LOCK HER UP! And that mob is. pissed. off. Everyone. is. pissed. off. The president is the leader of a mob. Let that sink in. The President has the write lock on a big section of the voting population and is feeding them utter insanity and is now vowing "total domination" through police violence. We are in deep shit this very moment.
-Facebook has a culture that is extremely biased toward the political left. The left has a (recent?) trend of wanting to censor speech they don't agree with.
-America (and the world) has a lot more viewpoint diversity.
Zuckerberg is stuck with trying to keep his workers happy while also not wanting to alienate large portions of users, customers, politicians, etc.
In my opinion, he has chosen the path correctly and only those deep in an ideological bubble would think otherwise.
If you disagree with this, think to yourself - would you want Trump supporters deciding what you're able to post on Facebook?
I want to respectfully disagree with you. The kernel of your argument here is that the speech in question is a matter of disagreement. It's important to define what kind of speech we're talking about here, because everyone might mean something different.
To be clear, we are talking about verifiably false speech. Verifiably false speech that is sufficiently amplified can have serious consequences. This is unequivocally NOT a matter of differing opinions. And, yes, I understand that there is a huge gray area as to what is "verifiably false", but it's still a standard that we should strive for. Misinformation is a serious problem that is arguably responsible for tens of thousands dying in the US from COVID just in the past few months. This is not a game.
I can't say for sure if you're framing the question as partisan bickering in good faith, but if you are I want you to consider how flat out lies and half truths can have extremely dangerous effects. All sources of misinformation are dangerous, but it is undeniable that one "side" traffics in misinformation in much, much higher volume (at least an order of magnitude). Anyways, the point is the problem is not simple and anyone claiming such is oversimplifying at best and being disingenuous at worst.
Since social media companies can't be trusted to make that decision, I'm more worried that what they censor will turn out to be true speech. Why? Because if it were false, they could just show the truthful information like Twitter did. Basic facts are contentious nowadays (e.g. the number of genders in humans). They're going to get it wrong, so it's just too dangerous to censor.
I suppose you would be in favor of a Minority Report type of world, where our truth machine will tell us verifiably that you will commit crime in the future, thus we should preemptively arrest. Surely it's never wrong!
I'm with you on this. In the past 150 years, pseudoscientific disciplines like eugenics and phrenology have directly led to the deaths of millions of people. Geographic determinism and 19th-century racial "science" directly led to the subjugation, abuse, and exploitation of tens of millions of people. Just two years ago, misinformation and falsified news fueled a genocide through Facebook.
You're 100% correct: this is not a game. We have frameworks for verifying truth, in science and journalism, and that truth can save lives.
Brought to you by the Ministry of Truth. If you don't see our stamp on it, it's not true!
> All sources of misinformation are dangerous, but it is undeniable that one "side" traffics in misinformation in much, much higher volume
I'll blow your mind, but from a conservative perspective, it's Dems lying constantly. Both sides can pepper truth in with lies. The best solution is to think for yourself.
Being hyperbolic and screaming it'll save lives doesn't help win anyone over either, you'll have to try harder to trick people out of their freedoms.
> If you disagree with this, think to yourself - would you want Trump supporters deciding what you're able to post on Facebook?
I've been thinking about this from a what-if perspective. In this scenario, you could imagine the staff wanting to block all LGBT content, especially this month, as that would be "dangerous to the standard family values".
This bizarro FB would be upset that Bernie was spouting socialism and would point to historic examples of socialist dictators as a reason to ban him.
This is Zuckerberg posturing to run for president in the medium-term future. He probably expected it to be leaked and is setting up his decision reasoning to make it seem like he deals with "presidential" topics and has to assess national civil unrest.
It’s such a complete joke. Probably will work though.
I sense that Mark may be using his employees to play good cop bad cop in front of the media. But in reality, they all unanimously want censorship. Censorship would give Facebook a huge amount of power. Why would Mark not want that?
I've seen videos of looting (as opposed to protesting) in which people say out loud that they are being coordinated on social media. In particular, one looter's sister was murdered by another looter; she was crying on camera, "This ain't some Facebook shit," indicating she had been coordinated to show up via FB.
Once the general population of old people figures out that roving groups of young people are using "The Facebook" to coordinate their plans to set the cities on fire, social media is dead via regulation. So far that knowledge has escaped mainstream understanding. But it's only a matter of time.
Once the population ... figures out that ... people are using phones/fax machines/drums/smoke/whatever to coordinate their plans to set the cities on fire, they are dead via regulation.
You're underestimating the intelligence and experience of "old people." They've been through eras like this before, they know what protest looks like. And they already know that the looting (and much of the protest too) is pre-coordinated and planned by outsiders, primarily via social media.
There's a difference between being technically aware of something, and having the mental framework to take that awareness and turn it into a call to action. It's the news that tells people how to think about things and how to act about them. Corporate news is being slaughtered by social media. You've seen them attack social media already with stories of how pedophiles use it to get your kids and stuff. I don't think it will take long for the corporate news to figure out they can weaponize this to encourage regulating their enemy industry.
Though I could also say that's an overestimation of people's ability to understand that looting is pre-coordinated and planned by outsiders.
Local news broadcasts play a big part in this, because the only thing my parents will be doing during protests is watching what those broadcasts provide. Guess what content they provide :/