
As someone who does deeply believe in and care about free speech (from all parties), I could not be in deeper agreement with you. (for non-native English speakers: I fully agree).

Parler should not have been de-platformed. That was a clearly partisan act, and the truth would have come out shortly about the company's total lack of care for its users. Say what you will about Gab, but they never pulled any of those stunts. Looking forward to whatever the new solution is.



Parler was never a "free speech" platform. That was always a lie; they heavily moderated their content. I've always felt that Parler was just a lazy attempt to make some cash by pulling users away from Twitter with the claim that they could do whatever they wanted. It was never more or less than that.

I'm interested in knowing how you think a real "free speech" platform can actually work, however. We have message boards that do this and they are just toxic cesspools. The idea of an online "public square" that isn't that sounds impossible. How many public squares are reaching millions of people instantly?


Whenever the discussion about Parler being a “free speech” platform comes up, I feel compelled to point out that they banned the DevinNunesCow parody account.

Parler was more than happy to moderate speech, even political speech, as is their right as the platform creator. Their breathless claims about being free speech absolutists are absolute nonsense.


I think Parler's angle was to become the home for popular conservative commentators. Parler continually suggested Hannity, Levin, D'Souza, etc as accounts to follow, even if you blocked those accounts.


> I'm interested in knowing how you think a real "free speech" platform can actually work, however. We have message boards that do this and they are just toxic cesspools. The idea of an online "public square" that isn't that sounds impossible. How many public squares are reaching millions of people instantly?

I'd like to know the answer to this, too. I wonder if the reason why they turn into toxic cesspools is precisely because the only people who use free speech platforms are the people who were kicked off the others.

If you accept that this is plausible, then is it feasible that the more reasonable folks who just want to talk about politics in a less divisive manner (or maybe not about politics at all!) might help bring down the temperature if everyone were swimming in the same pool, instead of a few extreme viewpoints being forced to move into the same swamp? (NIMBY!)

This would also be in keeping with that classically liberal axiom, "The remedy is more speech, not enforced silence." (Supreme Court Justice Louis Brandeis, who co-authored the 1890 article credited with establishing the "Right to Privacy": https://en.wikipedia.org/wiki/Louis_Brandeis)


> I wonder if the reason why they turn into toxic cesspools is precisely because the only people who use free speech platforms are the people who were kicked off the others

You don't have to wonder, we've seen this time and time again with virtually every open community. Without tireless moderation, the swamp grows.

In other words, the majority of people who use free speech platforms have already answered your question: they've shown themselves unable to co-exist with (a much larger number of) reasonable folks and were kicked out.


You're not actually answering the question.

Suppose that the dominant platforms (e.g. Facebook) are not free speech platforms. They boot off a hundred thousand people. 10% of them actually deserved it and are militant jackasses who ruin everything.

Now someone else creates a "free speech" platform. Everybody is allowed in. Well, 80% of the initial users are going to be a subset of the ones who got kicked off of the incumbent platform, and 10% of those are jackasses, so your platform is now 8% jackasses. That's a huge percentage and it's going to immediately turn into a dumpster fire because the jackasses will drive out ordinary people and become an even larger percentage. There are plenty of instances of this happening, e.g. Voat.

But suppose you go the other way. Somehow get a large number of ordinary users. Now the jackasses are only 0.5% of the users. Combine this with something like a voting system so that nothing is ever actually removed, but spam and fascism end up at the bottom of the feed where nobody sees them by accident.

That has the potential to work. The key is to somehow get enough users to dilute the jackasses before they take over, e.g. because the incumbents overreached and a large number of non-jackasses are moving in protest.
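To make the "vote, don't remove" idea concrete, here is a minimal sketch (all names hypothetical, not any real platform's code): nothing is ever deleted; posts are only ordered by community score, and anything at or below a floor is collapsed out of the default view but remains retrievable.

```python
# Sketch of a voting-based feed: posts are ranked, never removed.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: int = 0  # net votes from the community

def ranked_feed(posts, floor=-1):
    """Sort by score; posts at/below the floor are collapsed, not deleted."""
    visible = sorted((p for p in posts if p.score > floor),
                     key=lambda p: p.score, reverse=True)
    collapsed = [p for p in posts if p.score <= floor]  # still on the server
    return visible, collapsed

posts = [Post("useful comment", 12), Post("spam", -5), Post("ok take", 3)]
visible, collapsed = ranked_feed(posts)
# visible: "useful comment", "ok take"; collapsed but retrievable: "spam"
```

The design choice mirrored here is the Slashdot-style one: the community determines prioritization in a decentralized way, but no central authority renders anything unavailable.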


I did answer, because the internet started off as your proposed experiment. It didn't work. The jackasses didn't get diluted; they just got louder, circumvented, harassed, and escalated. They aren't accidentally toxic, they are actively and aggressively toxic.

Moderation didn't come before toxicity; it came in response to it. Therefore, moderation doesn't cause or focus toxicity.

If you want to address this, you need to look at education.


And moderation came _very quickly_. Usenet started seeing significant use in 1983. The first moderated Usenet group was created in 1984 (insert mandatory weak Orwell joke). And Usenet was eventually largely replaced by very heavily moderated webforums, and then by things like reddit where the popular subreddits that people actually want to use are mostly fairly heavily moderated.

It turns out that people don't, as a general rule, actually enjoy using totally unmoderated fora; they tend to quickly fill with spam and awful stuff.


> I did answer, because the internet started off as your proposed experiment. It didn't work.

It worked great for multiple decades until "social media" applied algorithms that promoted controversy (i.e. anger-inducing hyperbole and conspiracy theories) to increase "engagement" and sell more ads.

That you can find an ASCII swastika or goatse on Slashdot which is instantly downvoted to -1 (but not actually removed from the site) was never a real problem. That Facebook put QAnon at the top of your mom's feed was a major problem.

But then we get calls for censorship as a response to a problem created by bad moderation.

Notice that there is a difference between voting (where the community determines prioritization in a decentralized way but nothing is rendered unavailable) and censorship (where some fallible central authority is deciding what people are not allowed to know).


That might work. Except these platforms are about getting you to spend more time on them. Thus they threw in some AI to decide what to show people.

Turns out outraging people increases engagement so the “jackasses” get amplified.


Isn’t Reddit pretty much proof that, at least if you allow users to self-select into groups, this doesn’t work?

Arguably Facebook is too.


Isn't Reddit significantly less toxic than Twitter and Facebook?


Precisely - the CEO literally hung out on Discord servers hunting for people to ban on Parler.


I hadn’t heard this one, and can’t find anything about this - can you give some links/pointers to this story so I can learn more about this please?



Perhaps Parler wasn’t a true free speech platform, i.e., one that moderated only the legal bare minimum. But even if it’s true that it was friendly to only one end of the political spectrum, at least it was a competitor to Twitter. The elimination of the sole significant direct competitor to Twitter, even one that represents a subset of views and not all speech, is the problem. It means there’s a tech oligopoly.

I’d love to see a decentralized free speech platform. I’m no blockchain evangelist, but it would seem like such technology could be used to build a real online “public square” as you say.


Check out Scuttlebutt [0].

[0] https://scuttlebutt.nz


> Parler was never a "free speech" platform. This was always a lie, they heavily moderated their content.

Exactly. Alt-left speech was not welcome on parler - only alt-right speech.


Alt-left isn't a thing in the US; perhaps you're falling for a false dichotomy. Alt-right is a euphemism for fascism/white supremacy.


I'm not going to split hairs about the actual positions of radical leftists, because it's not really relevant to this conversation.

I will, however, point out that much of the alt-right is terrified of the antifa bogeyman... And that Parler, the platform of free speech, was quite happy to ban anything that smelled of antifa... as well as much that could be described as moderate leftism.

Its tolerance for speech only extends to a very narrow slice of the political spectrum.


Yes, but the notion of an existent, burgeoning "Alt-left" is fully attributable to folks being divorced from reality as a consequence of consuming lying media sources (especially those promoted on Parler). Also, "Antifa" isn't an organization. So classifying what white nationalists call "antifa" as boogeymen is surprisingly apt.

What gets blocked on Parler is more appropriately called "not supportive of white nationalism." That way you are sure to capture the entire space of blocked content, whether alt-right-adjacent or not, instead of narrowing the universe to an "Alt-left": things the alt-right is enculturated to fear but that don't appear in reality.


I think you mean European/UK centrists, like that nice Lib Dem Alexandria Ocasio-Cortez :-) and that One Nation Tory Mr Obama


The correct antonym to Alt-Right is "Ctrl-Left"


And anyone in the middle should be called a spacebar.


Well, a real free speech platform should apply strict, automatic moderating rules to political threads and basically tease out the best arguments from all sides.


Maybe we should move away from free speech toward pro speech. Sites that encourage a healthy discussion are what we really want. Anything that discourages that should be reduced in importance; anything that encourages it, promoted.

Patterns of negative conversation, defined by fewer quality reply posts, would get negative points, and vice versa for positive patterns, or for patterns to be encouraged.

As things evolve and how people respond changes these patterns can change.


That is not free speech. That could be called something like "quality speech".

With free speech, as defined now, low quality arguments hit the same bar as high quality ones.

Yes we should strive to emphasise high quality arguments (even though I fear this is a near impossible task programmatically) but this is not what free speech means.

Tldr: according to free speech the dummies and assholes deserve to be heard.


I think HN strives to achieve this (as you point out, very high bar) and does a pretty good job with it overall. I do believe that dang's and the moderator teams' light touch has really aided a free-ranging discussion among people of wildly divergent viewpoints, and I for one appreciate this dynamic.

That we're even having such a high-quality discussion now really speaks volumes to the seemingly effortless way in which the HN team has developed into a great place to have such a conversation or debate. To be honest, I do not think that HN is always fair to everyone, but it does seem like the moderators do try to be.

With that said: Twitter and FB have wildly missed the mark.


I think any mildly political topic disproves your point.

Technological discussions mostly stay high quality but any politics(-adjacent) topic turns into an obvious struggle of moderation and quality.

The same goes for Twitter but the balance of topics is heavily skewed towards the latter.


I mostly agree, in fact there are certain topics that I will absolutely ignore and block as experience shows there is no room for intellectual discussion about them on here.

That being said, I do agree with GP on this point:

> To be honest, I do not think that HN is always fair to everyone, but it does seem like the moderators do try to be


[flagged]


> You’re just spreading what you’ve read in liberal news about Parler, without having used the app yourself.

You have no way of knowing if this is a true statement. He never said whether he used the app. Maybe he's speaking from personal experience with the app. You should focus on debating the argument and not introduce assumption as fact.


> Because the reality is, if the users didn’t like the post or comment, they simply downvoted it until it was buried.

Exactly. And the problem was the users did like it. Lin Wood called for the VP's execution just days before a real mob stormed the Capitol chanting "Hang Mike Pence!". And it was hugely popular and shared everywhere. That's a problem that needs to be fixed, not celebrated. Major thought leaders across the platform were fanning the flames, not engaging in moderation.

Maybe it's true that Parler had a scheme for moderation. But it was objectively a complete failure.


> I'm interested in knowing how you think a real "free speech" platform can actually work, however.

Let me give this a shot. There are two types of content that need to be removed: (1) spammy content that readers themselves don't want to see, and (2) illegal content that society doesn't want anybody to see. I think these two need to be addressed separately.

Illegal content includes copyright infringement, violations of NDAs, libel, slander, perverting the course of justice (violations of court orders), incitements to violence, conspiracy to commit a crime, exposing troop movements, and in general anything that directly causes damages which could be assessed and recovered in a court of law. Racism, sexism, homophobia, "hate speech", advocacy of violence, falsehoods, trickery, promoting very dangerous ideas, lying about (or being incorrect about) vaccines, lying (or being incorrect) about who won an election, offensive speech, and jokes of all kinds are, at least in America, all forms of protected speech.

Illegal content must be taken down when a court orders it. AFAIK the platform isn't required to take anything down unless ordered by a court to do so, but I may well be ignorant here -- PLEASE don't take legal advice from some random armchair pontificator on the internet like myself. Section 230(c)(1) protects the platform, as the platform is not deemed to be the speaker.

Most people are terrible judges of whether something is illegal. For example, many congresspeople think Trump's Jan 6 speech was illegal (it wasn't) because it incited violence (it didn't). It didn't even advocate for violence. The idea that it was illegal is so far beyond Brandenburg that it would be laughable if it weren't so serious... but back to the point. Attempts to proactively take down illegal content will invariably take down some legal content, probably lots of legal content, which is why a true "ideal" free speech platform would not attempt to do this. One must have humility and recognize one's near utter inability to determine what is and is not illegal to any reasonable degree of accuracy during times of political strife like we find ourselves in today. The popular opinion right now is that all of us should enact justice upon each other according to our own personal interpretations of justice... which is just nuts when you think about it. Let's leave law enforcement up to the law enforcement professionals.

The first issue, the spammy content, would then be handled the same way we handle email spam. Plug into the spam filtering service of your choice, subscribe to rulesets, or write your own. I don't know of any platform that lets you plug in your own moderation, and so that's where I think they've all gone wrong - they either get flooded with spam or antisemites and become nearly useless to nearly everybody, or else some moderation team thinks it is their role to filter spam on behalf of the community and the community gets pissy that they filtered the wrong things. These two extremes are both wrong. Users can filter their own content if given the right tools. BTW, Section 230(d) requires providers notify their customers that parental control protections are commercially available... it puts the onus on the users to solve this for themselves... even way back when that law was written.
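As a rough illustration of "plug in your own moderation" (all names here are hypothetical; no platform I know of exposes such an API): the platform delivers everything, and each reader subscribes to whatever filter rulesets they choose, the way a mail client plugs into a spam-filtering service.

```python
# Sketch of reader-side filtering: rulesets are chosen by the user,
# not imposed by the platform. Originals always remain on the server.
def keyword_ruleset(blocked_phrases):
    """Build a rule that flags posts containing any blocked phrase."""
    def rule(post):
        return any(phrase in post.lower() for phrase in blocked_phrases)
    return rule

class ReaderFilter:
    def __init__(self):
        self.rules = []  # user-selected rulesets

    def subscribe(self, rule):
        self.rules.append(rule)

    def view(self, posts):
        # Hide posts matching any subscribed rule for THIS reader only.
        return [p for p in posts if not any(r(p) for r in self.rules)]

f = ReaderFilter()
f.subscribe(keyword_ruleset(["buy now", "crypto giveaway"]))
posts = ["interesting article", "CRYPTO giveaway!!", "buy now cheap pills"]
# f.view(posts) keeps only "interesting article" for this reader
```

A reader who subscribed to no rulesets would simply see everything, which is the point: filtering becomes a per-user choice rather than a platform-wide decree.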

I'm going to add a third issue, flooding. That has a content neutral solution: throttling.
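Throttling is content neutral because it looks only at posting rate, never at what is posted. A common way to implement it is a token bucket (a generic sketch, not any platform's actual limiter):

```python
import time

class TokenBucket:
    """Content-neutral rate limiter: every user may post at a bounded
    rate with a small burst allowance, regardless of what they post."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=3)
# A burst of 3 posts passes; the 4th is throttled until tokens refill.
results = [bucket.allow() for _ in range(4)]
```

One bucket per user (or per IP) caps flooding without anyone ever judging the content itself.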


I assume you're getting downvoted at least in part because people disagree with you (I didn't downvote FWIW), but you're also factually wrong about Section 230 protections. Section 230 doesn't give blanket immunity to publishing illegal content: it only protects against civil violations, not criminal.


> Section 230 doesn't give blanket immunity to publishing illegal content: it only protects against civil violations, not criminal.

Of course you are correct, but that doesn't make me factually wrong. You are talking about publishing illegal content. But (c)(1) "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

And I missed DMCA takedowns.


How was it a lie? You didn’t have an account there. And you’re very mistaken on the “make some cash” theory.


> Parler should not have been de-platformed

With this statement you are implying one of two things:

* You are very unhappy with AWS for de-platforming them

* You think AWS should not have been allowed to do so

If it's the first one, well, you are free to treat that however you like. Stop using AWS yourself, especially if you now fear AWS will do this to you. Don't do business with other companies that use AWS -- you can even block all AWS IPs at your firewall. Tell everyone how bad they are, and encourage others to boycott them too. If AWS continues to de-platform customers that don't seem to deserve it, this will get even easier over time.

If it's the second one though, that raises all kinds of tricky issues. If you run anything that allows user content, and a user posts something that hurts you, are you now obligated to host it (leaving aside illegal content and the tricky jurisdiction issues that go with that)? What if other major customers or users start leaving because you're hosting said content -- are you just supposed to let your site/business effectively die? Are you also forbidden from raising the price to that harmful user to $1-trillion-per-month (since that effectively is a way to "de-platform" them), and if so, what is the maximum price you're allowed to charge and who figures that out?


If AT&T started banning any traffic (emails, websites, etc.) that 'talked poorly about AT&T', it would be a breach of net neutrality, right? And totally unacceptable.

Twitter and Parler are content platforms; AWS is not.

They are an infrastructure, not a content company and there is a material difference.

Ford, GM, and Tesla collectively refusing to sell to XYZ customers for whatever reason gets out of hand very quickly.

'You can just use another service' generally doesn't work as an argument in oligarchic situations. If AWS style services were as varied as grocery stores, maybe, but they are not.


> That was a clearly partisan act

Asking a customer to stick to the rules they agreed to and eventually ending the business relationship when the customer chooses not to is hardly a 'partisan act'. Parler could have just done what AWS asked (even temporarily, to keep themselves online while looking for alternatives). They didn't.


ISIS had Twitter accounts in which they showed beheadings and other things [1], largely as a recruiting tool in order to bring young Western Men to Syria to commit atrocities.

"The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS’ unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and “lone wolf” attacks." [2]

Apparently US authorities were happy for Twitter to keep those accounts alive for some time, as they could use them to track activity. The point remains: it's wrong, to the point of naive, to believe that these decisions are not political.

Facebook was enabling genocidal activity [3].

It's incredibly hypocritical to look at the historical activities of ISIS on Twitter, Facebook and all of these platforms, and then to hear about the 'political violence' caused by Parler.

I don't like Parler, but the hypocrisy is staggering.

[1] https://en.wikipedia.org/wiki/Use_of_social_media_by_the_Isl...

[2] https://journals.plos.org/plosone/article?id=10.1371/journal...

[3] https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


You can definitely make the case that the large moderation infrastructure fielded by gigantic social media companies has been anywhere from imperfect to outright irresponsible.

The difference is, there was actual moderation. Parler, a tiny platform (two orders of magnitude smaller than Twitter), appears to have had no moderation infrastructure at all.

Even the data in your first reference seems to suggest that Twitter alone has probably moderated more accounts than Parler has actually had.


AWS also did not give Parler any time to cure their problems after making it clear that they wanted to terminate the contract for problematic content.


According to https://www.courtlistener.com/docket/29095511/13/parler-llc-... it took several months of moderation problems before AWS dropped Parler as a client.


Not only that, but Parler ran directly afoul of contract provisions they agreed to that specifically gave AWS the right to terminate without notice. People making these arguments about Parler's relationship with AWS don't seem to have actually read the AWS terms.


According to Amazon. Parler has vociferously disputed this.


You can read 100 different takedowns of Parler's arguments, but they're all mooted by the fact that Parler was self-evidently hosting material that contravened the AWS term allowing them to be cut off without notice. Parler has no case here.

Broken record: I'm still surprised so many people here have so little exposure to the hosting business to believe that there's actually an inviolate right to due process in keeping your stuff hosted somewhere. Hosting providers have to kick users off all the time. It is a basic fact of life in hosting: people abuse stuff.

Parler wasn't running a phishing scam, spamming, or DDoS'ing people, but that's not my point; my point is: if you're a hosting provider, your contracts basically have to give you the right to boot people at your discretion. The idea that anyone thought AWS's might not? Baffling.


Parler isn't disputing that abusive content was posted. Their claim is for breach of contract, antitrust violations, and tortious interference.

The following is an excerpt from https://www.courtlistener.com/docket/29095511/34/parler-llc-... the order denying the injunction.

"AWS has submitted to the Court multiple representative examples, reflecting content posted on Parler during this period, that AWS claims violated the terms of the AUP and the parties’ Agreement. ... Parler has not denied that these posts are abusive or that they violate the Acceptable Use Policy. "


> Parler should not have been de-platformed

You obviously did not see the material that the company allowed to be posted on their services in contravention to their hosting provider's terms of services.

Always abide by your host's AUP. Otherwise, you can get yeeted.


> You obviously did not see the material that the company allowed to be posted on their services in contravention to their hosting provider's terms of services. Always abide by your host's AUP.

AWS AUP: "You may not use.. or instruct others to use, the Services.. for any.. offensive use"

"Offensive"? That word is so vague as to be meaningless.

This turns into a dangerously simple attack. Just post something you know will offend someone. Et voilà: you just killed a company.

It's a lot cheaper and easier (and less likely to involve swat teams) than hiring some sketchy DDoS outfit.


AWS repeatedly provided evidence of violent content that violated its policies and asked Parler to step up moderation, which didn't happen.

AWS arguments:

https://twitter.com/KYWeise/status/1349200096345296897?s=20

> Parler itself has admitted it has a backlog of 26,000 reports of content that violates its (minimal) community standards that it had not yet reviewed

https://twitter.com/KYWeise/status/1349203942614335488?s=20

> When I said Amazon cited many vile and violent examples of posts that it flagged for Parler, I meant it. https://pbs.twimg.com/media/ErlVjN7U0AEr1gt?format=jpg&name=... https://pbs.twimg.com/media/ErlVjPBVEAEs9sg?format=jpg&name=...

https://twitter.com/KYWeise/status/1349202898459115520?s=20

> Such a requirement also poses a risk to Amazon itself, with posts calling for others to "burn down Amazon delivery trucks" until they "reverse course."


I wonder whose side the FSF would take.

Amazon shouldn't be making these decisions, and certainly not without giving the other party ample time to make other arrangements. This should have been litigated in court prior to taking deplatforming actions.

Imagine if Amazon was a Chinese company and Parler hosted information about the plight of the Uyghur people.

Community websites have the right to enforce rules. I'm not so sure common carriers do, and AWS is a common carrier. A good rule of thumb for American hosting companies is if the FBI won't seize the website for the content hosted therein, the website shouldn't be removed.

For context, I'm a liberal and dislike the content on Parler. Defending free speech is more important than zapping the republicans. Imagine if the tables were flipped and talking about abortion was what conservatives wanted to ban.


AWS is not a common carrier. They are a hosting company. There are hundreds (thousands?) of hosting companies to choose from and you are not even required to use one to host a website on the Internet. You are free to host your website yourself on your own hardware.


> Amazon shouldn't be making these decisions, and certainly not without giving the other party ample time to make other arrangements

Amazon is still working with Parler to export their data off of AWS.

https://www.buzzfeednews.com/article/johnpaczkowski/amazon-p...

> We will ensure that all of your data is preserved for you to migrate to your own servers, and will work with you as best as we can to help your migration


Taking the services entirely offline and then giving them a few weeks to access archived data does not, at least in my opinion, satisfy the definition of:

> not without giving the other party ample time to make other arrangements

Going entirely offline like that can easily destroy a web business. Or at a minimum seriously harm them.

Not to say I care much for Parler's technical or administrative approach to providing free speech.


Parler ignored repeated notices of TOS violations from Amazon, were given a chance to address their frequent TOS violations and incomplete enforcement of TOS violation notices, and came back with a very weak plan while still ignoring specific TOS violations AWS had notified them of.

Parler had ample time. They ignored warnings, they did not address their TOS violations. Then their contract was terminated, and AWS is helping them migrate after the fact. Parler displayed an ignorance of the law and how web hosting works.

Web companies that repeatedly violate the TOS of their hosts deserve to be at the mercy of their failover plans. That responsibility is borne by the web company, not the host.


Why aren't you hosting them?


Because they're not Amazon.


I'd imagine that the FSF would say Parler should be using free software for their hosting, so that they can switch to local hosting if they want.

Their position is usually to not rely on non-free software.


Parler was, by the by. The issue is that their hardware requirements were bonkers, so finding a new host capable of giving them a few hundred servers and dedicated 10G internal network capacity wasn’t trivial.


AWS is not a common carrier


At their size, they sure seem like it.

What happens when AWS and Azure design their own silicon, outstrip other hosting providers, and then become the only hosts on the internet?

Are they common carriers then?


Common carriers are regulated entities like shipping companies and telecoms. You get to be one through a huge amount of bureaucracy, negotiation with regulators, etc. It protects you from a lot of liability but also creates a lot of constraints on what you can do, what you can charge, etc. Either way, you don't just wake up one morning and discover that your company has become one.

https://en.wikipedia.org/wiki/Common_carrier


This is a valid thought experiment to consider.

I'm old enough that racking my own boxes and hosting everything myself is both natural and easy (and cheaper and faster).

But working with plenty of young developers, most don't have any exposure or interest in setting up anything other than AWS services tied together.

So like it or not (I don't), we're rushing towards a world where AWS (maybe GCP, mostly not) is the only way to build something.

It's good to consider the implications of this.


That’s not true though.

By volume, GoDaddy is actually a bigger host than AWS by a factor of 2.

The HN crowd vastly overestimates how big AWS is compared to less sexy options.


The difference is that AWS hosts big clients and makes big bucks in return. Godaddy's hosting had 300m in revenue[0], while AWS did nearly 13 billion[1].

0: https://www.zdnet.com/article/godaddy-q3-revenue-beats-expec....

1: https://www.cnbc.com/2021/02/02/aws-earnings-q4-2020.html#Bo....


It's definitely true that AWS customers spend more than Godaddy customers, for sure. But I still reject the idea that you have to use AWS (or will in the near future). There are literally hundreds of hosting providers out there, thousands if you include places that will run your own hardware. In my own career only half of my jobs have depended on AWS in any capacity; the rest used other hosting providers or self hosted.

In a very real way, a founder today has more options for hosting than they do for catered lunch.


Size doesn't have anything to do with it. No matter how big AWS and Azure get or what hardware they use, you can host a website out of a computer in your closet or the phone in your hand. No one needs them to operate, it's just convenient & cost-effective to do so.


Nah, anyone can set up a host.


What does "outstrip" even mean? Physically ripping the silicon out from the servers in your garage?


It means overtake or outcompete. In this case they’re saying that the big cloud hosting providers could end up in a monopoly like position that smaller companies couldn’t compete with if they developed their own superior server tech.


Don't AWS, GCP, and Azure have the majority of the market already? What would them having their own silicon change?


Oh sorry, I’m not defending the point just explaining what outstrip meant in context of it.


> Just post something you know will offend someone. Et voilà: you just killed a company.

Except that's not how it works in the slightest. You have moderation in place to work to prevent that. You are doing your due diligence. You are making an honest effort to solve this problem. Mistakes will happen. Nothing is perfect, but you are making an honest attempt, and it's reasonably effective.

Parler was not doing anything close to that.

Your comment operates under the belief that these things are binary. Simply post one thing someone can find offensive, and "I win!"

But most people, most successful companies, are smarter than that and operate in reality.


When Parler reviewed the AWS terms, the vagueness of "offensiveness" would have been a good reason for them (of all companies) not to select AWS as a hosting platform. Anyone who pays attention to this space could have warned Parler; similar things have happened to other sites, like Gab, on AWS. They weren't diligent, they selected a provider incompatible with their business goals, and they suffered the consequences.


>"Offensive"? That word is so vague as to be meaningless.

>This turns into a dangerously simple attack. Just post something you know will offend someone. Et voilà: you just killed a company.

What's the probability at any given time that someone on HN is offended? YCombinator seems to use Amazon's Route 53 and would therefore be at risk.

  Domain Name: YCOMBINATOR.COM
  Registry Domain ID: 147225527_DOMAIN_COM-VRSN
  Registrar WHOIS Server: whois.gandi.net
  Registrar URL: http://www.gandi.net
  Updated Date: 2020-02-15T20:11:05Z
  Creation Date: 2005-03-20T23:51:07Z
  Registry Expiry Date: 2021-03-20T22:51:07Z
  Registrar: Gandi SAS
  Registrar IANA ID: 81
  Registrar Abuse Contact Email: abuse@support.gandi.net
  Registrar Abuse Contact Phone: +33.170377661
  Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
  Name Server: NS-1411.AWSDNS-48.ORG
  Name Server: NS-1914.AWSDNS-47.CO.UK
  Name Server: NS-225.AWSDNS-28.COM
  Name Server: NS-556.AWSDNS-05.NET
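The nameservers in the record above are how one infers the DNS host: Route 53 nameservers follow the `ns-<n>.awsdns-<n>.<tld>` pattern. A minimal sketch of that inference (the `guess_dns_provider` helper and its provider patterns are illustrative, not any real API):

```python
import re

# Hypothetical helper: infer the DNS provider from WHOIS "Name Server" entries.
# Route 53 nameservers follow the pattern ns-<n>.awsdns-<n>.<tld>.
def guess_dns_provider(nameservers):
    patterns = {
        "AWS Route 53": re.compile(r"^ns-\d+\.awsdns-\d+\.", re.IGNORECASE),
        "Cloudflare":   re.compile(r"\.ns\.cloudflare\.com$", re.IGNORECASE),
    }
    for provider, pattern in patterns.items():
        # Only claim a provider if every nameserver matches its pattern.
        if all(pattern.search(ns) for ns in nameservers):
            return provider
    return "unknown"

# Nameservers copied from the WHOIS record above:
yc_ns = [
    "NS-1411.AWSDNS-48.ORG",
    "NS-1914.AWSDNS-47.CO.UK",
    "NS-225.AWSDNS-28.COM",
    "NS-556.AWSDNS-05.NET",
]
print(guess_dns_provider(yc_ns))  # -> AWS Route 53
```

Note this only shows who hosts the DNS zone; the site itself could still be served from anywhere.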


>"Always abide by your host's AUP." These basically say things like "your users may not use the services to offend others." That's literally impossible to abide by. In fact, it appears to have been a company-ending event in this case, which makes it a devilishly simple attack: post something you know will get everyone worked up, et voilà: you just killed a company.

The way you're wording this sounds very much like you haven't actually seen the kind of stuff that was going on there. It's not simply a matter of people being offended.


The problem with Parler is not that the users didn't abide by the policy of "your users may not use the services to offend others." The problem was that the administration side refused to attempt to enforce any semblance of rules on the platform.


I'm not sure what you mean by that. I read the AWS response. Nothing there seemed to be much worse than I've seen on Twitter (actually, Twitter stuff was far worse).

Regardless, this is beside the point: all a competitor has to do is post some garbage and point to it, and boom, you're dead. That's not ok.


>Regardless, this is beside the point: all a competitor has to do is post some garbage and point to it, and boom, you're dead. That's not ok.

In this case, they pointed to it weeks prior and it wasn't removed.


Amazon claims this, and Parler disputes that this was brought to their attention. (it's not easy to ramp up moderation overnight -- and FB/Twitter have consistently had far worse, even with far larger moderation teams.)


Removing the specific examples included in the complaint can't be that hard though. One person should be able to do it in a day or two


I agree, which makes Parler's dispute over Amazon's claimed timeline far more plausible to me.


Doing just easy things isn't much of a business model.


How many years did Twitter exist before hiring big teams of moderators?


> Regardless, this is beside the point: all a competitor has to do is post some garbage and point to it, and boom, you're dead. That's not ok.

That is not what happened, however. That is manipulative framing spread by Parler.

What happened was that Amazon itself identified those few dozen posts and asked Parler to remove them. Parler did not do that and claimed it was difficult for them. Amazon is not Parler's competitor. Amazon gave Parler plenty of time to remove those posts. The issue was that Parler was unwilling to do so, because their whole point was to be a safe place for exactly that sort of comment.

Twitter's moderation is not great, but they do not refuse to take down inciting posts on principle. They have regularly taken down accounts and tweets in the past. There is a difference between not doing it perfectly and refusing to even try.


This may be true. It wouldn't surprise me if Parler said "no" when asked to take them down, but I didn't see that even in Amazon's court filing.

However, I don't think Amazon should interfere in customers' businesses and harm the relationship between customers and third parties (the users). What if Amazon had said, "Netflix, we find the Cuties movie offensive. We have shut you off and will delete all of your data within 24 hours."


> didn't see that even in Amazon's court filing.

That is exactly what is in Amazon's court response. You either did not read it or are lying.


> Nothing there seemed to be much worse than I've seen on Twitter (actually, Twitter stuff was far worse).

https://i.redd.it/om90nwadqca61.png is a Parler post with a racial slur and 25k upvotes. Twitter has its issues but an equivalent comment isn't going to survive two days.

The court documents from their case against Amazon are available at https://www.courtlistener.com/docket/29095511/parler-llc-v-a.... Amazon's declarations contain other examples of content that wouldn't survive on Twitter.

> Regardless, this is beside the point: all a competitor has to do is post some garbage and point to it, and boom, you're dead. That's not ok.

According to https://www.courtlistener.com/docket/29095511/13/parler-llc-... it took several months of moderation problems before AWS dropped Parler as a client. If Parler had done more active moderation and tried harder to keep Amazon happy they'd still be hosted on AWS.


Yeah, that's a disgusting comment and it's far worse that it received upvotes if that screenshot is accurate. Nevertheless, it is legal free speech, even if it makes me very angry.


Legal free speech does not mean that a private company is obligated to host it.


> the truth would have come out shortly about the company's total lack of care for its users

The truth had come out, hadn't it? Even before the 2020 election, people had been mocking the network for arbitrary post removal and the requirement to show official IDs for at least some features.


If you stand for freedom of speech then you must stand for freedom of association; as "I stand with you" is a declaration of support. Speech isn't only vocal, it is all communication.

Services that chose not to host Parler were exercising their freedom of association; they were _communicating_ that they did not wish to support Parler.


Sorry, but you don’t just get to incubate insurrection without facing societal consequences.

You have free speech, but zero freedom from consequence.


>You have free speech, but zero freedom from consequence.

That's about the most Orwellian meme that is making the rounds and needs to be examined critically. Freedom to do a thing implies protection from consequence, or you're not free.

Also there were articles on HN about how the Capitol riots were organized on Facebook.


> Freedom to do a thing implies protection from consequence, or you're not free.

This is a wild take on freedom. Do you seriously believe that free speech means that you should be able to say anything and be protected from any and all consequences?


I'm not sure what "any and all" means, but it certainly protects you physically and that implies that you get protection from criminal violence and harassment, and certainly should. No?

It's so vague, these one-liners, that we're talking past one another. Surely you don't think you should be able to doxx or punch someone in the face that has extremist views?


> Surely you don't think you should be able to doxx or punch someone in the face that has extremist views?

How could you possibly believe this in good faith? Consequences can also be, for example, people expressing their distaste for your speech and choosing to boycott your products if you're a business.

You said protection from consequences, not protection from criminal violence and harassment. If you meant protection from criminal violence and harassment, you should have said protection from criminal violence and harassment.


You honestly thought that people who raise their eyebrows at the phrase 'consequences to speech' advocate preventing boycotts or forcing people to continue buying products?

How about the real-world consequences people could potentially be referring to: harassment, doxxing, job losses and violence. I've met enough 'punch a Nazi in the face' people to know that many believe violence is an acceptable consequence.


Again, if you wanted to convey a specific meaning of consequences you probably should have just said that. There's a lot of consequences you're protected from, and even more that you're not protected from.


I get downvoted on every single reply in this thread by God knows who, but the point I've been trying to make is that when someone talks about consequences of speech, it sounds threatening.


If you keep insisting Parler was a bastion of free speech, then there's not much to argue about. It was a clearly partisan, highly censored place where anything other than right-wing commentary was not tolerated.

If Signal gets deplatformed, get your pitchforks. Till then I'm not worried. Also, regarding the slippery slope: let it slide. If this is how we start regulating cloud services as utilities, then so be it.


> It was a clearly partisan, highly censored place where anything other than right-wing commentary was not tolerated.

This is just false. Did you have an account there?


Do you? Please share. I'd love to see what things you said that ostensibly should have been censored but weren't.



