Hacker News
Reddit is still in turmoil (techcrunch.com)
335 points by minimaxir on July 21, 2016 | 499 comments



I think much of Reddit's problems with its userbase boil down to an early failure to manage expectations.

It's pretty clear that the Reddit corporation doesn't want Reddit to be an anything-goes, absolute free speech zone with no moderation or anti-harassment policies -- but that's what the site actually was for many years. Now, when the company cracks down, users think their freedoms are being curtailed. The mistake was ever allowing that kind of freedom in the first place, because people developed an expectation that it would persist.

Compounding that problem, the fact that the site was unregulated for so long caused it to attract the kind of people who need to be regulated the most. In other words, it's no surprise that the most tolerant communities attract people who are difficult to tolerate.

I suspect Twitter is having similar issues dealing with harassment, after letting it happen for so long. If there is a lesson in this, it is that online communities which plan to implement anti-harassment policies ought to do so from the beginning, and develop the expectation among the users that such policies exist, and will continue to exist. Don't just tack them on after several years, and don't enforce them inconsistently and arbitrarily as Reddit has done.

It will be difficult for Condé Nast to get its money's worth out of Reddit now. I doubt it will ever shake its negative reputation.


Twitter's problem isn't quite the same as Reddit's. It's because the site is incredibly inconsistent with its moderation.

If you run a community, you need consistency. Members need to know where the boundaries are: how they can act and what is and isn't acceptable.

Twitter doesn't really do this well. If you agree with the staff's political stances, you can basically get away with anything. If you're popular enough (or a large company), you can often get away with things that would get a less popular user banned.

For example, contrast what happens when a left wing user breaks the rules and attacks people and what happens when a right wing one does it. It seems like the former will get punished a lot less harshly for the same offence.

Twitter needs to stop this, and enforce the rules for everyone in every situation.


If you ask us to compare two examples, you are required to provide two actual examples instead of having us just imagine a scenario that fits your claim.


Perhaps I can assist: Shaun King calls Jason Whitlock a "coon". He keeps his verified check and account. If that isn't personal, racial abuse and harassment, I'm not sure what is.

EDIT: Not being tremendously familiar with this spat, it took me a while to figure out what Milo got banned for. Best I can figure out, it was for calling a Ghostbusters actress "barely literate". Since that's a bit of a weak-tea insult, I imagine the real offense was not stopping (how?) his army of followers from tweeting other vile things at the woman. Is that our standard? Should we hold peaceful BLM leaders responsible for tweets from their followers promoting cop-killing?


What happened with Milo is less about him attacking Leslie Jones, and more about his _followers_ attacking Leslie Jones. Milo knows full well that if he identifies someone for his heckling, he'll have thousands of trolls and their sockpuppets do the heavy lifting for him.


I personally don't think people should be accountable for their followers' behavior, but let's suppose it is so.

Then, should we ban people such as Shanley, Randi Harper and their likes when they "unleash" their followers on some guy?

And this is actually different. While Milo did not explicitly call for the attacks, Shanley and Randi routinely do call for a target to be abused, for their companies to be harassed until the target is terminated, etc.

I don't support any side, I'm just trying to show that the first comment regarding consistency is indeed accurate, Twitter has none.


>I don't support any side, I'm just trying to show that the first comment regarding consistency is indeed accurate, Twitter has none.

No argument here. Twitter's inconsistent, if not outright apathetic, about abuse no matter where it comes from. It's been pointed out there's no shortage of harassment and abuse from people on the left on Twitter. Maybe not at the rate and volume of the "alt-right" types, but it doesn't matter. Twitter needs to clean house.


The thing is they're not really apathetic. They seem to pick a target once in a while and go overboard with the retaliation. And it seems to be just out of the blue for no specific reason other than the mood of the day.


>Then, should we ban people such as Shanley, Randi Harper and their likes when they "unleash" their followers on some guy?

Absolutely. If the goal for Twitter is to be a place free from this sort of abuse, then anyone using a position of prominence to call for harassment and abuse should not be allowed to do so.

I don't actually know who these people are/that they've done what you're saying, but if it's true, then I don't think it's a hard question at all.


And Leslie has told her followers to attack people before, too.

(looking for the tweet)


Hopefully Twitter told her that isn't acceptable. I'm pretty sure Milo has been asked to cool it before.


Milo was temporarily banned at least three times and had his verification revoked. You can't say he wasn't warned.


That's absolutely tame compared to the level of abusive content from right-leaning accounts on the site. Mean-spirited name-calling like the cherry-picked example has nothing on the hordes of alt-right death threats and personalized abuse directed at progressive figures on the site.


> Should we hold peaceful BLM leaders responsible for tweets from their followers promoting cop-killing?

Actually, I do think we should hold "peaceful" BLM leaders morally (not legally) responsible for tweets from their followers. How can a movement of justice accept the silence or ineffectiveness of BLM leaders with regards to inculcating a non-vile culture and calling out hate speech? Isn't tolerating that sort of thing from a movement's followers for the sake of internal politics and membership basically the same thing as tolerating a number of bad cops for the sake of "thin blue line" solidarity and internal police politics? It's not viable without becoming hypocritical. The only difference is in direct connection and legal consequence, not in the overall morality.

I think we can tolerate vileness as a culture and society, but we should not welcome it -- especially if it starts to abrogate what participants would consent to. We can't tolerate hate speech that incites to violence, period. We should not laud internet vileness as some kind of magical problem-solving "pure" expression, which is what we seem to be doing now. There is an important difference between a freedom and a virtue. Movements of social justice need to be teaching virtue. Otherwise, history shows us that such things will tend to devolve into demagoguery and mob behavior.


I think Milo was banned because he was retweeting fake, epithet-laced hate-speech tweets that were made to look like they came from Leslie Jones:

http://fusion.net/story/327103/leslie-jones-twitter-racism/


Except Milo has been suspended before. Multiple times. Confusion over his suspension comes from considering this single incident in a vacuum.


Yeah, a criminal law analogy of Milo's circumstance would be being sent to prison for a probation violation.


Milo Yiannopoulos vs Ben Dreyfuss.

http://fredrikdeboer.com/2016/07/20/whos-in-and-whos-out-tha...

(And this is from a socialist)


They are probably talking about Milo Yiannopoulos being permanently banned from twitter.


Every "permaban" like this is an opportunity to get a famous person on a distributed Twitter competitor like GNU Social (especially if the ban actually lasts). If you work on one you may want to contact him. People whose presence is objectionable to Twitter are the natural early adopters for distributed social networks.


Except GNU has a Code of Conduct that talks about being considerate and respectful.

Whatever you think of COCs, they might be hard to take seriously if it's "be considerate or respectful, unless you have both a large following and were kicked off Twitter for what largely amounts to abusive and hate-speech like behavior, then we'll actively court you to come to our network!"


A bit old now, but this is a perfect example:

https://twitter.com/MikaelThalen/status/757505701711405056

Left is a tweet by Leslie Jones, right is one by Kassie Dillon. The former is 'left wing', the latter is 'right wing'.

Only the latter got suspended.


Twitter's other problem is that they cannot be consistent in their moderation, because when they ban someone threatening violence against celebrities in the name of, say, feminism, the campaign to reinstate them comes from people in the tech industry and press that they can't ignore.


Then they have to become like most large companies and be willing to take criticism from the press. I mean, it's not like Twitter's immune to it already. They're getting hammered from every angle over their platform and its attitudes towards harassment and trolls and moderation.

Which brings me to a point about community management I'd like to make:

Don't even get into this field if you want people to always like you. Or if you're obsessed with your 'reputation'.

Running a community site means making difficult decisions. It means losing friends over the community's rules and principles. It means you will get a very bad reputation from at least some part of society as a bully or dictator.

This is what a community manager is perceived as by a banned member:

http://www.managingcommunities.com/2012/12/03/community-mana...

Unfortunately, Twitter (and a lot of such companies nowadays) seems more interested in keeping its friends in the media than in running a decent community.


> If you run a community, you need consistency.

I agree with this. Reddit as a community is a little like Los Angeles as a city: it's really a community of communities. Some are incredibly well run, others are not.


That is its strength for broad appeal - there's the concept of bazaars and warrens that Reddit hits squarely on the head. I'm not sure how you can have one thing without the other.


> Twitter's problem isn't quite the same as Reddit's. It's because the site is incredibly inconsistent with its moderation.

If Twitter gets consistent with moderation, any profits that they could ever have will go puff! It takes real people to do consistent moderation.


They would also lose a large number of users. It wouldn't look good when the stockholders get a presentation saying that the user base shrank because the moderators kicked out a significant part of the users. Consistent moderation means kicking out everyone exhibiting the behaviour you just punished one person for.


And that's why a lot of social networks originally ended up in this mess. That's why there is so much controversy and so many personal attacks and flame wars.

Because at the end of the day, numbers mattered to them more than a decent community. It's kind of like why the media is so polarised and broken now. Because getting your readers to flame you and each other to a crisp gets page views and ad clicks. It's easy to sell junk that preconfirms your followers' personal biases.


It is extremely disingenuous of you to avoid disclosing in your comment that you are heavily involved in "gaming journalism" and its direction, as that is directly related to the points you attempt to raise and would offer readers a clear understanding of which angle you're approaching from.

Not to mention the false equivalences that others have already pointed out. Par for the course I suppose. "Ethics!"


Huh?

Yes, I'm involved in gaming journalism and have some interest in the drama going on surrounding controversies like GamerGate, Brexit, various political elections, etc. Does this affect anything? Maybe, but I wouldn't say that makes it disingenuous.

I've seen people smear others with accusations of crimes to try and get them fired. I've seen people sic their followers on those they don't like. There have been various obvious threats left up that should have been removed in a decently moderated community.

And these issues come from all sides. They're not acceptable on either side, but from my experience, a conservative will seemingly get the rules enforced on them more harshly.

I wouldn't say those comparisons are false equivalences. I'm not comparing someone calling someone else a scumbag with a terrorist threat. Someone who claims they want to physically attack people should be banned much more quickly than someone who merely calls a few names.

But that's not happening at Twitter.


> Huh?

Nothing was more predictable than the feigned incredulousness, as if you haven't been harping on these exact talking points for months on this site while rationalizing GamerGate [1] and trying to convince us we've been deceived and it's not what we think.

Some gems from that query:

> So you believe what the media tells you about GamerGate? Because there's pretty clear evidence that regardless of your side in this debate, there was a decent chance you'd be trolled, bullied or doxed because of it.

> [GamerGate is] basically a revolt against what some see as a broken media, political correctness being forced on a community that didn't want it and favouritism among the gaming press.

There's no changing you, obviously, but for the sake of the "ethics" you're always talking about, your past efforts should certainly be disclosed when you hijack a top-level comment to make the unsupported claims you're making.

1. https://hn.algolia.com/?query=cm30%20gamergate&sort=byDate&p...


He's not wrong. And the claims of Twitter's political bias ARE supported above (and pretty easy to find too if you're willing to).


>Twitter's problem isn't quite the same as Reddit's. It's because the site is incredibly inconsistent with its moderation.

That is the same. Both are absolutely terrible at consistency. They both have very clear and obvious "protected" groups who can violate the rules with impunity.


True. Some subs, like Shit Reddit Says, can get away with open brigading against other subreddits they don't like. There are cases where a post has a few hundred votes, gets linked there and ends up with a score in the minus hundreds range.

And a few other subs seem to like doxxing people they disagree with, usually with the depressingly common assumption that there are 'no bad actions, only targets'.


The whole notion of "brigading" and "doxxing" that reddit has is just plain stupid. The whole site is built on linking stuff to other places on the internet, but suddenly, when you link to the site itself, there's a problem? "Doxxing" is an issue, but reddit is so overly sensitive that it's impossible to link publicly accessible data that the people in question put up for everyone to see. I've been banned for linking a facebook profile that had the same username as the account on reddit - stuff that takes one second to google.


Well, as far as brigading goes, the violation is actually doing it, not linking to the wrong sub. However, the rules aren't enforced evenly. Some subs have been threatened with shutdowns for allowing any links, even np links. Some subs openly allow links and disallow np links, with no retaliation.

Doxxing is a similar issue. However, were you banned by a mod or shadowbanned by an admin? Because mods can mod their sub as they want. Consistency goes out the window, which is good and bad. If you were shadowbanned though, that's a whole different story.


The very concept of "brigading" doesn't make sense, as it more often than not punishes someone for simply using the site's features. I've been banned for following a link to reddit posted on 4chan and voting - apparently there was a raid and the admins acted like I was a part of it. Totally unprofessional, and it doesn't stop actual, organized raids done via IRC.

As for doxxing - both, actually. I had to create at least 5 different reddit accounts in the span of less than two years, always getting banned for petty stuff like stating someone's real name when it was actually used on twitter. Reddit is a waste of time and it's not because of cat memes, it's because of the attitudes of admins who don't give a shit about quality or their userbase.


It's not just that the site was that way, a free-speech zone, for many years. It was their official policy. They were quite pretentious about it as well, leading the reddit community to believe that this would be the permanent policy. To be honest, that is still mostly how it is; the admins only ban a subreddit when it generates a critical mass of bad press.

They don't want to drive away their users. They know how Digg died. They're smart enough not to piss off the community enough such that any significant number of users will leave.


I think something really important that doesn't get discussed very often when "free speech" comes up is that there really are categories of speech which most reasonable people expect and perhaps even prefer would not be 'protected' (or in the context of Reddit, permitted and perhaps celebrated). More importantly there are categories of speech which are not put forward in good faith but are instead intended to do damage to communities that host them.

Among the various errors the administrators of Reddit may be guilty of, the most significant might be underestimating the damage and toxicity caused by some categories of speech made in bad faith, and having prepared no mechanisms to help their communities resist or minimize that toxicity.

Digg died because Reddit existed, so there was somewhere for disaffected Digg users to go - in particular, users who by and large were interested in good-faith participation. There is no similar analog today, and because the users drawn to new alternative sites tend to be the ones who indulge in bad-faith participation, those sites often struggle with a level of toxic behavior that makes it extraordinarily difficult to maintain growth.


> I think something really important that doesn't get discussed very often when "free speech" comes up is that there really are categories of speech which most reasonable people expect and perhaps even prefer would not be 'protected'

In a sense it always comes up. Without sticking my neck out to say what reddit should or shouldn't allow, I simply want to point out that principled positions only count when they're uncomfortable to uphold. We wouldn't be having a conversation about free speech in the first place unless the conversation has already turned to the topic of sweeping away uncomfortable speech.

If someone wanted to make a convincing argument about speech which should be 'unprotected' there has to be a conflicting principle, or it's just lip service to the idea of "free speech" (because it's generally recognized as virtuous and they don't want to be seen as opposing it). "Fighting words" are unprotected in so far as they represent incitement to an immediate breach of the peace. Other principles raised to carve out exceptions to free speech should be similar in the obviousness that it's categorically not about suppressing ideas.


> I think something really important that doesn't get discussed very often when "free speech" comes up is that there really are categories of speech which most reasonable people expect and perhaps even prefer would not be 'protected'.

I think this gets brought up a fair amount, with both the obvious examples (yelling 'fire' in a crowded theater) and the not-so-obvious ones (whether 'hate speech' should be protected is very divisive).

A larger problem really is that many people are in a pretty binary mindset, especially about the not-so-obvious ones: Either you're a monster who has no principles, or you're a monster defending neo-nazis, without acknowledging that "allowable speech" is a constantly moving target.

Yelling "DEATH TO <X>" on a plane nowadays would almost certainly land you in prison (whereas 50 years ago maybe you would've just gotten a psych evaluation), and conversely talking about equal rights 200 years ago would've gotten you thrown in prison as a rabble-rouser inciting chaos.

Finding the acceptable level of "free speech" in a private community is something that's constantly going to be in flux, and successful communities will accept that ultimately neither group may be happy with the equilibrium: But it's still the equilibrium that maximizes happiness.

> More importantly there are categories of speech which are not put forward in good faith but are instead intended to do damage to communities that host them.

The dangerous part that you're omitting here is that "good faith" is an extremely subjective judgement. It's incredibly easy to imagine scenarios where dissenting speech = scrutinized HEAVILY for "not being in good faith".

For example, I'm pretty sure Erdogan has accused many opposition members of exactly that.


> The mistake was ever allowing that kind of freedom in the first place, because people developed an expectation that it would persist.

Sure, and then we wouldn't be talking about reddit but rather whatever site was actually free at the time.

The only reason web forums exist is because people are lazy about adopting decentralized Free software. Yet the owners of these centralized services, as well as paternalistic spectators, end up deluded that the communities they serve are "theirs" to mold and control. In reality these sites' power is but an epsilon, and they should be thankful they can at least exercise it to generate revenue by surveilling users who are too lazy to protect themselves.


> The only reason web forums exist is because people are lazy about adopting decentralized Free software

Or because that open-source software is user-hostile and has no serious argument for its use for 99% of people. When users don't do something you like, in almost every instance it's your failing rather than theirs.


Free software is merely user-unfriendly (presently). Webapps are user-hostile - overtly functioning as agents of third parties rather than their users.

Blame isn't singular - describing users as lazy (including ourselves, for using this website) both encourages individuals to exert more effort adopting better solutions and plainly characterizes the problem.

I do agree that the most productive way forward is for Free software to accept this facet for what it is, and to step up its quality.


> Webapps are user-hostile

What? What if I wrote it for my own use? What if it's among the thousands of web apps meant to be installed on a server you control?

There's nothing inherent about the technology that requires third party control.


I'm not who you were replying to but...

I think web-apps are generally user-hostile. You are correct that there is nothing inherent about it though.

I think the same could be said for FOSS: There's nothing inherently user-unfriendly about it, it just has a tendency to go that direction.


"Webapp" generally implies something administered by a third party. The extra overhead of (dedicated server, network setup, redundancy, each user requiring connectivity) only pays off when it's amortized over many users who do not have to set things up.

Sure, some of these release their code as "open source" and you can setup your own copy. But I've never heard of one where the majority of users actually do so [0].

There are some locally-hosted applications that are intended to make themselves available on :::80 that use a browser as a UI toolkit, similar to how we can have a native app that links a browser as a library. But such things generally aren't referred to as webapps. The network transit seems integral to this term.

[0] Well there's probably some developer-focused tools where this could be the case because said developers already have to deal with "web stuff" and so the additional overhead isn't as high. But this doesn't generalize to all users.


> But I've never heard of one where the majority of users actually do so

https://github.com/Kickball/awesome-selfhosted


The majority of that list has nothing to do with the web (eg sendmail). And many of the projects that do also fail the requirements - for instance most users of a blog platform are readers, who still aren't in control of said software.

We're obviously not disagreeing about the utility of local software, just that of "web apps". I assert that the platform itself is inherently biased towards making non-Free software. It's not impossible to create something that qualifies as a web app and Free software, it's just that this puts it in one small corner of an ecosystem based on user-disempowerment.

Obviously many web technologies could be useful to build freer systems (like say if DNS/HTTP were replaced with something distributed), but as it stands the term basically refers to the modern generation of proprietary software.

I get that a lot of by-day web developers want to apply their skills to build Free software, and that is great. They just need to be aware that it's an uphill battle against one's own assumptions, and building the same type of centralized systems desired by business interests (but releasing the code) likely does not suffice.


> The majority of that list has nothing to do with the web

There are 188 PHP web applications in that list. 60 NodeJS web applications. 86 Python web applications. 25 Ruby web applications. I can go on.

You're just being silly by pretending that this segment doesn't exist.

> I assert that the platform itself is inherently biased towards making non-Free software.

As opposed to native applications? I recall those being distributed through "app stores" where you have zero access to even the binary, nevermind the source code, and demand payment in return for access to either the application or features within that application.

Does that mean native applications have a bias toward "making non-Free software"? No. And the answer is no for web as well.

> They just need to be aware that it's an uphill battle against one's own assumptions, and building the same type of centralized systems desired by business interests (but releasing the code) likely does not suffice.

I have no idea what this means, but based on your previous statements you're assuming a multi-user registration model with a centralized database you find on every commercial site which is a pre-requisite to monetization.

You don't need to have that for a web app. At all. You can build a calculator as a web app. Or any desktop or command line application as a web app. With no registration or database required. Just type in a web address and start using the app. There is nothing inherent about web development that requires a user-based system.

There is no uphill challenge to building free software using web. Now, you obviously won't have as many users of your software because getting things running is much more difficult ... but that's changing with platforms like Electron.

http://electron.atom.io/


As I keep saying, yes it is possible to create something similar to a webapp that is Free software. [0]

But, let's take a look at the actual technologies behind the "web":

1. A naming system/transport protocol

2. A document markup language

3. A virtual machine for sandboxed code.

(2) is user-friendly (PoLP). But creating a Free app implies ignoring the defining "web" features of (1) and (3)!

If a single user is the only one accessing a given name, then that naming system has basically been made irrelevant. (If multiple users are accessing a given name, then per HTTP only one is in control of that name, and thus we deviate from free software).

If a user controls updates to the code and (in theory) audits it, then the primary purpose of the sandbox (executing random third-party specified code) is gone. The battle-hardened sandbox is nice for security, but the functionality is otherwise indistinguishable from any other language.

Why should such an app be called a "webapp" if it does not utilize most web features?

> based on your previous statements you're assuming a multi-user registration model with a centralized database you find on every commercial site which is a pre-requisite to monetization

I am not assuming this - I am noticing that a large majority of released projects are. Either due to lack of developer thought about repudiating "web" assumptions, or deliberate corporate "free-washing".

Platform defaults end up defining the culture of a platform. A good question to ask yourself - if you were designing a Free calculator app and wanted to show a friend, would you send them the files you're editing and they'd set it up on their own httpd to try it out, or would you end up sending them a URL to your server and thereby undermine their freedom?

[0] Although as I've said, many of those PHP etc apps are not actually Free - for instance the blog platforms.


> If a single user is the only one accessing a given name, then that naming system has basically been made irrelevant.

No. It's not irrelevant at all. It's how you "execute" that specific software. It serves the same purpose as a filename for a binary on a filesystem.

> If a user controls updates to the code and (in theory) audits it, then the primary purpose of the sandbox (executing random third-party specified code) is gone.

No. An audit is never perfect and that even assumes you perform one. While I frequently modify the software I utilize, I almost never audit the entire codebase, so a sandbox is quite useful.

> if you were designing a Free calculator app and wanted to show a friend, would you send them the files you're editing and they'd set it up on their own httpd to try it out, or would you end up sending them a URL to your server and thereby undermine their freedom?

I did just that. I sent them a URL. Here's the rub, it's a purely front-end only web app. No backend. That means their browser downloads the entire codebase instantly. The codebase is easily accessed, saved, and modified by them, if they want to.

The calculator was possible over a decade ago. With some of the stuff hitting JavaScript within the last few years, you can literally build anything you want in that fashion. Games, communication apps, etc.
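
Purely as a hypothetical sketch (not the actual app being described above), a front-end-only "calculator" can be a single HTML file with inline JavaScript; whoever loads the URL has already downloaded the complete source and can save, read, and modify it:

    <!DOCTYPE html>
    <html>
    <head><meta charset="utf-8"><title>Calculator</title></head>
    <body>
      <!-- Illustrative sketch only: the entire "codebase" ships to the
           browser, so View Source or Save As gives a working copy. -->
      <input id="a" type="number" value="0">
      <select id="op">
        <option>+</option><option>-</option>
        <option>*</option><option>/</option>
      </select>
      <input id="b" type="number" value="0">
      <button id="go">=</button>
      <span id="out"></span>
      <script>
        // Read both operands, apply the chosen operator, show the result.
        document.getElementById('go').onclick = function () {
          var a = parseFloat(document.getElementById('a').value);
          var b = parseFloat(document.getElementById('b').value);
          var op = document.getElementById('op').value;
          var result;
          if (op === '+') result = a + b;
          else if (op === '-') result = a - b;
          else if (op === '*') result = a * b;
          else result = (b === 0) ? 'divide by zero' : a / b;
          document.getElementById('out').textContent = String(result);
        };
      </script>
    </body>
    </html>

No backend, no registration, no database - which is the sense in which the comment above argues such an app doesn't depend on a third-party server once loaded.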


> The codebase is easily accessed, saved, and modified by them, if they want to.

This is harder than it already being local, or apt-get install turning into apt-get source. Likely, if they find your app interesting they will keep using it directly from your URL and end up at your mercy.

And that's the crux of my point - the assumptions of the platform are at odds with software freedom. Not in a way that is incompatible - as this is computing, we are always free to ignore the properties an abstraction provides and build a completely new one over it.

Your argument is analogous to a C programmer insisting that if one just follows the rules, C is just as expressive and safe as a higher level language. People are not perfectly rigorous, and the properties of the basic abstractions matter.

"Web" implies executing code off of a third-party server. Yes, there is a pathological case of having your own server (and that may become more popular one day), but that does not change the term's current wider definition and the general culture around it.


> This is harder than it already being local, or apt-get install turning into apt-get source. Likely, if they find your app interesting they will keep using it directly from your URL and end up at your mercy.

You have the same issues with the centralized package distribution system you're pushing? You're reliant on a 3rd party. How many apt-get the source? Saving source on a front-end web app is as simple as clicking "Save As" in the browser and you get the bonus of not having to compile anything. It'll just work.

> Your argument is analogous to a C programmer insisting that if one just follows the rules, C is just as expressive and safe as a higher level language.

Not quite. You're simply not seeing the levels of abstraction present in your ecosystem because you're used to them. Centralized repositories, binaries, toolchains, etc.

> "Web" implies executing code off of a third-party server.

If you follow this logic, your apt-get example is executing binaries off a 3rd party server. You sure you want to go there?


Yes, I'll happily go there - it's a flaw. I'm not "pushing" apt, I used it as an example with a slightly better way of handling source. At this point, I probably should be running Gentoo/Nix/Guix (but I should also be running Qubes) but my investment in Debian has been too compelling. I'm well aware of sticky path dependence retarding progress, which is exactly why I'm nonplussed about working around the base assumptions of the "web" to make free software.


OwnCloud/NextCloud


Probably because it's free software. "Beggars can't be choosers"

There are very few open-source software engineers who like improving UX. Since it's free, you have to like it to do it, so I don't know if this will ever change.


Is it really that they don't like improving UX, or is it that they don't know how, or don't respect people who understand UX?

I like improving UX, but when I look at many open source projects from the outside, they are run by cranky ubernerds who champion technical merit above everything else, and I can't imagine them being very cooperative with someone whose specialty was "UX."


> I can't imagine them being very cooperative with someone whose specialty was "UX."

Sounds like you've never actually tried.

I maintain an open source project (https://conversejs.org) that could benefit greatly from the expertise of a UX pro. However, to date no-one has approached me to help with this (specifically regarding UX). If you or any other UX people here are interested in helping out, hit me up at https://opkode.com/contact.html

I definitely don't have a lack of respect for UX people, having been fortunate enough to work with someone whom I consider to be a brilliant UX designer.


We can all have different experiences.

Good software takes strategy, UX, visual design, solution architecture, software development, QA, DevOps, post-launch support, KPI monitoring & analysis, marketing... and probably a few other skill sets to really have it thrive.

The trouble with many open source projects is that they are one-man endeavors, and the person who started it cares more about control than quality outcome. "This is MY pet project..." is a horrible mentality to take when shopping for a partner.

The moment the dev thinks, "Gosh, I'd rather spend my free time watching a baseball game than making those 50 changes the UX guy asked me to make... especially since all I really wanted from him was a color palette..." that's about when the partnership dissolves. (Misunderstanding about the different areas of expertise intentionally included.)

One person can't think their time is worth more than a teammate's. Good software requires compromise and horse trading and a lot of planning.

You typically need money to keep everyone interested.


I fully agree with you. Damning the plebs for not being enlightened enough to use stuff that, if we're being real here, is worse by the metrics actual people use to decide what they're gonna use--that way leads to a really gross, insular mindset that doesn't help anybody.


Have you tried using federated sites like quitter.no? It's pretty much the same UI as Twitter.


Marketing fail. The name of a site actually matters. Who wants to be a quitter?


You find that power dynamic a lot in linked-community sites like Reddit and image boards. I find that the most enjoyable communities to be in are the ones where the site staff wield their power to make the user experience more enjoyable, rather than treating their users as cattle to be corralled. Acting like they provide a service rather than allowing you in their garden, in other words.

A few Reddit communities sprung up that track content deletions, like /r/undelete, and the sheer amount of stuff that winds up disappeared on arbitrary whims, rather than violation of a stated objective rule, is huge. Coincidentally, when the moderators of those places show up, the attitudes on display are the worst kind of toxic.


To be fair, a lot of web forums have run on free software. Centralization was what happened when usenet was over run with spam. Laziness is a poor explanation given one alternative exists and requires no maintenance, and the other takes a great deal of time, effort, and especially in the past, a lot of money to run.


> The only reason web forums exist is because people are lazy about adopting decentralized Free software.

Well, other than the fact that a whole lot of the software involved in using web forums, on both the server and client end, is "decentralized free software".

And the fact that other software (much, but not all, of it free) was adopted for discussions before web forums got popular. Web forums got popular because -- on both server and client end -- most people are going to have the tools anyway, so adopting a whole new software (free or otherwise) stack for discussions is unattractive.


It ceases to be free software when it is adopted by many people beyond the administrator, as they lack the freedom to study and modify it.

And it's certainly not decentralized - each instance of the software is its own walled garden. I can't install my own instance of say phpbb and use it to communicate with an existing forum.


I'm wondering what kind of revenue can actually be generated by surveilling users. Any numbers?


I dunno, it seems weird to me to treat reddit as one community. I use reddit a lot, but really 99% of my interactions are on /r/cfb, which is a space far from the descriptions painted here.

It is a community (and I do believe it is a community of its own) imo not far away from Hacker News in terms of creating and curating interesting content, and discussing it, for a focused group of interest, except rather than tech the interest happens to be college football.

I imagine there are plenty of other spaces like that on reddit for people. I dabble a bit (mostly lurking) in /r/kettlebell too which seems to be similar except smaller.


I think you've nailed it here. Viewed from the outside, Reddit is a very different place than when viewed from the inside. Outsiders tend to go to the default subs, which are the first subs you unsubscribe from when you choose to stay. Once people migrate to a topic-focused sub with decent moderators, they definitely become part of a community.


I think that also reflects Reddit's issues with mission confusion. Reddit does present itself as a single entity: "the frontpage of the Internet," and it was founded as a single community, a group of users on one forum. The introduction of subreddits has very slowly shifted it towards being a platform, a piece of forum-hosting software--and as it grows larger, it becomes much more enjoyable to use it to browse small forums than deal with the mob on the defaults. IMO they should fully commit to being a platform for many different communities, but the site's founding ethos was the opposite.


Agreed. Reddit is a very large, diverse group of separate communities. Trying to enforce a cohesive culture throughout reddit is doomed to fail, or at least will lead to a lot of strife and a loss of fringe groups.

It's like how politicians and media in the USA are always trying (and failing) to impose upon the citizens some country-wide cohesive cultural identity and feeling of belonging. The USA is too big, with too many different regional cultures and ideals. Doomed to fail, and you'll just annoy everyone in the process.


The problem with reddit isn't the admins, but that people who have managed to become moderators of prominent subreddits censor posts and thousands of rather innocent comments. For example the Orlando terrorist attack was censored from /r/news, and when they later allowed the posts to go through they deleted most of the comments, including all comments talking about the censorship of the previous posts. They even deleted comments giving info about where to donate blood for the victims. This strategy backfired immensely when the Donald Trump subreddit ended up being the only one reporting on the news.


The problems would be portrayed differently depending on different subsets of the userbase. I don't like all of /r/news moderation, but unfortunately it hasn't alienated enough people for another relevant news subreddit to take its place. A lot of the people that are alienated enough end up being people that I don't want to associate with, e.g. the kind of people that post in the Donald Trump subreddit, where I was banned for commenting on a topic that stated a subscription spike on the Bernie Sanders subreddit was a result of paid actors. All I said was "Isn't that when he held an AMA?"

So yea, I question authority. I think it's great to question authority. I don't like it when others question authority and then will not tolerate their own authority being questioned. From my POV, the part of Reddit's userbase that conveys that attitude, such as users in the Donald Trump subreddit, is the heart of the problem.


I don't agree that the Donald Trump subreddit is the heart of the problem. It's a symptom of the problem. I don't care what goes on in that subreddit. Let them have their echo chamber. The problem is huge default subreddits that purport to be neutral but actually are far from it. This is what pushes users to the extremes, one of which is the Donald Trump subreddit, which got a lot of subscribers because of this fiasco.


> This strategy backfired immensely when the Donald Trump subreddit ended up being the only one reporting on the news.

Say what you will about republicans, but they often end up being the biggest defenders of constitutional rights.

Democrats on the other hand (aka "progressives") are always looking for ways to limit constitutional rights in order to create a "safer society" and stable "social climate". Just look at Europe - where "hate speech" can get you arrested or your door kicked down, and guns are completely banned.


Depends on which parts of the constitution you look at: freedom of religion (specifically not establishing one over any other), due process, birthright citizenship (14th Amendment) are all things Republicans have attacked lately.


Speaking as a progressive, that's because most of us consider the US Constitution to be a means, rather than an end unto itself. We tend to make a strong distinction between Natural and Legal Rights[1] and to group the Constitution into the latter category. And Legal Rights are useful only insofar as they align with the underlying Natural Rights they support (i.e. life, liberty, and the pursuit of happiness).

Our constitution is extremely useful. It's among the best systems for protecting our rights that we as humans seem to have devised, and we disregard the wisdom it encodes at our own peril. But that doesn't mean it's either perfect or sacred. If a law is actively harming (or negligently failing to protect) the inherent worth and dignity of every human being, then that law is wrong by definition, and I've yet to hear an argument that the Bill of Rights is exempt from that analysis.

[1] https://en.wikipedia.org/wiki/Natural_and_legal_rights


> We tend to make a strong distinction between Natural and Legal Rights[1] and to group the Constitution into the latter category

The framers debated the inclusion of the Bill of Rights in the US Constitution for the simple fact that enumerating the rights made them appear as though they were "granted", or in your words "Legal Rights." Furthermore, just entertaining the idea that they should be explicitly enumerated called into question whether they were "inalienable". However, they fully believed that those first ten amendments were "Natural Rights". In the end, they felt that enumerating them was worthwhile, and given how the amendments are treated/respected/discussed and the litigious environment today, I'm glad they did.


> The framers debated the inclusion of the Bill of Rights in the US Constitution for the simple fact that enumerating the rights made them appear though they were "granted" or in your words "Legal Rights."

No, they debated them because any enumeration would fail to be exhaustive, and they wanted to avoid omissions from negating rights. There was no mistake on either side that the rights written into law would be legal rights, even if they were motivated by conceptions of natural rights.

The confusion of the Bill of Rights with a statement of natural rights rather than a set of legal rights designed to achieve goals set by moral principles which include a particular concept of natural rights is a more recent phenomenon.


Yes, the framers did debate whether to include the Bill of Rights, but my understanding is that their fear was that by enumerating some rights it would delegitimize those that were not enumerated. That's why we have a (mostly ignored) tenth amendment.

Plus the Bill of Rights aren't quite Natural rights as I understand them. Half the Bill of Rights (plus the 13th and 14th amendments) are about setting up a Legal Right to a somewhat complex and specific trial by jury system. But you don't actually have a Natural Right to participation in that exact system. You do have a Natural Right to just and fair treatment from your government, even when accused of a crime, and a jury trial is a very good way (though not necessarily the only good way, nor the absolute best way) to formalize that right.

On the other hand, I believe I have a Natural Right to bodily autonomy. If I want to surgically change my gender, or have sex with another consenting adult, or ingest a mind-altering substance with full knowledge of the side effects, then that's my right and the Government should have an extremely high bar to clear in order to prevent me from doing so. However, the fact that these rights are not enumerated in the Constitution has historically been a huge barrier to having them respected, which indicates to me that the framers fears were well founded.


Speaking as a liberal who refuses to identify as progressive, this kind of thing - the idea that e.g. freedom of speech is a legal rather than a natural right, and that the only natural right a person has is basically "to be happy" - is one of the reasons why.

To address your specific point, by itself, speech cannot harm the worth and dignity of any human being. Worse yet, what constitutes "worth" and "dignity" is so extremely subjective and culturally relative, that any such analysis would be the same. Because of that, I dare say that it is a poor basis for legal constructs. We should stick to laws that reflect measurable, objective harm, and use other mechanisms to deal with things like these.

For example, hate speech is best dealt with not by fines and prison terms (which tends to create martyrs out of those people, and boost their propaganda - "if they're trying to silence them, they must be saying something important"), but by voluntary boycotts, public shaming and other forms of social ostracism. This makes the definition of "hate speech" fluid, subject only to the social mores of the given era - as it should be. It also means that no idea can be completely silenced solely on account of being offensive, which is important to ensure true political freedom.


Speaking as a progressive, you shouldn't buy into Republican framing here. Republicans are adamant about changing, misinterpreting or ignoring the constitution when it suits them.


While progressives actively attack the 1st, 2nd, and 4th, and have totally blown out the 10th.


A big problem with Reddit is the first-come, first-serve nature of claiming subreddit names. In theory anyone can start a new subreddit to provide different moderation, but in practice /r/news, /r/bitcoin, /r/programming, /r/haskell etc have a leg-up in being seen as the canonical subreddit for their topics (these are just the first examples of canonical subreddits that came to mind, i'm not taking a position on whether they are well-moderated).

Perhaps subreddits should be required to have numbers after their name, eg /r/news2 (numbers 0 and 1 would be forbidden, eg no /r/news0 or /r/news1).


> If there is a lesson in this, it is that online communities which plan to implement anti-harassment policies ought to do so from the beginning, and develop the expectation among the users that such policies exist, and will continue to exist.

A few relatively popular programming languages have begun publishing official community code of conduct documents[1][2]. While initially I thought it was a bit overreaching, I can now see steering a community early is much easier than trying to change course later.

[1] https://www.rust-lang.org/en-US/conduct.html

[2] https://golang.org/conduct


The Rust leadership tolerates harassment and threats as long as its against the right wing. One of Rust's top people wants to violently purge the tech community.

https://archive.is/BtDaA


"right wing" or "far right wing". Remember we're talking about literal neo-nazis here, not just Conservatives.


No, we're talking about applying the violence historically used against literal fascist marches against people online he's declared to be his enemies, because "the only things fascists respond to is violence". We're talking about an online culture where it doesn't matter if the targets are actually neo-Nazis or conservatives or anything else, because disputing any accusation against them no matter how trivially falsifiable marks you as supporting them. Our political discourse is in a dangerous place and I see no way out anymore.


I wasn't planning on jumping in on this thread, but I will leave one (and only one) comment here, just to make this crystal clear:

I have not advocated that anyone commit violence against "people online [I've] declared to be [my enemies]." In fact, in the thread you're linking to, right above the archived tweet, I say that I don't know what an anti-fascist movement in tech looks like. And today, years later, I still do not. But if you were to ask me "hey Steve let's start a street fight with GamerGate, what do you think?" my first comment would be "what?" and my second comment would be "that sounds like a really dumb idea, don't do that."


Not advocating violence, but "100% okay with that"? Don't be disingenuous.


Okay, fine, one more comment. Last one for real this time, I think this clarifies something I was missing above.

You'll note that the subject at hand changed during this conversation: the parent I replied to suggested that I was suggesting something specific regarding the right wing in technology, but that response is about abstract antifa actions in France. It's also why I responded the way that I did: the equivalence between the two (European antifa and European fascist organizations) is laughable. But it's just Twitter; I'm not going to get into a debate about the details, I'm just going to say "okay" and move on.

But I did not and do not think, as I said above, that someone should go find tech fascists and cause them bodily harm. It's not something I did suggest or would support today.


Y'know what, fair enough, my bad. I read that tweet as applying to a hypothetical tech antifa, not as applying to actual EU antifa, but clearly I should've given a bit more thought to applying the principle of charity, and I'm sorry & apologize for that.


Thank you very much, no worries.


That's not how it goes in practice. The fact that the people in that twitter thread endorse antifa shows that they are either ignorant of what antifa actually is (as opposed to what it claims to be), or they endorse an organisation that is more fascist than the fascists it purports to fight.


Antifa is a label, typically self-appointed. There must be thousands, probably tens of thousands of groups that identify as antifa, most of them confined to a region or a city.

It's possible to make generalisations across all the various groups, but you're going to have huge variance across them -- in pretty much all areas, including internal structure, attitude towards violence and political positions.

Of course groups are aware of one another and cooperate (as well as conflict, I assume) in an anarchic way, but there is no centralized power or executive. It's certainly not a single organisation.

From an organisational perspective, sports fan clubs are rather similar (mostly independent of one another, mostly local, very heterogeneous in how they work internally and interact with the world).


One only has to look at that twitter thread to see what kind of antifa they are endorsing:

> we have an "antifa" movement in France (reacting to our rising far right), and they're often just as violent :-(

reply:

> yup, 100% okay with that, personally

> Me too (mostly). Personally trying to flush out internalised cishet machismo, but if there was ever an outlet...

> the only things fascists respond to is violence. Ignoring them or letting them attack you doesn't help

> good! bash their brains in tbh

It is clear that the kind of antifa they are endorsing falls on a particular point on the "huge variance" spectrum of attitude to violence. It is clear that the code of conduct is being applied selectively to one half of the political spectrum. And I say this as somebody who has always voted for the most left wing party in my country. This attitude towards violence and censorship and social shaming from my side worries me greatly.


“The anti-fascists are the real fascists!” is a tired argument and it has been for more than 70 years, ever since that was said in actual Nazi Germany and the fascists proceeded to win.


Violence is never an appropriate response to speech. I don't care what the Nazis are saying, if you use violence to silence them, you are the problem, and I will support their free speech rights. Free speech is not simply for ideas you like, it's for ideas you hate, too.


Says "GunboatDiplomat"[1]... ;-)

[1] https://en.wikipedia.org/wiki/Gunboat_diplomacy


When the so called anti-fascists use violence against peaceful protesters and against the police who try to stop them, then yes.


"""If fascism could be defeated in debate, I assure you that it would never have happened, neither in Germany, nor in Italy, nor anywhere else. Those who recognised its threat at the time and tried to stop it were, I assume, also called “a mob”. Regrettably too many “fair-minded” people didn’t either try, or want to stop it, and, as I witnessed myself during the war, accommodated themselves when it took over … People who witnessed fascism at its height are dying out, but the ideology is still here, and its apologists are working hard at a comeback. Past experience should teach us that fascism must be stopped before it takes hold again of too many minds, and becomes useful once again to some powerful interests""" <--- Franz Frison, Holocaust survivor


That's misleading. In neither Italy nor Germany did the fascists seize power without using violence. In Germany, the SA was used by the Nazis to suppress their opponents in elections and intimidate the powers that be. Even then, they only won like 40% of the vote and had to manufacture a crisis to seize full control. Similar in Italy.


So what's the solution?

Because attacking people for beliefs you don't like doesn't exactly make things better.

A lot of the people that seem to be trying to attack those they see as fascist would be just as bad in power as those they oppose. Except, you know, with different targets.


One rule of thumb would to be to recognize that, within all groups, there are extremists (and they often have the loudest voices). It is wise not to denounce an entire group based on the extremists. It is wise not to flippantly dismiss concerns raised based on the behavior of the extremists.

The problem I see with a heck of a lot of online conversations is that it ridiculously fails this rule. Often times there are some issues on both sides of the coin worth discussing, but it's hard to discuss things rationally when the loudest voices on both sides are acting like spoiled toddlers or often worse. Often times, the extremists dominate the conversation and the resulting discussion is toxic.

Violence is never the solution, but isn't it possible to be anti-fascist (or opposed to any other political movement that relies more on emotionalism than rationalism) without being violent? I would think so.


Of course it's possible to be anti-fascist without being violent. It's possible to get changes in the world non-violently, at least if you're in a remotely democratic society like the one you're in at the moment. People like Martin Luther King and Mahatma Gandhi proved it.

Responding to political concerns with violence just begets more violence. It just entrenches people's beliefs even further, inspires revenge from the 'other side' (or at least, their own axe crazy extremists) and eventually causes everything to erupt even further (through a civil war).

Honestly, I'd say this part in the most recent series of Doctor Who sums up some of the issues with people trying to solve all their problems with violence:

https://www.youtube.com/watch?v=zvGND1i6Dj0


Martin Luther King changed the world because America decided that it would rather deal with him than with the far more violent parts of the black liberation movement.

If those more violent parts didn't put the fear of God into the government, he could have been safely ignored.


The standard I often judge groups by is whether it's possible to be a decent person and remain a member of the group; rather than looking at the assholes, I look at how the group treats members who stand up to the assholes. Unfortunately, a lot of groups have been failing this spectacularly, and the older a group the worse it often seems to get.


"If too many people disagree with me, we need to kill them". How about no? There is nothing wrong with fascism, and suggesting that people whose political views differ from ones own are subhumans who need to be killed is pretty ironic coming from a supposed holocaust survivor.


Did anyone in Nazi Germany really claim that the anti-Nazis were the real Nazis or fascists or something? Sounds unlikely somehow.


Well, if you replace "fascists"/"Nazis" with "totalitarian threat" -- the former is modern shorthand for the latter, but obviously wasn't used that way until after experience of their rule established that perception -- a big part of the Nazi schtick was portraying other groups that way (Communists, the supposed global Jewish conspiracy, etc.)


Yep! The communists claimed the social democrats were fascists because they wanted to own the left, for example.


No, in particular since neither national socialism nor fascism was a boogeyman then.


Before the Nazis came to power, there were plenty of people in Germany who thought the Nazi party & Hitler were thugs and dangerous.


Yes, but they were not a boogeyman, like I said. Where you can just say "you are a nazi!" to shut down any discussion of anything you please. There's a difference between "some people didn't like them" and being a boogeyman.



Very similar to "anti identity politics leftists are the true leftists" which is probably relevant to this discussion.


To clarify the context, are you saying that gamergate members are literal neo-nazis?


I would definitely say that there is a strong overlap between the two groups


And what do you base this claim on? What the people in the Anti-GG media have been telling everyone?


High incidence of hate speech, abusive behavior online, high overlap with swatting, death threats, and chasing women off the internet. The movement acts as a hate group and is made up of typical "red pilled" men's rights type people. And, much like most extreme right groups, all of these ills are dismissed as illusions by a nebulous liberal media that is against their hateful cause.


GamerGate exposed clear ethical violations in the gaming media. Do you dispute that too? That same media has yet to clean up its mess and famously doubled down by ignoring the legitimate claims or blowing them off, while conveniently pretending that every troll/kid with a keyboard is really a spokesperson for the movement.


GG spent the vast majority of its energy harassing and personally attacking two women, one of whom wasn't even involved in gaming journalism.

GG also completely ignored the fundamental ethical violation in traditional game journalism - that review sites make all their money by advertising the products they review. Instead, it spent all of its time complaining about how some indie game developer is friends with an indie game reviewer, and sending death threats to Anita and Zoe.

If it looks like a thug, walks like a thug, and quacks like a thug... You can't expect anyone to take its valid points particularly seriously.

In a stunning lack of self-awareness, GG has never considered that perhaps it should apply its expectations for ethical behaviour to itself.


Gaming media was never good and was used as a cover to harass women online en masse. MRA go home


Do white nationalists count (example: Vox Day)?


No, we're not. We're talking about everything from center left libertarians to plain old conservatives. It is just that far left authoritarians call them nazis because that is the easiest way to dismiss people they disagree with.


Didn't some states initially start suppressing them using force and censorship, but they still popped up like mushrooms after a rain?


If you see a threat of violence on Twitter, report it.


Twitter is not covered by the Rust code of conduct; the code of conduct does not apply to off-project political conversations.

https://www.rust-lang.org/en-US/conduct.html


There's nothing in there that excludes off-project conversations, and based on previous incidents I cannot imagine such a restriction surviving the first outrage from the left over a project member's off-site views.


Nazi programmers can follow the Nazi punks, etc.


Pretty much every community over a few hundred people in size will eventually be faced with a moment where the "powers that be" are forced to sit down and have the "what are we going to do about person X" talk. Every one of them. That's just the nature of human beings. If you find yourself in that position and the person is merely toxic to interact with, count yourself lucky, because it can often be a million times worse.

If you are in a position of power with a community it's very important to look at codes of conduct, standards of behavior, and mechanisms for reporting/tracking/resolution early. Because building those systems early will make that "what are we gonna do" moment a zillion times easier to handle. It'll also usually mean that things get handled earlier than they would otherwise, possibly before they've escalated out of control and possibly avoiding a lot of damage that might have happened otherwise (to individuals and to the community).


To add to the list, Swift[1] had a code of conduct as soon as it was open-sourced. Mono even adopted one the same day[2] because they thought it was a good idea.

The LLVM Project is in the final stages of adopting one[3] (after much discussion) as well. This will (once adopted) apply to the LLVM backend, the Clang (C, C++, & ObjC) front-end, libc++, the LLDB debugger, etc.

[1] https://swift.org/community/#code-of-conduct [2] https://twitter.com/migueldeicaza/status/672590341757927426 [3] http://lists.llvm.org/pipermail/llvm-dev/2016-June/101807.ht...


> "We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender, gender identity and expression, sexual orientation"

Cool that they are so open towards zoophiles and the occasional pedophile.

On another note, codes of conduct for FOSS are the equivalent of stock photos showing ethnically diverse people having a lively meeting on company websites; they don't hurt anybody, but they're an utterly useless waste of everyone's time, and you feel a bit dirty having to pretend that they matter.


It's a good way to cover your ass if you need to kick people out of the community later. They can't credibly accuse you of making up ad-hoc rules if your code of conduct has existed the entire time.


There is an inherent trade-off though. Would reddit have ever become so big and popular if it had enforced a strict conduct policy from day 1? Many of the early users of reddit were not necessarily the mainstream "good" users that eventually came post-Digg.

it can be a distraction to focus on that early on when what's most important is user adoption...


>it can be a distraction to focus on that early on when what's most important is user adoption...

Distraction?

The wrong kind of initial user adoption will create a toxic community that excludes later users. Any community that goes unmoderated for long enough inevitably turns into a cesspool.

The most important users are the initial adopters; they form the basis of the expectations of the community.


Yes, exactly. And who were many of the initial users of reddit? I'm not saying it's not important, just that in the very early stages of a startup, such as reddit was, it can be the wrong thing to focus on when you really need to get users first.


Most people don't need a conduct policy. Your average person isn't going to sign up for a website and start harassing other users, reposting Stormfront articles, and slinging racial slurs. On top of that, most people just click through the TOS without reading it. Do you even read the gazillion EULAs you click through when you install software? Did you read Facebook's TOS when you signed up? I thought not. It won't affect them because they'll end up obeying the TOS even if they've never paid attention to what's in the TOS.

But if you don't have a TOS and enforce it, then you're eventually going to get the attention of the nastiest, most vile people on the Internet. And given that almost the whole world is on the Internet now, that's some of the nastiest, most vile people in the world. Once you get their attention, they're going to go out of their way to make your website their new home. If you were a Neo-Nazi, and you were used to being banned from every forum you found as soon as you opened your mouth, and then you found a forum that had no rules and allowed absolute free speech, you'd fall in love right away. So much that you'd go out of your way to advertise it to your Neo-Nazi friends and coördinate a mass migration. Once the nastiest, most vile people on the Internet migrate to your site en masse, they're going to start targeting people for harassment, they're going to use it as a platform to spread hate, and they're going to take the opportunity to recruit from the general population by posting carefully crafted propaganda.

And, yes, they'll drown out the decent people. First, because the vile ones are actively invading your site in droves, while the decent people just stumble on it from random links and maybe friends' recommendations. Second, because the vile people will drive the decent people out. You know the old adage about how a thimble of wine mixed into a barrel of sewage is still sewage, but a thimble of sewage mixed into a barrel of wine is also sewage? It's kind of like that. Once there are enough vile people to make their presence known, they will turn your site into sewage even if they're the minority.

tl;dr: Most people don't need their conduct regulated, but if you don't establish and enforce regulations, your community will be thoroughly overrun by the people who do need their conduct regulated.


And yeah, you're right about Reddit. Problem is, they're in between a rock and a hard place.

On the one hand, users want more freedom than on most traditional community sites. There's a certain expectation that communities nowadays are basically the equivalent of a free speech zone in a local park, where near enough anything goes. And because social networks originally were really hands-off about what people posted, that's become the baseline now. You need to claim your site is free speech friendly to get people using it.

On the other hand, true freedom of speech is impossible on communities owned by companies and individuals. You'll get a lot of material that's extremely disturbing. Advertisers won't touch the site with a ten foot bargepole. And to some degree, a lot of normal users will actually be put off when the site draws in the less savoury users that freedom of speech often attracts most.

So Reddit is kind of screwed. They either have to say 'there's no freedom of speech here, here are acceptable standards' (and watch their audience evaporate) or say that there is still freedom of speech (and have their audience get more and more angry when stuff gets removed anyway).


IMHO: They had a working solution, but they threw it out. When they were free-speech, they made it a policy to only throw out stuff that they were legally required to. That allowed them to keep their hands clean. As soon as they filter one thing for non-legal reasons, they become responsible for everything else on the site; otherwise it becomes a "why did you filter me, and not them" issue. The laissez-faire attitude worked for a long time, and arguably, that's why reddit grew as popular as it did.

If they had made an announcement: "Hey, we're done with free speech. Here's our plan" then I think it would have been less of a problem. But like you said, they are trying to be on two sides at once: They want the benefits of free-speech, without the costs.


> they're in between a rock and a hard place.

This is completely OT, but I've always enjoyed the Spanish version of that idiom: "entre la espada y la pared" (between the sword and the wall).


> Compounding that problem, the fact that the site was unregulated for so long caused it to attract the kind of people who need to be regulated the most.

Is there any evidence that reddit has a greater fraction of asshats than the general population at large?


The issue isn't really the amount of asshats. In the physical world, it's generally really unlikely that you'll get more than a couple of particularly egregious asshats with similar beliefs in the same place. When you do, it's often a news story. Online, just as people with good intentions can find each other, people with bad ones can find each other and band together.

In other words, the unmoderated Internet is an amplification device for small groups of people who might otherwise not get a voice, and it doesn't actually matter what they're saying.

Of course, it's a lot quicker and easier to cause damage than to do anything constructive, resulting in unmoderated spaces online probably doing a lot more harm than good in the near-term. Long-term? No idea.


> Online, just as people with good intentions can find each other, people with bad ones can find each other and band together.

Yup. Besides egregious asshats, it has greatly helped conspiracy theorists to get together, fringe religious groups to get together, and fringe political groups to get together.

> In other words, the unmoderated Internet is an amplification device for small groups of people who might otherwise not get a voice, and it doesn't actually matter what they're saying.

I think that the amplification effect does vary, though, among different groups. Some groups already had worked out good ways to communicate before the internet, and to spread information. It wasn't as fast or as convenient as the internet, so things were slower than they are now, but it got the job done.

An example of such a group would be those in STEM fields. We had big libraries of technical material extensively cataloged, indexed, and cross-referenced, and an extensive system of mailed bulletins and journals to keep us informed of new developments in our fields. We had telephones for long-distance, real-time, person-to-person discussion, and the postal system for less urgent communication or for sending documents and photographs.

The internet was a huge increase in the access to and speed of sending information for STEM people, but in some sense it was just a change in efficiency, not a fundamental change in what we could accomplish. I've compared it before to the printing press. We had books before the printing press, but it took a long time to make copies and they were expensive. The printing press made it faster and cheaper to make copies.

For those groups that were not organized like this, which includes the random asshats, the conspiracy theorists, and so on, the internet was a fundamental change. For them it was like the discovery of fire or the development of agriculture. It put them in a whole new world.


Maybe the big universities had those big libraries, but mine certainly didn't. Without the internet I wouldn't have had access to nearly the amount of material I did.


You don't need to be a horrible person to make insensitive posts, it's a role played depending on environment and audience. I'm sure the 4channers among us aren't famous for trollish funposts on Facebook and LinkedIn.


It doesn't need to, because the general population is already overflowing with asshats. Keeping a forum civil is a matter of keeping the local asshat population artificially low, not preventing it from growing artificially high.


I think the biggest problem with it in general was how it greatly relied upon moderators of the individual subreddits. This inherently creates problems as they have to rely upon many individuals who have very different ideas and goals from the Reddit staff, both in terms of monetization and in moderation.

It is inherent in social media: the more stuff you allow on your platform, the less able you are to generate revenue. I think that has been clearly demonstrated by Twitter, 8chan, 4chan, etc. never being able to generate large amounts of profit.

Facebook realized that censoring is unfortunately profitable, and made a platform that is much more easily advertised upon.

It is similar to the development of sports broadcasting, where in the US all sports have commercial breaks; even things like F1, which do not stop for commercials during the race, have ads run over the coverage in the US. It even happens in soccer, with ads shown during the game, as clearly demonstrated by the sprint ads during Copa America.


It is an interesting idea that it is the censoring that makes Facebook profitable, but I'm not sure that I agree with it.

I would say that the "made a platform that is much more easily advertised upon" is the important part, combined with the massive data-gathering so the ads can be targeted better.


Well, I guess I should say that part of the design is self-censoring, as what one says with anonymity differs, to varying degrees, from what one says when sharing it with all of one's friends and family.


That's why I personally have self-censored myself from using Facebook at all.

I've discovered that there are actually zero things that I want to announce to all my friends and family at once.


I'll partially respectfully-disagree with you: I self-censor myself from FB as well, but there are a ton of things I want to announce to my family and friends at once. However, if they don't care enough to visit my blog to find out what that is, then I don't care if they miss out on the announcement.


Reddit's main problem is that they spent years building their user base as a site to find 'jailbait' porn (it was by far the largest search result that brought people to Reddit for a long time).

That created a toxic culture, which allowed stormfront and similar groups to use it as a recruitment ground for racists.

Reddit still refuses to do anything about it. They 'quarantined' a bunch of truly vile subreddits, but all that does is give them a place where they can coordinate brigading of the public subs out of view.


This isn't how I remember the culture years ago. I looked at reddit every now and then, but I really came over six years ago during the big Digg v4 revolt when Digg fucked up their site. At the time, I remember being really impressed how many of the comments on reddit were more personally supportive, which I rarely saw elsewhere - for example, I learned a ton from r/fitness back then.

Yes, there were the more odious subreddits, but it was perfectly possible to surf around reddit and never be bothered by those.

The culture of reddit definitely seems to have gotten much worse over the past 5 years - to me, it just generally seems meaner, and there feels like there is more trolling overall. Even simple, seemingly straightforward posts seem more likely to spark some sort of vitriolic debate. Perhaps it was inevitable as reddit grew, but it still kind of sucks.


> This isn't how I remember the culture years ago

The very nature of Reddit is that each person will have a different experience and different memories: the truth of Reddit is that there is no Reddit - there are only subreddits.

So, unsubscribe from most default subreddits, choose your communities, and you'll be like me - having a 99% positive experience!


> (it was by far the largest search result that brought people to Reddit for a long time)

I'm pretty skeptical of this. I can maybe see my way to believing that googling "jailbait" beats out any single one of ["pic" "pics" "funny" "joke" ...], but the normal SFW subreddits have always been vastly bigger than the porn subs. I'm also really skeptical of the causation chain that leads from jailbait -> Stormfront -> general toxic culture. Do you have any harder evidence of this?

Edit: Sorry, "normal SFW," not "normal NSFW."


https://web.archive.org/web/20090814032422/http://www.alexa....?

https://en.wikipedia.org/wiki/Controversial_Reddit_communiti...

Seems like for a time it was the second biggest referral keyword, though the Web Archive doesn't really back that up.


Reddit was mostly fine until they shut down the hate subs which pushed the hate crowd into the other subs.


I thought that a lot of them went to Voat and 4chan and 8chan

Edit: oh please, mods. Voat got tons of users specifically to host /r/coontown, /r/fatpeoplehate and the like

https://en.m.wikipedia.org/wiki/Voat


That's a pretty big stretch -- reddit's original audience and content were pretty similar to the Hacker News audience (with more tolerance for silliness). The introduction of subreddits expanded (and diluted) the audience, but it's not like reddit's origins were sketchy.


Every single thing you said is incorrect. Reddit ranked high when searching for jailbait, that does not mean jailbait was the largest search that brought people to reddit. The "toxic culture" predates the jailbait sub. And stormfront does not recruit on reddit, reddit recruits for stormfront. The insane, dishonest, censorious leftism that runs rampant on reddit is driving people to become racist, and sexist, and homophobic, etc etc. If you are going to be accused of it no matter what, you tend to start resenting the people you are constantly accused of hating. And they ban subs that brigade. Quarantines kill subs constantly, it doesn't help them. Also many of the subs being punished are in no way vile. They are just too popular and not leftist.


> Also many of the subs being punished are in no way vile. They are just too popular and not leftist.

You mean subreddits like /r/european and /r/truecels? Where death threats against women in interracial relationships were common? Where it was common to wish rape on women who don't hate Muslims? Where the former subreddit routinely called for ethnic cleansing and the latter subreddit talked about how western women are part of a corrupt plot to deny sex to men? Where AutoModerator was set up to respond to the word "entitled" by saying that constant access to sex is a fundamental need for men and should be considered a human right?

Everyone: I'd recommend popping onto /r/againsthatesubreddits and checking out the myriad posts on /r/european and /r/truecels AHS has linked to. You might have to dig through the archives, though, because both of those places were quarantined a while back.

They deserved to go.


>You mean subreddits like /r/european and /r/truecels?

"Many" means many, not all.


/r/incel is the new /r/truecels


I think what happens when we have truly anonymous free speech, on places such as Reddit, Twitter, and 4Chan, shows that we are nowhere as civilized as we like to think we are.


I have two (opposing) thoughts on that

1. We don't always want to express all our opinions and thoughts among people we know, because doing so carries significant social cost. So we end up saying a disproportionate amount of those things anonymously online.

2. It used to be that nobody had to listen to the outcasts and the lonely. People we don't enjoy being around. But now they can participate in online forums as much as anyone.

In part, it's a good thing that these voices can be heard, and that they can find like-minded people to talk to. But a lot of them are outcasts because they're jerks, and online we do have to listen to them.


I'll push back on having to listen to them. I'm all for free speech, no matter how vile it is. I'm also all for self-controlled censorship, and tools to facilitate that. If you don't want to hear, say, racist bullshit, ignore it, and move on. If it becomes too noisy, create a filter, and move on. If it is still too noisy, it's probably a good sign that you're not fitting in well with the community.


Are we more civilized when there is no anonymous free speech, and people are abusive only in semi-private situations?


I think so. Actions toward others have a much larger societal impact than privately held beliefs.

For example, if someone has racist beliefs which do not affect his actions in any way, his effect on other people in society would be exactly the same as it would be if he did not have racist beliefs. In other words, beliefs that are not acted on or expressed have no effect on society.

There may be some social benefit to knowing who the racists are. But if they are anonymous, that benefit is probably lost. Their speech is protected by the first amendment either way, but Reddit is not required to enforce the constitution, so they can shut down the accounts if they want to.


I can't imagine a situation where someone has beliefs that do not affect his actions in any way.

I would agree that unexpressed values would propagate less.


> I can't imagine a situation where someone has beliefs that do not affect his actions in any way.

That was an extreme example used to illustrate the point I was making. Racism is also only an example; there are plenty of other beliefs that are bad for society. Personally, I suspect that sexism is probably far more prevalent on Reddit than racism, for example.

Social stigma can prevent one's beliefs from affecting one's actions, or at least reduce the effect of those beliefs on one's actions. If it is shameful to be perceived as a racist, racists will attempt to hide their racism by not acting in racist ways toward others, at least when other people are looking. This does not obviate the effects of unconscious biases or eliminate racism, but the net result for society is fewer overtly racist actions, and that's a good thing.

If racism becomes acceptable again, racists will feel freer to act on their beliefs, which will make life worse for the targets of their racism. So, there is a case to be made that if people repress their beliefs only partially, society is better off -- not as much better off as if certain people repressed their beliefs fully, but still better off.

The danger of allowing racism to flourish in online communities, especially ones as large as Reddit and Twitter, is that it re-normalizes racism and makes it seem acceptable to be a racist. It can be more than merely acceptable -- it can even be a contrarian badge of pride, justified as a rebellion against political correctness. (That kind of justification for it is reinforced when anti-harassment policies are not uniformly and consistently enforced, because it can make moderation appear to be arbitrary or politically motivated.) When that kind of thing happens, it makes society worse.


Well said.

I do not agree with giving up on freedom of speech because some people will say hurtful things. Every time I have read racism or seen racists on TV it has merely exposed how ill-informed, counter-productive and just plain wrong those ideas are, and has prompted myself and others to dispute those wrong ideas. There is also the slim chance that if the dummies are called out on their wrong ideas, they may end up changing them.

Anonymous freedom of speech is a slightly different case, but I still feel that all freedom of speech is valuable and needs to be preserved.


> If racism becomes acceptable again

Some would say that racism is still acceptable in the USA today. Just look at Trump. (Yes, not as openly as before, but it hasn't been eradicated.)

There's a saying that the USA went from thinking that racism was OK, to not knowing what racism is.


Now explain why Facebook has far worse racist garbage than Reddit (at least the parts that mainstream users see without digging for crap) proudly on display to family and friends and the public, Real Name and photos attached.


Lots of people are racist. That is a sufficient explanation.

However, I very strongly suspect that no matter what the level of racism on Facebook is, it would be far higher if people were able to post anonymously.


> However, I very strongly suspect that no matter what the level of racism on Facebook is, it would be far higher if people were able to post anonymously.

Counterexamples: HN and arstechnica.com

As I have previously hypothesised, a lot of good users (me included) seem to be scared away by real-names policies.

Pseudonyms allowed but linked to phone numbers or national IDs, combined with karma, seems like a workable middle ground to me.


There are probably many people who are sexually attracted to children but don't act on these feelings in any way.


Imagine that he never came into contact or communication with a person that he was racist against. E.g. a racist hermit.


If a racist hermit never interacts directly with anyone, sure that is harmless. But if she actually just racially isolates herself (which is pretty hard to do in many parts of America), then she still has influence over the institutions around her. If someone then wants to join her racist gardening club, then her racism ends up affecting things.


In that case, then who cares what his ideas are?

If a racist hermit is racist in the woods, does he make an offensive sound?


I find HN and arstechnica.com a lot more civilized than the comments section on the major local newspaper.

I have hypothesised that the reason is that without anonymity only

* (trolls with) fake accounts

* politically correct people and people who think they are pc

* and people who don't care about real life reputation anyway

post in any kind of controversial debate.

Normal, knowledgeable people don't want to lose the dream job because of something they commented on 5 years ago.


You missed:

* naive idealists who believe that truth and reason matter more than immediate public perception

This is subtly different from "people who don't care about real life reputation" in that they sincerely hope that despite any short-term damage to reputation, in the long run others (or at least some subset) will come to respect their honesty and rationality.

I'm probably naive and idealist, but at least on HN, I think this is a significant group. Although, am I reading correctly that you are considering HN to be "without anonymity"? In the current "persistent pseudonymous" system, what do you think constitutes a "fake account"?


> Although, am I reading correctly that you are considering HN to be "without anonymity"?

No. Sorry if unclear. I categorise HN accounts as pseudonymous with optional real life names.

> In the current "persistent pseudonymous" system, what do you think constitutes a "fake account"?

What would annoy me is real sockpuppet accounts, controlling more than one account and using it to

* upvote ones own comments and postings

* astroturf

* using it as a strawman to defame opponents (i.e. by running a sock puppet account that seemingly supports some cases while posting bad arguments for that case etc.)

* etc

Personally I have nothing against using multiple accounts as long as there is a reason (i.e. someone has an account with full name and another that they use when they want to post or comment something that they do not want to reflect back on their real life persona, family and workplace.)


Many local newspaper comment sections have people posting vile things under their real names, often linked to their facebook accounts.


That was exactly my point. Sorry that I was unclear.

Honest question since two people read my post that way, I obviously was unclear, but where?


I don't think it's necessarily anonymous free speech that is the problem. I hang out occasionally on Freenet, on FMS and Sone specifically. Freenet is very anonymous by design. It turns out it is a mostly pleasant and helpful community. I think one good reason for that is that both FMS (which is a bulletin board system) and Sone (which is more like Facebook posts) use a version of a "web of trust". People you don't want to hear get filtered out, and others can trust your trust list and filter them out as well. Some people only filter out obvious abuse, others filter out more. Frost (another bulletin board system without a web of trust), on the other hand, is a downright awful place to go most of the time.

I wonder how feasible it would be for Reddit to implement subreddit-based webs of trust for users, as opposed to moderation. People could filter others (or not) individually and not have to rely on someone else to try to filter for the whole group.
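
For what it's worth, here is a rough sketch in TypeScript of the kind of per-viewer trust propagation I have in mind. The names, scores and threshold are all made up for illustration; this is not how FMS/Sone actually implement their web of trust:

    // Propagate trust one hop through the people the viewer already trusts.
    // Scores run from -100 (blocked) to 100 (fully trusted).
    type Trust = Map<string, Map<string, number>>;

    function effectiveTrust(viewer: string, target: string, trustOf: Trust): number {
      const ratings = trustOf.get(viewer);
      if (ratings === undefined) return 0;
      const direct = ratings.get(target);
      if (direct !== undefined) return direct;      // your own rating always wins
      let weighted = 0, totalWeight = 0;
      for (const [friend, score] of ratings) {
        if (score <= 0) continue;                   // only positively-trusted peers vote
        const theirRating = trustOf.get(friend)?.get(target);
        if (theirRating === undefined) continue;
        weighted += score * theirRating;
        totalWeight += score;
      }
      return totalWeight === 0 ? 0 : weighted / totalWeight;
    }

    // Hide a comment when the viewer's effective trust in its author is too low.
    function isVisible(viewer: string, author: string, trustOf: Trust): boolean {
      return effectiveTrust(viewer, author, trustOf) > -50;
    }

The nice property is that every user gets their own view: nobody has to agree on a single moderation policy, and the site operators never have to pick a side.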


YouTube comments are what really does it for me.


That is where the real asshats are. I wonder how YouTube is dealing with this.


In short? They're not.


tip of me fidora, m'lady


I think this is a mistaken impression.

The problem, I think, is that we haven't figured out new norms and boundaries for online socialization, as we have for other forms of communication and other media. One of the things that makes online communication so potent is that it is not just easy but also near instantaneous with a global reach. But we have been slacking on building the tools to manage the problems inherent in that power. One of which is the concentration and distillation of harassment. A tiny group of people can generate a tremendous amount of harassment which, through the magic of modern communication technologies, automatically gets funneled to the receiver. Even, say, a million people worldwide is still a tiny fraction of all people in the world, let alone all people online (less than 0.1%), but if there were a million dedicated harassers online it would be a hellscape.

In reality it only takes a few assholes to spoil the "mood" and the "fun" of interacting online in public, whether the community has a few dozen members or hundreds of thousands. A minute but vocal, abusive, and provocative sub-group can easily change the entire tone of a conversation or change someone's experience using social media from good to bad. How many people sending you death threats and rape threats regularly would it take to sour your experience with social media? The real answer is: a very tiny number.

Right now that very tiny number of agents provocateurs are having an outsized impact on online communities and communication platforms, because they are leveraging the power of vitriol, hate, and targeted harassment. And also precisely because the platform builders (whether it's facebook, twitter, or reddit) have been far behind in building tooling to help deal with these problems, and because the social norms haven't caught up with the behavior. Going back to the '90s and even the '80s it was common for, say, usenet newsreaders to have tools built in for ignoring posts from people you wanted to ignore, and the modern tools for such things are barely at that level, if they exist at all, despite the problem being even more difficult to tackle. Additionally, there were a lot more moderated communities back then, whereas today the norm is unmoderated communities.

Just like the security problems of the early internet days we've largely been surviving based on luck and absence of dedicated attacks, and partly just due to ignorance of the worst that was actually happening. There are a lot of people who still don't think that harassment on the internet is a serious problem because they don't actually understand it all, having never experienced it in its most potent forms.

The worst takeaway from this would be deciding that most people are garbage and socializing with other human beings is best avoided (online or off). That's a classic "stranger danger" overreaction. We need better tools on various platforms. We need more people speaking out about the badness of harassment online and for people to stop parroting the stupid line "oh, it's just online, you can ignore it, it's not real" or defending targeted personal harassment as any sort of defensible "free speech".


> Going back to the '90s and even the '80s it was common for, say, usenet newsreaders to have tools built in for ignoring posts from people you wanted to ignore, and the modern tools for such things are barely at that level, if they exist at all, despite the problem being even more difficult to tackle.

I was a post-September usenet user, so apparently I never saw the glory days, but what passed for flamewars back then seems pretty tame in comparison to today. And I recall having much better (plonk) tools for ignoring stuff I didn't want to see.


> what passed for flamewars back then seems pretty tame in comparison to today

Well, until the Meowers came about. That led to what was quite possibly the biggest flamewar in Internet history, and it involved wholesale invasions of random newsgroups.

http://xahlee.info/Netiquette_dir/_/meow_wars.html


Don't know how I missed that! 1996-2002 or so was my usenet era. I guess I didn't spend all that much time on alt.


> the most tolerant communities attract people who are difficult to tolerate

This is a very insightful line. And it applies not just to internet communities, but to any real-life community as well.

> I suspect Twitter is having similar issues dealing with harassment, after letting it happen for so long.

> Don't just tack them on after several years, and don't enforce them inconsistently and arbitrarily as Reddit has done.

Twitter is well on the same path. The inconsistency is what makes many furious. When you ban harassment from one political end but allow it from the other, it doesn't paint the company as fighting harassment. It paints the company as fighting one political end.


Both Twitter's and Reddit's problem is that they are applying their rules very selectively.

Twitter, for example, bans political trolls and populists, but tolerates terror organizations and their supporters. Like, what the actual fuck...


It's because terror organizations tend to be quite civil on Twitter, whereas political trolls tend to be...trolls. The former isn't inimical to the proper functioning of the service, whereas the latter is.


To be fair, Twitter probably doesn't want them on the platform, but the US government has probably asked them to leave them alone.


Why would the US govt ask twitter to leave them alone? Genuinely curious.


>It will be difficult for Condé Nast to get its money's worth out of Reddit now.

Condé Nast hasn't owned Reddit for half a decade now. Advance Publications owns both of them, and Reddit was split off as a separate entity with its own board and shareholders, one of which is Advance.


Well then that will make it even more difficult!


I use Reddit heavily, and subscribe to a wide variety of different sized subs. AskReddit is the only one I'm aware of that's still remotely anything-goes. Every other sub is so heavily modded that I have no expectation of any given thread I start staying up more than a couple hours. It's going the way of Wikipedia - so policed, it's no longer worth the effort to contribute at all.


I have a thesis that is basically identical to yours about unmanaged expectations and community being the main root of the problem that reddit/twitter/etc are facing, which is part of the reason why we started Imzy https://www.imzy.com/imzy

We got some press early on that really told an untrue narrative of what we are trying to do, but we've made a ton of progress in a few months of our private beta and our communities and platform are really beginning to come together. The link I gave above is to our company community and if anyone wants to try Imzy out I will approve your request as soon as you make it.


Would you care to give a true narrative of what you're trying to do? You linked to an empty page where I can request an invite, and the "about us" page just says you're "rethinking" communities online. What is the nature of moderation and communication on the site?

I can glean a bit of info from news sites, but not very much if you say they're fundamentally misunderstanding you.


Crap sorry, I was offline all night being with my family and didn't see this soon enough.

In short, Imzy is attempting to find a way to align the company as best as we can with the communities. We recognize that there is no one-size-fits-all type of community, and trying to fit all communities into a simple, archaic message board doesn't give the communities much to work with. Compound this with our belief that an advertising business model is not well aligned with communities' best interests, and we believe that we can build a flexible commerce system into the platform so that communities can use it if and when they need it (for instance, to sell each other things, to pay for events they are putting on, to support the community leaders).

How do we do this? It's going to take a long time and require us to make our platform extendable by developers.

Right now, we are at the starting point where we have to build some initial communities and get them starting their evolution process. We have innovated a lot on our community leadership tools (and still are) and have put a very rudimentary payments platform in place. Our next phase will be to start innovating on the developer platform and the things communities are able to do on the platform.

You'll notice that nowhere in this description did I talk about free speech or harassment. That is because it isn't part of our core thesis. Our core thesis is giving communities online the attention we feel they have never been given in the right way. However, we recognize that if we are successful, we will have a lot of people on the platform and be faced with the same challenges that twitter/reddit/facebook/etc are faced with regarding harassment. So, we are focusing on it early because we feel we need to set the tone for the early communities so that as we grow they can help. There are a few things we don't allow on the site due to our past experiences: porn, hate speech, harassment, doxxing, etc. We realize there are a lot of gray areas here and so we are equally focused on building and scaling our community team early.

That ended up being longer than I thought it was going to be, but probably not as long as it should be. I hope it gives you at least a little better idea of what we are doing.


Relatively recent HN discussion of Imzy:

https://news.ycombinator.com/item?id=11556017


Self-promotion aside, you should disclose that you are ex-Reddit.


To be fair, kickme444 was pretty well-known as a reddit employee! (But explicit disclosure is good too.)


Some of us rarely ever look at the username behind (above) a comment. I personally just don't care, if the content is interesting.

And even if I did, there are just too many users across various discussion sites to keep all the pseudonyms and their comment history in one's head.


Reddit is owned by Advance Publications and some other investors (there was a round a few years ago http://www.redditblog.com/2014/09/fundraising-for-reddit.htm... ).

Advance also owns Conde Nast, but they restructured reddit out of Conde Nast in 2011.


They want to have their cake and eat it too.

I saw this so many times.

First they make it free-for-all, anything goes; then people come and use it, and then they start moderation/censorship to the max.

Wikipedia, Stackoverflow, Reddit, etc.

It's totally normal and okay for users to feel cheated, because they WERE cheated...


This is what made me angry, and why I left. I think things would have been better if they had openly said "Hey, we were free speech, but we're not anymore. Sorry, but we have to". It would have sucked, and I still probably would have left, but I would also have understood. Instead, they tried to keep holding onto both sides.


Part of the problem is that Reddit even more than Twitter is structured as a meta-community, and so any standards which are imposed will be external to a given sub-community. That means that they will be perceived as, and may sometimes be, social and political censorship.


I think this is a somewhat more recent (i.e. within the past 4 years) development on reddit. For a long time it seemed to me that the various sub-communities grew out of some shared sense of what it was to be a redditor (memes like the dumb "narwhal bacon" thing were expressions of that.) People were redditors first, and members of the various sub-communities second. But recently this has not been the case. People seem to want to be able to create their own gardens in which they can create their own "language of good and evil that their neighbor does not understand." People don't want to associate with reddit as a whole, only the parts they chose to incorporate into their little filter bubble.


I find myself blaming some of the default sub choices from within the past 4 years. The choices made there created a really frustrating global reddit user experience. It used to be that when you first logged in there were only a dozen subreddits people were looking at. In the last four years the default experience has grown to something like 50 subreddits, which in my opinion take a very curated and particular view of all that's available, one that just doesn't match up with everyone. To be a redditor, you didn't just have worldnews, pics, science, gaming and then whatever you were interested in; now it's 50 things plus what you are interested in.

So the 'identity' of what a redditor was changed dramatically from the start, and it's left us worse off, since a lot of the default subs aren't exposing what was good and are mostly mindless/bandwagon-y in operation. Not to mention that some of the default subreddits that got chosen were poisonous, and it changed the tone.

Here is a list of what changed: https://www.reddit.com/r/defaults/


Reddit's failure to attract different types of people is largely because of the way it uses default subreddits. Facebook and Twitter start users off with a blank page and ask about their interests first, and thus both platforms attract all types of people. When your average person goes to reddit.com and sees anti-religious stuff or pro-some-political-candidate material first thing, they assume that it's not the place for them and move on, without understanding that the experience can be customized by subscribing to specific subreddits, etc.


It is perfectly possible for communities with different conventions to co-exist peacefully. The problems only come when you "cross the streams" and I think many people who do do it deliberately, precisely for the controversy (attention, clicks) it will generate.


> much of Reddit's problems with its userbase boil down to a failure to manage.

Removed some words to make this statement more accurate.

Reddit itself doesn't manage the communities, and that's a problem because the Thought Police are now doing it for them.

The idea that people should be allowed to create and manage their own communities is nice in theory, but why should Xx_JoeSchmoe123_xX be in control of the largest League of Legends community? Just because he hit the 'Create Subreddit' button before anyone else? That makes no sense, but it's how all of Reddit is run.

Reddit probably thinks "This is great! We've got all these community managers (moderators) working for us and we don't even have to pay them!" But the quality of their moderation is often proportionate to their salary...


> Reddit itself doesn't manage the communities...

That used to be true. But they've been stepping in and trying to clean up speech they don't like. As soon as you take on the responsibility of cleaning up one group, you take on the responsibility for every group. And reddit just doesn't have the manpower to do it.


Nor do they have a consistent and rational policy to do it, which is why things aren't going very well.


It's an interesting point, I can imagine reddit being a platform for Gen Y politicians once subreddits are forced to elect their moderators.


No, I think the failure is to realize that you might have a huge userbase of assholes, but assholes still have money advertisers want.

If you want to build a tolerant community, you build it. You don't take an existing one and try to put the screws to it until it becomes what you want.

And if you want a diverse and female-inclusive culture, you build one; you don't do it by taking an existing group and cracking down on the cohesive elements they've developed.


It is only a certain kind of harassment that Twitter has a problem with: that against its friends. [1]

Celebrate cops being murdered, that's ok.

When Condé Nast employees express death wishes

https://i.sli.mg/XfKOEs.jpg

Or Sony Film stars incite mob violence

http://web.archive.org/web/20160721232327/https://twitter.co...

There is a significant silence.

[1] http://www.breitbart.com/tech/2016/07/20/twitters-history-ce...


>kind of people who need to be regulated the most.

It is kind of disturbing that ordinary people think their peers in society need to be 'regulated'. Only egomaniacs overestimate their own virtuosity and underestimate everyone else's.


Is it possible to generalize for the whole of reddit? Every reddit group sets a different level of expectations - for instance, reddit.com/r/askhistorians has quite strict guidelines, much stricter than most of reddit.


There are routine swathes of [deleted] posts in /r/askscience also. In my experience though, having heavy-handed moderators is certainly an exception on Reddit.


Or perhaps the problem is that Reddit did not grow into its current userbase (with whatever problems that base brings with it) by being an advertiser friendly platform, which is what its current owner wants to try and mold it into.


It's pretty clear that the Reddit corporation doesn't want Reddit to be an anything-goes, absolute free speech zone with no moderation or anti-harassment policies -- but that's what the site actually was for many years. Now, when the company cracks down, users think their freedoms are being curtailed.

The change that came with moderation was very dramatic. It was basically cultural whiplash. It also accelerated the replacement of intellectualism and aesthetic freedom with groupthink and the same kind of web forum small mindedness you had in the early 2000's.


But they very much did want it to be an anything-goes community when it started out. Anything that would raise their user counts. Now that the site is big and successful, they need to retroactively curtail the unsavory bits if they're to have any hope of selling the site for a big pay day.

I think it's important that an actual, "say what you want as long as it's not illegal" site exists. A lot of the polarization in this country is due to both sides isolating themselves from the other 100% of the way.

Such a site needs to exist somewhere. Now do I necessarily want that on Reddit or Twitter? I don't know. I'm extremely left-wing ('90s liberal; so pro free-speech over political correctness); and honestly Milo's tweets were appalling whenever some people I knew there would retweet him. I won't miss seeing a gay person defending a VP that supports conversion therapy.

But I can clearly see that there's a liberal slant going on in these moderation policies. And I can imagine being on a site where the opposite were true, and know that I wouldn't like that very much myself. For that matter, there's another bias when it comes to celebrity status. I would bet good money that there is absolutely nothing Trump could say that would get Twitter to ban his account or remove his mark, even though he's in the extreme right. He's exempt from the rules that nobodies like us have to follow.

What I'd rather see Reddit/Twitter working on are better tools to defend against harassment. I've had to write my own Greasemonkey scripts to filter out certain types of attacks, and I'm not even remotely relevant. I can't imagine how someone famous, let alone someone famous who can't code their own Javascript filters, can use these sites. And well, I think we saw the results of that lack of control over harassment with the Ghostbusters actress.

But these scripts aren't hard to make. Give people the ability to filter out tweets that contain certain phrases from people you don't follow. Give them the ability to temporarily block all new notifications from people they don't follow for a few days after someone sics their followers upon them. Etc, etc. All very easy stuff to make.
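
For the curious, the core of such a filter really is only a few lines. Here is a rough userscript-style sketch in TypeScript; the ".tweet" selector and the "data-you-follow" attribute are placeholders I made up, since the real markup differs and changes often, so a working script would need current selectors:

    // Hide posts from accounts you don't follow when they contain a blocked phrase.
    const blockedPhrases: string[] = ["example insult", "another phrase"];

    function scrub(): void {
      document.querySelectorAll<HTMLElement>(".tweet").forEach(post => {
        const fromFollowed = post.dataset.youFollow === "true"; // hypothetical marker
        const text = (post.textContent ?? "").toLowerCase();
        if (!fromFollowed && blockedPhrases.some(p => text.includes(p))) {
          post.style.display = "none"; // drop the offending post from view
        }
      });
    }

    // Re-run whenever new posts are injected into the timeline.
    new MutationObserver(scrub).observe(document.body, { childList: true, subtree: true });
    scrub();

The point isn't that this particular script is any good; it's that the platforms could ship something far better natively, with access to the real data instead of a scraped DOM.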


>It's pretty clear that the Reddit corporation doesn't want Reddit to be an anything-goes, absolute free speech zone with no moderation or anti-harassment policies

I think lots of webmasters actually believed in that but we all now realise that that simply doesn't work. I won't mention examples of lawless online communities but I'm sure all of us know of a few. In my experience they are all disasters.


I have a theory — to clean up a site, find and expel the most egregious offenders, like the people who engage in repeated name-calling ("You are a fucking asshole!") as opposed to constructive discussion. You can't police every user, but if you find and expel the worst offenders, I suspect it will have a significant effect on the quality of conversation.


You'd be very surprised. The most abrasive people in a community do a very good job of building a positive reputation amongst other "power users" such that banning them causes a ripple effect in the community that _always_ makes the moderators look like one-sided haters.


Sure, the moderators may end up looking one-sided, but it sends a clear signal as to what kind of behaviour is not tolerated. The other members can adjust to the new normal, and some will leave. Either way, you've cleaned up the site to some extent.


Name-calling is not even something I would have placed on a list of bad things users of Reddit do. Unfortunately, while that could be applied to HN, reddit is another world. Much of the freedom-of-speech culture still exists and, while not on the main page, there are a lot of communities dedicated to hate or to just fighting each other and instigating rage.


Name-calling was an example of bad behaviour you want to ban. I'm sure we can think of other, worse, types of behaviour. Whatever it is, find the most egregious offenders, and ban them.


I think that works only when you set those expectations up front. This is an apples-to-oranges comparison, but: reddit started with no filters. That was one of its main draws. Then it suddenly started applying filters, and banning "toxic" members. Now a lot of people are mad at them.

HN, afaik, has always had these rules in place. There's never been an expectation of free-speech here, and nobody minds, because they were presented up-front.


Here is a good time to repost a paragraph about why one should not trust Reddit, by Reddit's then CEO, Yishan Wong:

> I am continually astounded that people sort of trust corporations like they trust people. We can talk all day about how the current team is trustworthy and we're not in the business of screwing you, but I also have to say that you can never predict what happens. reddit could be subject to some kind of hostile takeover, or we go bankrupt (Please buy reddit gold) and our assets are sold to some creditor. The owners of corporations can change - look what happened to MySQL, who sold to Sun Microsystems, who they trusted to support its open source ethos - and then Sun failed and now it's all owned by Oracle. Or LiveJournal, which was very user-loyal but then sold itself to SixApart (still kinda loyal) which failed and then was bought by some Russian company. I am working hard to make sure that reddit is successful on its own and can protect its values and do right by its users but please, you should protect yourselves by being prudent. The terms of our User Agreement are written to be broad enough to give us flexibility because we don't know what mediums reddit may evolve on to, and they are sufficiently standard in the legal world in that way so that we can leverage legal precedents to protect our rights, but much of what happens in practice depends on the intentions of the parties involved.

> The User Agreement is intended to protect us by outlining what rights we claim. But it cannot protect you - you must protect yourself, by acting wisely.


And this is why we really need to put more effort into distributed, free software social networks. Having a single point of failure that will eventually be bought by $bigCo results in many issues with preservation of culture and history in online communities.


This is true for anything. People hit their heads somewhere and their character changes.


isn't that more "don't trust any company"? what he says applies to way more than reddit


Well, yes. His main point is to not trust reddit because it is a company.


Reddit is a crossroads: It's an intersection between the cultures of 4chan (which is itself an intersection of Japanese and American sensibilities), of usenet, of internet forums, and of a dozen other cultures besides.

None of these cultures handle censorship well. They all originated in an environment where, to some extent, you could say whatever the hell you like.

Many of Reddit's early users came from these cultures, and they were responsible for the early culture of the site.

And now, Reddit is desperately trying to adapt itself and attract people from Twitter, Facebook, and Tumblr, whose cultures are radically different: perhaps to some degree less toxic than pre-September usenet, whilst also being more toxic in other ways. I don't know how that's supposed to work.

The point is, a culture that previously only dealt with unacceptability in relative terms - this is unacceptable in this context - is now dealing with absolute unacceptability - this is not acceptable, ever. This isn't a change that people will likely adapt to well. This is prompting a migration to sites like Voat, and others.

The problem is, Reddit is introducing incredibly inconsistent censorship to a site where the concept of censorship is anathema. Bans, yes, people get punished for breaking the rules. But having your posts quietly vanish without warning?

No wonder the userbase is pissed.

Unless I got it completely wrong, which is possible.


> Bans, yes, people get punished for breaking the rules. But having your posts quietly vanish without warning?

This comment particularly stuck out to me.

Very few other communication platforms of Reddit's ilk will go through the process of shadowbanning users - that is, the user retains full functionality of the site, but their comments are visible only to themselves, not to other users. To an unknowing user, it appears no action has been taken on their account.

It is a shockingly effective means of silencing dissenters or those who disagree with the majority, and this punishment has extended far beyond those who speak abusively or offensively. The nefarious part is that it wastes the user's time as well.
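The mechanism itself is tiny. A minimal sketch of the visibility check, with hypothetical field names rather than Reddit's actual schema, might look like this:

    # Toy shadowban filter: a shadowbanned author still sees their own
    # comments, so nothing appears to have happened; everyone else
    # silently never sees them. Field names are hypothetical.
    def visible_comments(comments, viewer):
        return [
            c for c in comments
            if not c["author_shadowbanned"] or c["author"] == viewer
        ]

    comments = [
        {"author": "alice", "author_shadowbanned": False, "body": "hello"},
        {"author": "bob",   "author_shadowbanned": True,  "body": "spam spam"},
    ]
    print([c["body"] for c in visible_comments(comments, "bob")])    # ['hello', 'spam spam']
    print([c["body"] for c in visible_comments(comments, "carol")])  # ['hello']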


As someone who's run a (small) forum, I completely understand the appeal of shadowbanning. It's incredibly effective if your goal is to make a community a usable place without spending most of your waking hours dealing with trolls and shitbags.

A single person can eat up hours of your time if you attempt to reason with them. They'll create new accounts, they'll use accounts they created a long time ago against this eventuality, they'll make appeals to you and to other users, and they will stir up as much shit as possible.

I've had a person literally call me on the phone to complain that their account was banned.

Shadowbanning means that none of this happens. I check a box, and the collective community can breathe easily, and I can actually sleep. It's an incredible force multiplier.


If only game makers would build this kind of system into their Anti-cheats, make the cheater think for as long as possible they are ruining other people's day only to find out that the last hour has been against AI with canned outrage responses from the AI.
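For what it's worth, the matchmaking half of that idea is trivial to sketch; the hard part is detection. Everything below (flags, bot names, canned lines) is made up for illustration:

    # Route flagged cheaters into a lobby of bots with canned complaints,
    # so they believe they're still ruining real players' games.
    import random

    CANNED_OUTRAGE = ["wow, reported", "how are you even hitting that", "gg, unplayable"]

    def find_match(player, flagged_cheaters, normal_queue):
        if player in flagged_cheaters:
            bots = ["bot_%d" % i for i in range(9)]
            return bots, random.choice(CANNED_OUTRAGE)
        return normal_queue[:9], None

    print(find_match("aimbot_andy", {"aimbot_andy"}, ["p1", "p2", "p3"]))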


I think most game makers just put cheaters into a cheater-only pool of players.


I like this idea; let's see who has the best and biggest cheats. A comedian once suggested we keep the regular Olympics as we have them today, and also hold a no-holds-barred Cheaters' Olympics, where competitors can take any substance they like, to see how far the human body can really be pushed.

I wonder how long until we have to have a regular and an augmented Olympics.


That sounds like Quake. It isn't quite, but Quake is the closest thing.

Quake's metagame has evolved around extreme dexterity, complicated scripting, and an obsessive desire to use every trick in the book to push yourself beyond the game's intended limits. This is why things like the bunnyhop, the rocketjump, and wallrunning are not only accepted, but expected.


I don't know of any game makers that do this.

They probably should, like how Dota 2 will put players who abandon games often into games with other leavers, but most popular games go through "ban waves" to get rid of groups of cheaters at once (Valve has done this many times with CS: GO for example).


> I don't know of any game makers that do this.

Well, apparently Rockstar did this for Max Payne 3 in 2012, and apparently it also applies to GTA-V

http://www.rockstargames.com/newswire/article/35441/taking-a...

http://www.gta5tv.com/gta-v-cheaters-pool-details/

http://kotaku.com/gta-player-says-he-hired-cheater-to-rescue...


Huh, I had no idea GTA5 did that. Thanks for the info, I really would love to see more game developers implement something like this.


Some minecraft servers used to do this. The "griefer" would see blocks being destroyed and placed, but their actions would have no effect on the server. Eventually they would get bored and leave on their own. It was hilarious too.

I think current anti-cheat tools suck. Cheaters are usually obvious, sometimes blatantly flying around or performing inhuman actions. Simple machine learning should be able to detect cheats with high accuracy.
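"Simple machine learning" may be overselling it, but even basic outlier statistics would catch the blatant cases. A sketch with made-up numbers (real anti-cheat would use many features from server-side telemetry):

    # Flag players whose headshot ratio sits far outside a baseline of
    # players believed to be clean. All numbers are illustrative.
    from statistics import mean, stdev

    baseline = [0.21, 0.18, 0.25, 0.19, 0.22, 0.20, 0.24, 0.17]
    mu, sigma = mean(baseline), stdev(baseline)

    players = {"p1": 0.23, "p2": 0.19, "blatant_flyer": 0.97}
    suspects = [name for name, r in players.items() if (r - mu) / sigma > 4]
    print(suspects)  # ['blatant_flyer']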


yes, but then we get an arms race, and that ends with all of the best humans who aren't cheating getting banned as well.


The admins of Reddit have said over and over again that shadowbanning is for spammers, and if non-spammers are getting shadowbanned, it's due to a bug. It was never and is not currently used to silence humans, only robots.


right... /sarcasm


Please follow HN rules when commenting.


I fail to see where you find issue with my comment in reference to the rules.

If you thought I was making a contentless post, sure, I can fix that: plenty of people who weren't spammers have been shadowbanned. Just look it up.


Anyone who is currently shadowbanned and not as a result of bot-spamming can request a removal of the shadowban from the admins and they will happily comply.

The fact that people have been shadowbanned is not evidence that it was intentional.


Okay, so you think I'm wrong. That's fine. But I didn't violate HN rules. I read them twice, just to check.

And while I freely admit that I might be wrong, in this case, it doesn't actually matter. Even if shadowbans are only given to spammers, that's not what the reddit community thinks is happening, leading to the current social climate on reddit.


> that's not what the reddit community thinks is happening, leading to the current social climate on reddit.

I... don't care? I guess you're making some kind of assumption that this conversation needs to know what you think the climate of Reddit's community is. I'd ask "Why" but, I care even less about that.

Shadow-banning is for spammers, full stop. I don't care if you think other people think otherwise.


>I guess you're making some kind of assumption that this conversation needs to know what you think the climate of Reddit's community is. I'd ask "Why" but, I care even less about that.

Why wouldn't I make that assumption? Go read the OP. One of the main points is that shadowbanning, or the belief in it, creates more distrust between reddit's admins and users.


I'm sorry, but the salient point here is that shadowbanning isn't what you're claiming. End of story. I don't know how many ways to make this clearer.


You've made it plenty clear. In turn, I made it clear that I was dubious of your claim, and that it didn't really matter to the point I was making.


It's not a claim, it's a fact. This isn't a matter of opinion, this is objectively true.

https://www.reddit.com/r/announcements/comments/3sbrro/accou...


Well, one thing that you may have failed to notice is that this is relatively recent: less than a year ago, shadowbans were a common occurrence on Reddit. Note how it said that suspensions would replace shadowbanning.

And it's a claim until you give evidence, which until now you hadn't.


The problem is that reddit is trying to be both a neutral platform and a cohesive community at the same time. No one gets mad at phpBB when someone deploys a forum with an objectionable theme because it's understood that they had no involvement, they just provided the code for anyone and everyone to use in accordance with a FOSS license. reddit tried to do that with hosted subreddits, but it also tried to cross-pollinate and make one big reddit family and see everyone on reddit.com as "redditors". That results in a lot of infighting and resentment (/r/the_donald v. /r/enoughtrumpspam, /r/atheism v. /r/christianity, etc.), not to mention some doxxing, harassment, brigading, and invasions.

It's tense in the real world when people with diametrically opposed worldviews are forced to mingle, but it can sometimes go OK because the human aspect tempers it. Occasionally, with especially open-minded participants, a friendship can be kindled. That doesn't seem to happen at all when these people are not put in a room together, but on a message board, especially an anonymous message board.

reddit didn't know whether it wanted to be a warm fuzzy community, for which common ideals and values are foundational, or whether it wanted to be an agnostic, neutral, unfeeling platform provider. IMO that's responsible for a big part of the culture clash that we see on reddit, and it's left them in the awkward position discussed here.


/r/the_donald had the "sheriff star" still displayed several days after it hit the media.

It's such thinly veiled hate speech. The tone of the denials hints to those who are anti-Semitic that this will be their platform from which to indulge in their hatred.

Even a decade ago, no large corporate forum would put up with this sort of liability. It's crazy.


Reddit's content curation has come at a time when social media writ large (Facebook, Twitter) has become linked into State Department and DoD programs. Counter-intelligence objectives are fought on the 'private property' of social media servers that host the content of individuals. Fighting the 'War of Ideas' in the 'cognitive domain of warfare', the effort to starve unwanted ideas for a place to roost and feed others to maturity is certainly useful, but it comes at a cost.

There is some value in ungoverned spaces, where advertisements, political astroturfing, politicized content curation ("no 'RT' allowed, but we'll allow VoA and Sky") play a secondary role to the contributions of individuals.

The internet was supposed to be an ungoverned space - a 'piazza' or 'forum' - but when it wasn't, and when the 'Web' wasn't either, social media was supposed to fill this gap. Behind the cry of those protesting the takedown of 'revenge porn' and 'fat hate' postings, I hear the more sober voice that acknowledges that there's one less safe and free place for expression - as unpopular as some of it may be.


I too long for the days of the ARPAnet, when the Internet was not associated with DoD programs.


Well, I've read that the most Reddit-addicted city is known for hosting a huge air force base, which implies an intelligence presence.

So yes, it's "ungoverned", but it's still a goldmine for information study. I guess /r/syriancivilwar must be tracked heavily.


>There is some value in ungoverned spaces, where advertisements, political astroturfing, politicized content curation ("no 'RT' allowed, but we'll allow VoA and Sky") play a secondary role to the contributions of individuals.

All those things are daily occurrences on Reddit.


This says what I have spent the last few years of my life trying to say.


Why does Reddit have to become a media empire?

The formula for doing that is pretty well-known by this point:

We'll see a ban on throwaway accounts and a push for real names, then a ban on third party URL shorteners, then interruption ads, and finally some sort of paywall.

Reddit is a useful piece of internet infrastructure, and I'd be pleased if it would stay that way. It doesn't need to become its own media empire with its own Rupert Murdoch, etc.

Some things that could be improved:

- opt-in home pages that are tailored at specific audiences. The standard one is pretty low quality.

- more detection/policing of voting rings and vote fraud in general.


> Why does Reddit have to become a media empire?

Because investors have poured a ton of money into it expecting it to be a media empire.


This is what I detect in Reddit's new ideas: the fear that the existing site is good enough for most people and its population has therefore stabilised... which would be pretty bad for certain kinds of investor looking for a big one-off return instead of a consistent dividend.


Oh god. There are no "investors". Reddit is owned by Conde Nast's parent company. The only other shareholders are employees.


Wrong.

Reddit has recently received substantial outside investments: https://www.theguardian.com/technology/2014/oct/01/reddit-se...


I stand corrected.


URL shorteners have no value except to hide spam links and add someone else's ad interstitials. I should hope they'd be banned!


> opt-in home pages that are tailored at specific audiences. The standard one is pretty low quality.

How is that distinct from multireddits?

> more detection/policing of voting rings and vote fraud in general

One thing that'd help with this is better mod tooling for detecting when it's happening on a reddit you mod.
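As a rough idea of the kind of signal such tooling could surface (vote data is only visible to admins in reality, and the structures here are hypothetical): flag accounts that upvote an implausibly high fraction of one submitter's posts.

    # Toy vote-ring heuristic: for a given submitter, find accounts that
    # upvoted most of their recent posts. Data is made up for illustration.
    from collections import Counter

    suspect_posts = ["post1", "post2", "post3", "post4", "post5"]
    upvoters = {  # post -> accounts that upvoted it
        "post1": {"a", "b", "c"},
        "post2": {"a", "b", "d"},
        "post3": {"a", "b"},
        "post4": {"a", "b", "e"},
        "post5": {"a", "b"},
    }

    votes_per_account = Counter(acct for voters in upvoters.values() for acct in voters)
    ring = sorted(acct for acct, n in votes_per_account.items()
                  if n / len(suspect_posts) >= 0.8)
    print(ring)  # ['a', 'b']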


it'd probably have multireddits underneath, but multireddits don't currently play a part in onboarding

as far as detection tools for mods, https://www.reddit.com/r/ModSupport/comments/4tpla8/_/d5j7uo...


Yeah, I read that comment. I'm hoping they can give us at least some form of tooling around this, though. The inability to do even basic things as a mod seriously sucks.


>then a ban on third party URL shorteners

aren't they banned already?

from the reddiquette[1]:

>please don't use link shorteners to post your content

[1] https://www.reddit.com/wiki/reddiquette


I really wish, however, that Reddit would add HN-style about boxes.


Reddit, from a business perspective, baffles me. During the Yishan Wong/Ellen Pao era, we had Reddit-Made and Reddit TV, both of which bombed. Under Alexis Ohanian, we had Upvoted and Formative, which, as the article notes, were killed silently.

Reddit released a native app and an image host years too late. (I just checked the data and it is not killing Imgur: Reddit image usage in the top image subreddits was 18% at the beginning of June; today it is 25%.)

The biggest fundamental change Reddit has made in the time since is...making self-posts count for karma. And tracking outbound links.

It really shouldn't be impossible to have a successful business with hundreds of millions of users. Especially with the wealth of data available to Reddit.


> I just checked the data and it is not killing Imgur: Reddit image usage was 18% in the top image subreddits at beginning of June, today it is 25%

that seems like a pretty significant rate-of-uptake to me - why would you say it's not killing imgur?


Here's the plot of the Imgur and Reddit pics market share on /r/pics and /r/gifs: https://docs.google.com/spreadsheets/d/1cgoFF6njlyv1W0DQ3IF5...

There is growth in Reddit Pics and decline in Imgur, but it'll be a few months before the lines intersect. A far cry from the Imgur killer everyone expected.
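If you want to eyeball the crossing point, a two-point linear extrapolation is enough. The Reddit numbers below come from the parent comments; the Imgur figures are illustrative placeholders, not taken from the spreadsheet:

    # Extrapolate where two linear trends cross, given two samples of each.
    # Week 0 = start of June; Reddit went 18% -> 25% over ~7 weeks.
    def fit_line(p0, p1):
        (x0, y0), (x1, y1) = p0, p1
        m = (y1 - y0) / (x1 - x0)
        return m, y0 - m * x0          # slope, intercept

    m_r, b_r = fit_line((0, 18.0), (7, 25.0))   # Reddit-hosted images
    m_i, b_i = fit_line((0, 74.0), (7, 68.0))   # Imgur (placeholder numbers)

    cross = (b_i - b_r) / (m_r - m_i)
    print("lines cross around week %.0f" % cross)   # ~week 30, i.e. months away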


I think that's a very good rate of growth for a new service. The scaling challenges alone are not obvious, and slowly taking market share is the safer thing to do. If I were an investor seeing these growth rates, I'd be really happy, as long as it keeps up and only plateaus when the goal is reached.


just how fast do you expect these things to happen? overtaking the overwhelmingly dominant reddit image sharing host in a matter of a few months sounds pretty damn dramatic to me.


I would love to "fail" by steadily increasing growth, becoming #1 in a ~year.


It's not apples-to-apples to traditional startup analogies when there is a giant UPLOAD IMAGE button on the Reddit submission page.


Unless at some point in the near future you turn off the imgur spout.


Not sure how this proves your point. Reddit is about to overtake Imgur...


I wonder how much of not evolving the site is due to legacy code. A few years ago I briefly read some of their code, since it is open source, and found it difficult to understand due to a lack of comments and descriptive function and variable names. And it is all Python. So basically any change to the code can be catastrophic because you have no guarantees everything will still work.


I tried doing some work with their code base around 5 or 6 years ago and at that time it basically didn't work out of the box and had no tests. Or at least their released code didn't.

Some of the problems were actual bugs that seemed odd in a production code base, and others were because they had stripped everything relating to some paid services they used for search or indexing or stuff that I don't remember.

In general it was a huge PITA to do anything with, compared to what the leadership was hoping. (They'd been hoping it would be like WordPress in terms of the investment required.)


To have a successful business, there must be an underlying assumption that one of your goals is actually to become a successful business. That would seem to be self-evident, but in the discussions I have seen regarding reddit ever turning a profit, their leadership says it is not a priority. I'm not sure why anyone would have expectations of business success in such a scenario.


It's not. Reddit could easily sell targeted ads based on subreddits (e.g. inking a deal with Uniqlo to advertise on /r/malefashionadvice or /r/femalefashionadvice).

It chooses to pursue other monetization means to preserve the Reddit experience.

We'll see if it's able to stumble onto another sustainable business model.


The problem is most high quality advertisers don't want to have their brand promoted along side user generated content. So while Reddit has a ton of traffic, much of that traffic is worthless to the most desirable advertisers.


Isn't that exactly Facebook's business model?


Sort of. Facebook also pours money into making its platform family-friendly through moderation teams, auto-detection of obscene/inappropriate content, etc. Not to mention all the analytics and targeting capabilities of their advertising programs. Being a household name, they're also so huge that it's impossible to ignore - grandma's not on Reddit but there's a pretty good chance she's on Facebook.


Exactly the same problem as Tumblr.


> It's not. Reddit could easily sell targeted ads based on subreddits

Except Redditors are explicitly anti-advertising. You can't have both - the current community/culture and billions in revenue.


Ad-haters are quite capable of running adblockers, and they aren't all of Reddit. Most people tolerate tasteful ads.


True, but you can't scale tasteful ads.


> It's not. Reddit could easily sell targeted ads based on subreddits (e.g. inking a deal with Uniqlo to advertise on /r/malefashionadvice or /r/femalefashionadvice).

I think they would love to do that, but for whatever reason Uniqlo and others won't buy in. Maybe someone will correct me, but I think they've been trying to sell more ads for many years with little success.


I could see ads if used sparingly and subtly. I applaud their effort to find alternate ways to finance.



If the aim was to beat Imgur, then it seems strange that the 'upload image to reddit' feature is only available on certain subreddits.


Pao is thoroughly an MBA type. Business schools teach that the way to wealth is to start a startup and have it acquired. Or, failing that, work your way into a nascent startup and get bought out. Every post from her that I saw simply oozed business-school sleaze. Pao's discourse was more appropriate for the boardroom than for the embodiment of alt-media that is Reddit. Gratingly so, and the same goes for Yishan, to a lesser extent. I honestly think they both were simply too "corporate" to succeed in an environment that resents big interests.

There simply hasn't been someone with vision and the skills to deliver at the helm.

Frankly I think it's because anyone competent is wise enough not to touch Reddit with a 10-foot pole.


I think Yishan Wong goes in the completely opposite direction, which I think is the better one, BTW. For example, I really wish the restrictions could be reduced or removed so that board members (like @pmarca) could tweet more about public companies.


I like reddit. I don't really care so much about the frontpage. I like other subreddits where discussion is central (bestof, subredditdrama, changemyview, self, ask<insert-subject>), or content subreddit (games, wallpapers, military, photos). The default subreddits feel like google news.

What's important is the users and how there is room for them to exchange both ways, unlike standard media.

There are also many people watching for bias, whether it comes from moderation, brigading, corporate interests, etc. You will often read posts from an actual professional in a field explaining something to you, and it is often enlightening (granted, I would not trust reddit for a decision on which my own existence depends).

Generally, reddit works because the users can see and feel that people are exchanging, talking, sharing, reacting. It's "alive". Even Facebook cannot really pretend to be that lively a place, that "bazaar".

What must be really tough is managing that many teams of moderators. That must be a nightmare, but to me it seems vital. Fortunately it seems that they will always find people for that, because their subreddit revolves around something they like, and they will often do a good job (it seems) because they want to promote that hobby, not because it will directly benefit them financially (for example, the moderators of askhistorians).


> Fortunately it seems that they will always find people for that, because their subreddit revolves around something they like, and they will often do a good job (it seems) because they want to promote that hobby, not because it will directly benefit them financially (for example, the moderators of askhistorians).

Yup. I'm one of the mods for /r/AskEngineers. I want to promote engineering in general: designing things, fixing things, learning how things work. I'm glad to do this, and to encourage good and useful discussions.

I especially like when we've got questions from engineers who are working outside their own discipline, and just need a little nudge in the right direction. Getting just a paragraph from an expert can save days or weeks of effort.


The human race is full of horrible people, so any website that accepts user contributions will attract contributions from horrible people. As a service owner you have a decision: either you say that everyone's opinion is valid, horrible and all, or you say no, these are the rules around what you can post, and anything outside those boundaries is subject to removal.

What you absolutely should _not_ do is build a brand around being in the first category and then transition to the second. Especially if all your content is user contributed.

Of course, no one thinks of themselves or their in-group as horrible. You could substitute horrible for flawed if it makes you feel better.


>Of course, no one thinks of themselves or their in-group as horrible. You could substitute horrible for flawed if it makes you feel better.

If everyone is flawed, then what's the point of having an arbitrarily selected group of flawed people censor all the other flawed people? That logic seems flawed, unless it is driven purely by monetary interest.

Moreover, deeming people "horrible" is completely subjective. All of the great pioneers of human rights were considered "horrible people" by the majority.

>Like a boil that can never be cured so long as it is covered up but must be opened with all its ugliness to the natural medicines of air and light, injustice must be exposed, with all the tension its exposure creates, to the light of human conscience and the air of national opinion before it can be cured. But though I was initially disappointed at being categorized as an extremist, as I continued to think about the matter I gradually gained a measure of satisfaction from the label. Was not Jesus an extremist for love: "Love your enemies, bless them that curse you, do good to them that hate you, and pray for them which despitefully use you, and persecute you." Was not Amos an extremist for justice: "Let justice roll down like waters and righteousness like an ever flowing stream." Was not Paul an extremist for the Christian gospel: "I bear in my body the marks of the Lord Jesus." Was not Martin Luther an extremist: "Here I stand; I cannot do otherwise, so help me God." And John Bunyan: "I will stay in jail to the end of my days before I make a butchery of my conscience." And Abraham Lincoln: "This nation cannot survive half slave and half free." And Thomas Jefferson: "We hold these truths to be self evident, that all men are created equal . . ." So the question is not whether we will be extremists, but what kind of extremists we will be. Will we be extremists for hate or for love? Will we be extremists for the preservation of injustice or for the extension of justice?

-Dr. King, Letter From a Birmingham Jail


> If everyone is flawed, then what's the point of having an arbitrarily selected group of flawed people censor all the other flawed people? That logic seems flawed, unless it is driven purely by monetary interest.

It seems reasonable that the people who put the time and effort into creating and maintaining a discussion site are perfectly entitled to pick and choose whatever type of content they see fit, no matter how arbitrary their rules.

If their rules are too harsh or unpopular, people will just go elsewhere.


I didn't realize drinking on the job was a thing. I've had the odd company party with beer in the late afternoon, but that's perhaps twice a year.

I'm not a teetotaler by any stretch of the imagination, but drinking at work seems counterproductive.


We have a keg at work. It's used enough to not go bad, but there's never been an issue with it. People will have a beer on a Friday or during a lunch on a really busy day. I'm not a beer drinker and bring in cider for those times.

This probably goes without saying, but alcohol isn't the problem. I imagine problem people and problem cultures often like to involve alcohol, so there's a correlation.


We have beer in the fridge and a keg at work. Sometimes it's nice to have a beer at the end of the day, but after working all day I usually just want to go home.


Yeah, I can see that. Finish a big project and have a beer with your team. I have a hard time seeing that as a daily or even weekly thing.


I'd like to give a counter-perspective, since those who drink at work are often lumped in as "brogrammers", and I don't feel like the term encapsulates the environment I'm seeking at all. I have a group of mixed engineer/PM peers, accumulated from multiple past jobs, who perhaps once a month after work on a Friday go to a bar for happy hour and just shoot the shit about work. We have non-drinkers there (we'd gladly have more, but by no planning of my own many of my coworkers happen to drink) and I've never heard any murmur of pressure to participate, beyond being a grumpy engineer :)

Sometimes, as the parent said, this happens out of the work fridge, when going to the bar isn't feasible, and it's nice to take 30 minutes or so out of a long afternoon to be slightly less heads-down. As said above, the fact that beer is involved is more because it's a common source of enjoyment than because of any intrinsic tie to the hanging out. Coffee, video games, etc. are all other ways of doing this, and I think if the culture is natural and very laid back you can avoid the peer pressure that naysayers like to bring up.

To your earlier point about it being counterproductive: by 4 on a Friday, I am so far from my peak of engineering productivity that a casual chat that spawns some interesting conversations is probably far more work-beneficial than banging my head further on the same thing I've been doing for the prior 79 hours of the week, if I'm being honest with my workflow.

Just my thoughts. The stated lack of pressure comes from a recognition that these are just that, MY thoughts; I absolutely believe that this pattern doesn't work for everyone, but after so long of seeing the anti-drinking sentiment grow I wanted to chime in with a positive spin; in as mentally intensive a field as ours, a chance to unwind is so valuable to me.


> [we...] go to a bar for happy hour and just shoot the shit about work.

Yeah, that makes perfect sense, I do that myself. Absolutely agree with the 4-on-Friday point as well. I like drinking.

It's just the at work part that makes me nervous. I see the utility in not having to move to a different location, and how that is more inclusive to everyone. I think it takes a very light touch. Because it's at work people might feel more obligated to participate - even if they don't claim any feelings of obligation. Because people are drinking, they might be a little less tactful than normal. These two can compound in ways that aren't great for everyone.

You and your team sound like smart responsible adults. And i'm sure organizations that approach it with your level of delicacy will do great.


It is definitely much easier in a small company. We hire mature adults and treat people as mature adults. Drink/don't drink, play a chess game or not, work at home or not, stay and play some games after work or go pick up your kids, we do not care as long as the work gets done.

All people are different and value different things. If you hire responsible adults, you can give a lot of leeway with how the work gets done.


Something that often goes unmentioned is that there's a huge difference between having a pint over lunch and getting trashed.

A well-rested person with a little beer in them is probably going to be less incompetent than someone who was up all night with insomnia.

I'm intolerant to caffeine, and if I drank the same amount of coffee some folks drink, it would leave me a juddering wreck.


I worked at a software company that had whiskey afternoons around 4pm on Thursdays. It was good for getting people across departments to talk to each other, and helped bring out some introverts. I wondered if it had negative gender connotations, as guys were much more likely than women to partake.


Much more common in the creative industries.

In advertising, for example, it's not uncommon for creative teams to head off to the pub for a brainstorm in the afternoon if it's a Thursday or Friday.


Hell, is having a beer with lunch that big of an issue?


At a previous startup, I would regularly work past 7 or 8pm.

At that point in the day, I definitely liked being able to crack open a cold cider. It's not like I was getting drunk, and I would often continue to code for a few more hours.


For medium-to-large organizations, lack of communication is the biggest problem - especially no-one being willing to say anything negative. Drinking cultures can greatly increase productivity, IME.


Drinking at work while trying to work is counterproductive. Drinking a beer after work isn't, and it's an okay way to get out of the silos one normally works in.


My first job, many years ago now, was at a place where the job of the junior (me) included going to the bottlo up the street and bringing back drinks, which often started at 9am.

Everyone from the CEO down was totally sloshed by afternoon, every day. I would absolutely hate to work in that environment, but it's definitely "a thing".


When I used to visit the Yelp offices in SF, and a few other high profile tech outfits, they had kegs in the office.


It's bizarre that a company seems to be struggling to administer staff diversity while being uncertain of its own medium-term success.

There's no point in having your quota of "people of colour", as the article puts it, if the business model is unsustainable and leads to people being fired anyway.

Why not focus on creating a successful company first, and then worry about things that carry an administrative and management overhead?


It goes to what kind of company and site you want to have. If all you have is 20-something tech bros then that's the kind of company and site you will have.

If you want to appeal beyond that, you may want to broaden your culture.

Sometimes the founding cultural DNA is too strong, or attempts to change the culture fall short for a variety of reasons.


I don't think hiring women or blacks is going to do much to the community.


No, but if your monoculture of tech bros has not had to deal with half the internet shit that women and people of color do (or have, for various reasons, not been deeply impacted by it), then what you're going to get is a community where women and people of color are driven away.

I've never been stalked on the internet. I've never received threatening phone calls. I've never been personally harassed, or physically threatened. If by some happenstance, one or two people did any of the above, I'd probably laugh it off.

A company full of people like me is unlikely to consider those use cases as seriously as they should.


Luckily a company full of people like you will also need to have experienced marketing and legal staff, who will know that it would be unwise to laugh off such user issues. Your marketing team doesn't need to be diverse to know this.


Having a successful company that appeals to a broad range of users is related to having a diverse staff.

Reddit has serious problems with blatant racism, sexism, and harassment on many of the larger subreddits. Until they start taking these issues seriously, they will have a hard time expanding their user base to include more women and minorities.


I hear that sentiment echoed a lot, but is there any hard evidence proving that a diverse staff significantly increases the value to the diverse demographics they themselves represent? WhatsApp is incredibly popular in South America and Africa; were there a proportionate number of South Americans and Africans on the WhatsApp team of ~18 people? Genuine question, because I hear this preached like the bible whenever diversity in tech comes up.


Yeah, it's made-up bullshit which sounds like it makes sense. Enforced diversity has only been a thing in the past few years, so you only have to think of... almost any mass-market product from before then to know it's untrue.

It's funny to me that someone can say "having a diverse userbase is related to having a diverse workforce" with a straight face, while half the shit in their/everyone's home is Chinese and Japanese.


That explains why product localization is totally not a thing! Yet, ah, it is...


You might be confused from too much HN, but not every product in the world is a web app or 'box of the month' subscription.


The article seems to clumsily imply that the firing/resignation of a handful of employees is evidence of some conspiracy at reddit, yet people resign and get fired for all kinds of different reasons.

The tone of the piece makes me feel like it wouldn't even have been written if the employees who left had been 'tech bros'


> There's no point in having your quota of "people of colour", as the article puts it, if the business model is unsustainable and leads to people being fired anyway.

Your assumption is that people of color have nothing to bring to the table; that white people are better suited to helping create sustainable business models. The label I would use for this belief in the absence of reliable evidence is "white supremacy", although I understand that's a difficult word for many to swallow. You can call it "mild white supremacy" or "subconscious white supremacy" if that's a little more palatable. I don't intend it as a slur, although I know it will feel that way to some. I am using the term because I believe it is the most accurate term for describing the belief.


> Your assumption is that people of color have nothing to bring to the table

I didn't say, or even imply, this. I said that it makes no sense to put significant management and administrative resources, within a failing company, into something which will not help the company succeed. I'd say the exact same thing if we were talking about a company full of Japanese people in Japan, or black people in Kenya, or whatever.

If they find it difficult to hire and retain women (which the article states), then they can put that to the side while they ensure that their company does not die.


How do you know they don't need the women to survive?


Why the hell is Reddit trying to make their own content when the entire point of the site is for the users to create the content?

Don't make a Reddit podcast, make Reddit a podcast hosting network.

Don't make a Reddit video show, make Reddit a video hosting site.

Don't make a Reddit magazine, make Reddit a source for anybody to publish their own magazines.

Why does Reddit have writers and editors and creative directors? It's like a rock band having a position for a flower arranger.

Is this the point where Reddit has officially jumped the shark? Where to next for my cat picture memes?

edit

Want to increase quality and revenue? Give a cut of advertising revenue to the mods of successful, high quality subs.

Incentivize for the behavior you want.

Provide the platform and get the users to provide, mod and benefit from the content.


I see why they want to create their own curated content. A friend of mine has created and run several user-generated content sites and he said: "user generated content is worthless; largely racism and porn." It's driven by the lowest common denominator.

As long as content is hosted on company-controlled servers where ad revenue is the model for survival, you're going to see slow declines and eventual busts like Digg. Users hate being advertised to and censored.

The future is probably something where people pay a small amount to access a forum that's somehow distributed on their machines (phones, PCs, etc) and not controlled by any organization. Data transfer (even mobile) and storage are getting cheaper. It'll be like bitcoin, but for discussions.


>As long as content is hosted on company-controlled servers where ad revenue is the model for survival, you're going to see slow declines and eventual busts like Digg.

That's really not what happened at Digg. The problem at Digg was a structure which rewarded "power users" and allowed third party commercial interests to gain control of the site. It's bad enough that Digg had advertising - eventually we were looking at Digg ads to get to content that was itself advertising.

And not all user generated content is worthless. Youtube is a good example of this - I can point to user after user who has uploaded quality original content of one kind or another for years.

Yes, 90% of it is crap, but if you subscribe to the channels you like and check out the channels your subscriptions recommend, you can avoid nearly all of it. And, you know, Sturgeon's Law.


Yes, saying that user generated content is porn and racism really says more about the person saying it.

(A more real problem is that early adopters of any web community building effort are overwhelmingly just there to self-promote in one form or another.)


Youtube can be a significant revenue stream for the users though, so it's not a good example.


Digg failed because of attempts to monetize it better through ads. Users revolted and eventually it crumbled. Users weren't able to control their content.

Regarding your second point, if you include Youtube comments, and not just the videos, it's definitely 90% crap.


I disagree on Digg. I didn't mind the ads. Not enough to stop using the site, anyway. What I minded was the transition of the content to advertising. Plus, at the same time they did a site redesign that made it look like one of those auto-generated news sites.

Comments on youtube... yeah, they're crap for most videos. I guess technically comments are user generated content, but I never really think of them that way. Even more so on a site like youtube where the videos are the important bit.


>"The future is probably something where people pay a small amount to access a forum that's somehow distributed on their machines (phones, PCs, etc) and not controlled by any organization. Data transfer (even mobile) and storage are getting cheaper. It'll be like bitcoin, but for discussions."

No. The future is the next Digg, the next Reddit, etc. Just like a torrent site, when one goes down (for whatever reason), the next in line pops up to take its place (and users). The internet does not want to pay money to access a forum. The internet also does not want to be advertised to. So the cycle will continue.


Here's the problem (IMO): Reddit has too many features and too big a userbase to replicate as Reddit itself did a decade ago. voat.co has been in development for years and still lacks crucial systems for things like content moderation, monetization, scaling, etc.

Essentially for a coup of Reddit to occur you have to take a large chunk of Reddit at once. That is very hard to do.


Reddit doesn't have that many features; hell, it lacks tools for getting notifications on sibling or aunt/uncle comments, and you can't even edit threads more than 6 months old.

It doesn't have a standard spoiler syntax for comments, or a spoiler tag (people hack it in with NSFW tags styled to say "spoiler", but from the main page it will still look like porn, i.e. be marked NSFW).

It lacks a WYSIWYG editor by default. It would be nice to have an NSFW subreddit namespace, so that sex-related subreddits don't take up possibly safe/useful names and so that you know that /r/humanporn isn't porn. Also an NSFL tag. The ability to expand ALL comments. The ability to pick what percentage of your home page each of your subscribed subs gets.

Also a policy that allows removing bad mods from power; e.g. /r/Holocaust really should not be run by anti-Semitic Holocaust deniers.

Oh, and a way to sort posts by reverse age, and to view posts between time x and y, not just the last hour/day/month/year.
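The time-window part can at least be faked client-side against the public JSON listing. A rough sketch (the /new listing only exposes roughly the last thousand posts, so this won't reach far back into a sub's history):

    # Pull recent posts from a subreddit's public JSON listing and keep
    # only those created between two unix timestamps.
    import time
    import requests

    def posts_between(subreddit, start_ts, end_ts):
        url = "https://www.reddit.com/r/%s/new.json?limit=100" % subreddit
        resp = requests.get(url, headers={"User-Agent": "time-window-demo/0.1"})
        resp.raise_for_status()
        posts = [child["data"] for child in resp.json()["data"]["children"]]
        return [p for p in posts if start_ts <= p["created_utc"] <= end_ts]

    day_ago = time.time() - 86400
    for p in posts_between("programming", day_ago, time.time()):
        print(int(p["created_utc"]), p["title"])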

---

Of course reddit does have good mobile apps (at least on Android); a competitor would need a good mobile app for at least two systems, which can take a while.


Voat's issue is that it managed to get users from the most controversial communities... and that's about it. If you're not interested in Donald Trump, GamerGate or something similarly controversial, then Voat is simply a really empty forum that doesn't have enough members or posts to keep people returning.

It's having the issue a lot of 'free speech' centric services have; it appeals to a minority who feel censored, but not the mass market that simply want to discuss less controversial topics.

And until there's enough people able to talk about a variety of subjects, the site is basically pigeonholed as the 'place people banned from Reddit might go'.


voat.co was developed because someone wanted to develop something with ASP.net.

Reddit, the software platform, is open source: https://github.com/reddit


Voat is run on ASP.NET but it's open source as well:

https://github.com/voat


I'm not exactly proud of it, but I've paid more for hosting my own stuff (or really, having a server that I do almost nothing with) than I have paid in patreon-like subscriptions and flattr. So I don't think the "on your own machines" model is entirely impossible.


With the advent of the cloud and the ubiquity of mobile devices, the next Digg won't be controlled by anyone other than the users. They will supply the storage and bandwidth. The community will truly be in charge, and also be responsible for policing it. When the cost (time) of policing it becomes too much (even with machine-learning bots to remove ads, child porn, hate speech, harassment, etc), then a new version of that model will appear.


While we're at it, we can also throw blockchain at the problem. It will be hosted on Ethereum as the DAO (Distributed Authoring Organisation). Policy changes require 51% of the computing power. If someone doesn't like some content, they can fork off and make their own child DAO, living in a universe where that content no longer exists.


> A friend of mine has created and run several user-generated content sites and he said: "user generated content is worthless; largely racism and porn." It's driven by the lowest common denominator.

What about HN? I don't disagree that a lot of ugc on the web is crap. I do think though that there are viable strategies for encouraging quality content.


> user generated content is worthless; largely racism and porn." It's driven by the lowest common denominator

stackoverflow would be one counter example.

So would a lot of subreddits (IS, CS, and science-oriented ones, for example).

Oh and what about wikipedia?


How much does it really cost to run a forum system? Even a big one? Look at 4chan.


Well, I imagine Reddit costs far more than 4chan to run, not just because they store the content in perpetuity, but also because Reddit has actual employees, not just volunteers working out of the g... evil of their hearts.

But, more importantly, how much Reddit costs to run, at their current scale, is immaterial. Reddit has raised $50 million in VC money and is valued at $500 million; that means half a billion is the mark they are expected to meet or exceed in lifetime revenue, independent of what their costs are. For better or worse, for-profit startups have very different incentives than non-corporate internet communities. This allows Reddit to not be 4chan (wider audience, some level of control on trolling and harassment, etc.), but at the same time, it can't be 4chan (including running on a ramen-for-one-and-electricity-bill budget if need be).


Sites like Reddit should not be businesses in the first place. There's a conflict of interest right there, between the very idea of a community site and the needs of a business. They've been handling it somewhat gracefully to date, but business pressures increase...

Normally I'd suggest they start another business doing some honest work (i.e. not ad revenue). Say, a hosting service, since they have experience with that already. Reddit then could become a marketing expense for them - something they'd run for the sole ability to say "Yes, we're the ones who're running the Front Page of the Internet! As you can see, we have some experience!".

But, as you said, they took VC money. It changes the picture significantly. I'm not sure they can be helped.

In every get-rich-quick scheme there's a catch. In startups, the catch is that if you care about your product then taking VC money is a difficult decision because it means surrendering your business to the whims of people who absolutely don't care about your product.


That means Reddit is trying to run a high-margin business in a low-margin space. As ad blockers get better, that may stop working. At one time, it was expensive to run a service with a large user base. Technology has progressed, and it no longer is.

Craigslist wiped the entire classified ad industry and much of the newspaper industry off the face of the earth merely by being the low-cost provider. Ad-supported businesses should be afraid. Very afraid.

As for Reddit, the VCs may have to take a haircut.


Moot came from a wealthy family and was able to subsidize 4chan. He deserves a lot of respect for that, but it's not a sustainable/replicable model.


Depends on a lot of factors.

Is the content mostly text or does it have a lot of images/videos/sound files? If the former, it's a lot cheaper to run than the latter (hence why a lot of big community sites are so 'vanilla').

What's the audience/topic? Some fields are easier to monetise than others. People in tech are more likely to use Adblockers, but they're also more likely to try out new methods of monetisation (like say, a Patreon type system or something involving Bitcoin).

How much do you know about running a server? Much cheaper to run a big site on an unmanaged service than a managed one.

How many people post vs how many people read. More posting in less time = more server resources and more service costs.

What script do you use? Something lightweight and lacking in a lot of features is going to hold up for longer on a shared server than something like XenForo or IPB or VBulletin.

So it can be really cheap if you're after a simple forum with an audience that doesn't block ads and might be willing to spend a lot on subscriptions or donations. It can be really expensive if you're offering a live video chat microblogging/livestreaming service.

I've seen large forums with hundreds of thousands of posts on $5 a month hosting, and sites with about 50 users needing a dedicated server.


As an average user, why would I care whether that forum is online or distributed to my machine?


You usually don't. Until someone comes along and says "you can't say that" and removes your content, because you don't control it.


So, Usenet?


Yes, but for tech-illiterate millennials. Usenet has to be wrapped in various interfaces to make it usable/searchable, and usually shitty VPNs are hawked as add-on to help you avoid your ISP sending you nastygrams for downloading all of Sex and the City or whatever Marvel superhero movie is the current rage.


There's this place between "create their own curated content" and free-for-all "user generated content" that is "moderated & curated user generated content".


Isn't that imgur?


>Incentivize for the behavior you want.

This isn't as straightforward as it seems. It's kind of like that old quote, "When a measure becomes a target, it ceases to be a good measure." If the behavior you want is there because people are just casually messing around for fake internet points, it will change when you compensate them. It's probably possible, but far from simple.


Well you manage the number. If your target is a high number of internet points your post will aim to maximize that.


[DELETED]


You could say it more formally as, "Many variables correlated to outcomes you want are not causes of those outcomes. When you intervene and change those variables, they cease to be correlated to the outcomes," but it's a bit verbose[0]. I've always read it to be not about measuring things, but about optimizing the wrong thing, because people will game the system. The concept is valid. What's your concern?

[0] https://en.wikipedia.org/wiki/Goodhart%27s_law#Expressions Turns out the original formulation was "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."


Right on the nail. To me personally, the moves Reddit has been doing the past couple years read as if they don't know exactly what Reddit should be or what they want it to be. I imagine it's not easy, Reddit is massive and somehow not quite killing it, but somehow it feels like there's a strange lack of focus.


The mobile version of the site doesn't work and they only just released a mobile app. That says it all about their lack of focus.


Android has at least a few very good reddit apps (including the open source Slide), they don't really need an official client.


The old i.Reddit.com works OK.

The new m.reddit.com fails after a few minutes, on each page.


> Give a cut of advertising revenue to the mods of successful, high quality subs.

No, oh no.

Why mods?

Cuts should go to users posting the content.

Maybe make it so that OC is rewarded [more], karma "farming" is discouraged, echo chambers don't become the only places with activity, and make sure that hiveminds do not drown out all other discussion. Otherwise Reddit just becomes the online cousin of mainstream media, and a more compelling rival will pop up sooner than later.

They could even reward gilded comments that start good discussions, and so on.


Do you want clickbait? Because that's how you get clickbait.

There's a big difference between how you think when you ask "Do I want this?" versus when you ask "Will people want this?". You shouldn't confuse these.

A lot of scams are based on confusing people about what they want vs. what they think others will want. First and foremost MLM-schemes. You convince yourself the product is good, because you hope to make money from it. Then you convince yourself you will make money from it because the product is so good, in a vicious circle.

> echo chambers don't become the only places with activity, and make sure that hiveminds do not drown out all other discussion

And do you have a proposal for how to achieve this, beyond the old "putting the right people in charge"?


> do you have a proposal for how to achieve this, beyond the old "putting the right people in charge"?

I was actually brainstorming over this – an alternative, or rather, something that builds upon the fundamentals of the Reddit/HN format – during a long drive today, and I think I did hit upon something that could be effective, and fun.

Now to polish up my idea and figure the best way to put it out there. :)


Mods are one of the reasons why the site is dying. Huge threads and valuable posts are constantly being deleted because a mod sees a few worthless comments and nukes everything instead of letting the community use the built-in vote mechanism to push these posts to the bottom of the site.


The current trend seems to be a mod locking a controversial thread and commenting 'some of the comments are unacceptable and we don't have time to babysit this'.

Which annoys everyone who didn't leave an unsavory comment and also begs the question - if you aren't into babysitting threads then what did you expect the role of a mod to actually be, and if there aren't enough mods why not get more?


It's because everyone wants into the content creation game.

It's odd because Youtube, Netflix, Amazon, and Hulu did it because it's about the same cost as paying the content creators. And then they own it.

Reddit is already getting their content for free. They should really just focus on improving community features.


Or just provide better targeted advertising... On Facebook you can advertise to people based on demographics, interests, behavior, etc.

On Reddit you can't even place an ad on a specific subreddit if it's not one of the big ones.


> On Reddit you can't even place an ad on a specific subreddit if it's not one of the big ones.

Would someone please confirm this? If true, wow.


Hi, I have ads running on three subreddits right now; the smallest one has only 1,460 readers. Considering the CTR (click-through rate), I guess putting ads on subreddits smaller than that might not be worth it: even though that ad has been live for over a week, it's still at less than 400 impressions/views.

As a reference, I have an ad on /r/programming as well, with 678,120 readers; that one has 113,613 impressions/views during the same time period.
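Normalising those figures per subscriber (taking the ~400 impressions at face value), the small sub doesn't actually look worse per reader:

    # Impressions per subscriber over the same week, from the figures above.
    small = 400 / 1460           # ~0.27 impressions per reader
    large = 113613 / 678120      # ~0.17 impressions per reader
    print(round(small, 2), round(large, 2))  # 0.27 0.17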


This is incorrect. I have placed ads on /r/postmates, which is a small subreddit. You can't target some of the really tiny ones because they lack volume. I think it also "remembers" which subreddits you've visited, and uses that to target as well.

That said their ad targeting system is quite bad in other ways.


I just tried to place an ad for /r/pics and /r/gaming and had no problem getting to "pending ad approval"

EDIT: I'm an idiot.


> if it's not one of the big ones

I'm pretty sure r/pics and r/gaming are two of the biggest subreddits out there.


Those are huge subreddits, both around 12MM subscribers.


those are big ones


I think the problem is that most of Reddit's current 'content' isn't monetizable to the degree that they need it to be. In its current form, Reddit is mostly an aggregator of other sites' content, which makes that content much harder for Reddit to make money off of. Self-serve CPM and CPC ads will only get you so far in terms of revenue nowadays.

I run a company with an advertising based revenue model with seven figures of yearly revenue, and CPM isn't what the largest advertisers are looking for. Everyone wants custom created content now, and they pay high premiums for this type of content.

I love Reddit as a news aggregator, but I can see why they need an editorial and content team for revenue generation. I'm certain that most large user generated content sites like Snapchat and Facebook have writers and editors to help paying advertisers create content.


> Want to increase quality and revenue? Give a cut of advertising revenue to the mods of successful, high quality subs.

I've seen a lot of sites try this, and I'm trying it myself on my own site.

However, it can very easily go wrong. If you restrict it to mods (as suggested here), you incentivise them to only allow topics that advertisers might like (at the expense of the community), or only topics which can get clicks from average Joes who pass by every now and then.

If you provide revenue sharing in general, you often then incentivise spam by giving people a reason to post as quickly as possible to get their ads up there. It's why Digitalpoint scrapped the idea; quality was in the gutter because of it.


It might work with Reddit. The cost of creating a new subreddit is zero, so people can move from poorly handled communities.

Reddit is struggling for money, but they have an amazing platform to test out ideas on. Out of their millions of users one person should be able to come up with a way to monetize the platform. Google did this with their ads: the most popular ads get rewarded.

If I was a moderator and I was given a list of options to generate revenue for Reddit (and myself or my community) I would have to consider it. Maybe I have a community that reviews widgets and I can link to stores that have the best deals for highly reviewed widgets. It is transparent to the community, gives everyone a good deal, and pays for Reddit.


Why the hell is [Netflix] making their own content when the entire point of the site is for [existing studios] to create the content?

On the other hand, you're not wrong. Just because it works for Netflix doesn't mean it CAN work for Reddit, or that Reddit can execute on the idea.

...but Netflix is an amazing example of the value of a content aggregator/distributor turning their hand to content creation.

fake edit

Flower arrangement does sound pretty useless, but your comment reminds me of the "no brown m&m's story": http://www.snopes.com/music/artists/vanhalen.asp

Maybe they're a hippy rock band and instead of a pyrotechnician to light the shit out of the drummer, they need a flower arranger to nature the shit out of it, instead?


Because the second a better alternative appears, I'm going to switch to the alternative without hesitation. Having original content provides something that an alternative can never provide, especially if that original content is good.

One thing that bothers me about reddit already is the amount of blatant advertising in many subs where people review things. A few brands catch on early and everyone becomes a promoter, sometimes because the brand developed recognition, sometimes because of subversive marketing.


Reddit's community has largely alienated content providers with their strictly enforced "spam" rules.


Amusingly, reddit's own policies are very anti-creator in the way you describe. Funny comment? Yes. Self-publication/promotion? No. You can be banned for submitting your own work. There are slews of marketers who have learned this the hard way.


I suspect that being owned by a traditional publishing operation is making them behave like one.


Because reddit feels the need to 'curate' everything they host. Can't have that free speech, yo. Someone's feelings might get hurt. Too dangerous.


Since Alexis and Huffman returned I've seen more happen with the brand than in the several years preceding. I don't know who made their mobile app but it is damn good. I resisted at first but am now using it as my primary means for consuming content on Reddit.


Did you use an app before or did you use the mobile site? I've used mobile apps to browse reddit for years and it's always been a good experience. Sadly they bought the best iOS client (Alienblue) and then killed it off -- replacing it with the current app which is actually much worse.


Once Reddit does a Twitter and limits its API in a way that kills off alternative apps such as Relay, I'm done with it. Monetization plans can only go so far before people start leaving.


The only app that I've used for Reddit is Relay, and it's one of the few apps that I've liked enough to pay for, and I've been very happy with it. The gestures work so well to navigate, great for one-handed operation.

Can anyone compare the Reddit app to Relay?


The team has really done a remarkable job -- shipping more in the last quarter alone than over years before.


Totally agree as a long time user. Keep it up.


They bought the mobile app family--which was in pretty good shape at the time, no less!--from a third party dev; and then basically abandoned it for several years. They just started actually publishing versions and patches a few months ago.


They bought it.


This version was actually built from scratch.


I am surprised that the core Reddit functionality is not run mostly on autopilot.

I only subscribe to a few subreddits (lisp, Ruby, Haskell, AGI, and a few others) and the user supplied content is plenty good enough for me to visit the site once a day.


"Ooh, lots of people like this thing. It's popular."

"Great! Let's take advantage of that popularity to make a ton of cash!"

"Hm. We'll have to dramatically change pretty much everything about how it operates."

"What could go wrong?"


A big chunk of the article goes into explaining how the diversity policies have failed, but I don't see any explanation of how they would have helped the site or the community.

What if the company is failing because, instead of focusing on hiring competent people (of which they have a severe lack, at least on the engineering side), they focused on having a diverse team?


I would like to see some reporting on the 10% of reddit's recent $50 million raise that will be distributed to the users.

Recent changes (e.g. stealthily adding link tracking) and comments (e.g. Huffman's "we know everything about you") have been user-hostile, and making the distribution would garner some goodwill.


That 10% pledge died when "Reddit Coin" died. (And it likely violated SEC rules anyways)


Do you know where I can find the announcement of the death of the pledge and analysis of how it likely violated SEC rules?

I believe Y Combinator was the force behind the pledge; why would they push for and publicly announce something that they couldn't do?


> where I can find the announcement of the death of the pledge

The guy who was hired to build it left many months ago. That was effectively the nail in the coffin; they wouldn't announce the death of something that never got off the ground, they'd just silently brush the remains under the rug.

Edit: Found the article I was thinking of

https://ryanxcharlestimes.com/fix-reddit-with-bitcoin-7da3f8...

> Yishan would resign a month or so later due to exhaustion, and new leadership took over the company who set out a new direction that didn’t involve any of the work I was doing. The cryptoequity project was shelved, and decentralizing reddit indefinitely delayed. I was laid off after being at the company for only four months.


There was no announcement of its death, but the project belonged to Wong and the blockchain engineer hired under him, so the issue is moot.

You are correct that Y Combinator was behind the pledge (http://www.recode.net/2014/9/30/11631424/reddit-raises-50m-p...), but it was only a plan. Plans fail.



Oops, that should have been an is.


If this is related to the problems at Reddit it is only so tangentially, but my feelings on that site have been very mixed since I deleted my account there. I think that the subreddit structure and making it "a community of communities" showcases both the best and the worst of audience bubbles. At their best, like-minded people share interesting things with each other, build communities, and even form friendships. At their worst, they become echo chambers that are almost as liable to turn on themselves as they are on outsiders.

I've heard people make the case that audience bubbles are bad for society at large, because they narrow down what kinds of conversations people have. But ever since leaving Reddit, I've noticed my own outlook on life improving. I think that audience bubbles cause an individual harm, similar in kind to that reported by people who de-convert from extremist religious or political ideologies.

I wonder how much better off people would be if social networks implemented some kind of "group hug" algorithm that made posts less likely to spread if they were too in-groupy, and made people more likely to receive posts from wider and wider Venn diagrams of adjacent audience bubbles the more insular their own posts seemed to be. You wouldn't even have to force people to confront antagonistic views, just make them more likely to see more moderate ones.
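A very rough sketch of the kind of thing I mean (the insularity score and the penalty curve are entirely made up):

    # Toy "group hug" adjustment: the more insular a post's audience, the less
    # weight it gets in feeds outside its home bubble. insularity is assumed to
    # be in [0, 1]: 0.0 = engagement spread across many communities,
    # 1.0 = engagement confined to one tight-knit bubble.
    def adjusted_score(raw_score: float, insularity: float,
                       viewer_in_bubble: bool) -> float:
        if viewer_in_bubble:
            return raw_score                 # inside the bubble, nothing changes
        penalty = 1.0 - 0.8 * insularity     # made-up linear damping
        return raw_score * penalty

    print(adjusted_score(100, 0.2, viewer_in_bubble=False))  # roughly 84
    print(adjusted_score(100, 0.9, viewer_in_bubble=False))  # roughly 28

Mildly in-groupy posts still travel; maximally insular ones mostly stay home, which is gentler than hiding anything outright.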


I find it strange that people refer to Reddit as a social network because I've never looked at it that way. To me it's always been a link aggregator with a social commentary aspect. I've never felt the desire to follow or even remember people there. Unlike other social networks, Reddit has never struck me as a platform for self promotion or attention seeking. Sure there are unsavory subreddits I have no desire to explore, but I generally follow those that center around hobbies or topics that interest me. I removed many of the topic subreddits like news and politics from my feed early on because they tended to be mostly useless posting.

I think in general the way everyone uses social networks is confusing. The focus always seems to be on who is talking and not on what's being said.


As soon as you go from clicking the links to regularly submitting or moderating them, the social aspects of the platform assert themselves. And there are a fair number of commenters who engage in that dimension of it willingly.

To me, Facebook isn't a social network as much as it is a chat application. Diff'rent strokes for diff'rent use cases.


No mention of the battle with The_Donald

https://www.reddit.com/r/The_Donald/comments/4oenqn/uspez_ad...

Reddit shit its pants when the Trump Train came to town and disturbed the echoes in the chamber.

I have a "Freedom From The Press" Reddit t-shirt. I am embarrased to wear it.


My use of reddit is way down. I'm tired of going to the front page and seeing so many submissions about things I don't care about, like multiple video games, dumb inside jokes like r/circlejerk, or all those repetitive links about Trump or Sanders.

I know I can buy gold and customize the front page, but I am a little hesitant to pay money to make the front page not suck. I have ads turned on in my adblocker so they do get ad revenue from me; I'm not totally freeloading. Also, there is a limit to the number of subreddits you can exclude. That is the nail in the coffin right there for me buying gold.

Finally, I'm not impressed with some of the censorship and social policies they have and don't really want to support a business who seems to have either questionable or widely varying policies on things.

The result, I check it maybe once a day, down from several times a day.


> My use of reddit is way down. I'm tired of going to the front page and seeing so many submissions about things I don't care about

Why on Earth would you go to the Reddit frontpage? I stopped doing that like four years ago. Why not just create a Reddit account and avoid seeing the default subs on your frontpage? Reddit Gold has just about nothing to do with that.


It's still called the frontpage even if you're logged in.

But yes, when you're logged in you only see the subreddits that you're subscribed to.


Right, I thought it was clear from context that I was referring to the defaults frontpage.


>Why not just create a Reddit account and avoid seeing the default subs on your frontpage?

I do have a reddit account; I have had several over the past 10 years. I look at my subscribed front page as well. However, plenty of times there is a subreddit I didn't know about suddenly trending. If I just look at the subs I'm subscribed to, I'm kind of limiting myself to my current interests and won't learn about new things in new subs.


The fact that the default view is terrible, and there's little to no guidance to tell you that you're not supposed to actually use it, is a huge part of Reddit's problems IMHO.


You don't have to buy gold to customize the front page. Any account can customize subscriptions.


yeah. makes no sense to use reddit and not customize which subreddits you want to see. i mean, the major point of reddit is being able to curate what content you want to consume.


>"Reddit’s Upvoted podcast, which Ohanian launched in January 2015, also appears to be abandoned. Aside from a single episode published in June, the podcast hasn’t been updated since October 2015. The “Formative” video series produced in partnership with Google, which aired new episodes roughly once a month since its launch, has been dark for four months."

Hopefully any employees hired specifically for these ideas were able to find other groups at the company or future employment.


It seems like the tribe that built the site and the tribe that is trying to run it now are not the same thing. The culture of reddit is not civilized, equal, or any of the other HR type directives they are going to try and make.

If the people in charge now had tried to start reddit back then, with all the focus on fairness, equality, correctness, and inclusiveness, reddit would never have existed.

It would be something else. I'm not sure it would be bigger, smaller, better, or worse, but I know it wouldn't be reddit.

You can't have a bunch of "bros" build a popular site and then pretend that they didn't. You can't bring in a bunch of nerds into a community and then kick them out and take it away from them once it's popular and successful.

Actually you can try and do those things, but it won't work because the tribe will reject you and go somewhere else. It happened to slashdot, it happened to digg, it will happen to reddit and Hacker News too.

If you don't understand the tribe, you can't hope to lead them. You don't lead the tribe by pretending it's not what it is.

Reddit's owners and operators seem to be ashamed by their tribe. That is going to be their downfall.


The current CEO of Reddit (Steve Huffman) is one of the co-founders.


There's a lot of sloppy censorship on Reddit.

Mods aren't paid, so they use a lot of very broad auto-moderator bots -- many of which are very poorly written. You can get a permanent ban from a subreddit for having a username the bot finds offensive, for example. You can have a post removed because you didn't phrase your question in the syntax the bot was expecting... and even if you fix it, appeal to the mods, and they reverse the decision, your post gets restored with its original timestamp and no votes, so nobody will ever see it. It's more that the moderation is an example of bad automation -- I think this is what reasonable users get up in arms about.
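For illustration, a hypothetical rule like the one below (the pattern and usernames are invented) shows how a lazily broad filter sweeps up innocent people along with the intended targets:

    import re

    # Hypothetical auto-moderator rule meant to catch offensive usernames,
    # written so broadly that it also matches perfectly innocent names.
    BANNED_NAME_PATTERN = re.compile(r"rape", re.IGNORECASE)

    for username in ["grapefruit_fan", "web_scraper_dev", "drapery_dave"]:
        if BANNED_NAME_PATTERN.search(username):
            print(username, "-> permanently banned by the bot")  # all three match

It's the classic Scunthorpe problem, and unpaid mods rarely have time to audit their bots for it.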


Let reddit die; let's all go back to USENET.


A reddit to usenet bridge has been on my TODO list for ages now--I'd be totally fine if we just went back...
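For anyone curious, a minimal one-way sketch isn't much code. This uses Reddit's public JSON listing and the standard-library nntplib; news.example.com and the target newsgroup are placeholders, and it ignores auth, deduplication, and comment threading:

    import io
    import json
    import urllib.request
    import nntplib

    # Pull the newest posts from a subreddit via the public JSON endpoint.
    req = urllib.request.Request(
        "https://www.reddit.com/r/programming/new.json?limit=5",
        headers={"User-Agent": "reddit-to-usenet-bridge/0.1"},  # reddit wants a UA
    )
    listing = json.load(urllib.request.urlopen(req))

    with nntplib.NNTP("news.example.com") as nntp:  # placeholder NNTP server
        for child in listing["data"]["children"]:
            post = child["data"]
            article = (
                "From: {} <nobody@example.invalid>\r\n"
                "Newsgroups: alt.bridge.reddit.programming\r\n"
                "Subject: {}\r\n"
                "\r\n"
                "{}\r\n"
                "https://www.reddit.com{}\r\n"
            ).format(post["author"], post["title"], post.get("url", ""), post["permalink"])
            nntp.post(io.BytesIO(article.encode("utf-8")))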


Let's do this.


Well, the failures I see in Reddit are almost the same failures I saw in other dotcom startups: not having a business plan that works.

When Reddit was formed, it supported free speech for everyone. They didn't have a plan to earn a profit; they just wanted a better discussion board than Slashdot, Kuro5hin, Digg, Stumbleupon and others. Digg eventually had to make changes to its site, moving toward paid accounts and paid promotion/advertising of links.

I first studied computer science and data processing, aka information systems as it was later called. Then I went back to college to earn a business management degree to learn how to make working business plans.

On South Park they had a skit about the Underpants Gnomes that parodied the startups out there:

Step 1 Steal Underpants!

Step 2 ?

Step 3 Profit!

This is basically a joke, but some companies have an incomplete plan like that.

Ellen Pao was a patsy for the board of directors to blame when the changes they wanted to implement proved unpopular with users but attracted better advertisers with liberal points of view to support Reddit.

I cite Kuro5hin because it was once a very good site, but it didn't have a good working plan or very good editors or management, and it eventually spiraled down into a forum controlled by the trolls who chased everyone else away. Then it was mismanaged, went down, moved to a new server, and was never recovered from backup. Kuro5hin never had a good working business plan; it was like an Underpants Gnome business plan. The users created the content, it got voted up or down to a section or the front page, and if a story didn't make it you could always post it as a diary in the 'Ghetto' section, as the users called it.

Reddit is suffering what Kuro5hin did: the trolls start to take over certain subreddits and drive people away. They post racist, offensive, and mean things and band together to vote it up to the front page. Subreddits like /r/Ferguson, which was about the Ferguson riots and Mike Brown, got taken over by trolls posting racist stuff, so Reddit quarantined that subreddit and gave warnings to people subscribed to it.

Ellen Pao was a scapegoat who carried out an agenda for the board of directors. She was given the job of CEO in the knowledge that she would fail, making the changes the board wanted that would make her unpopular with users but popular with advertisers.

At that point Reddit was no longer about free speech but about censorship. Reddit didn't trust the users to create content, so they hired editors to create their own content and blog, sort of like what Digg did. If they follow Digg, they will take paid promotion of links and try to shut down accounts they don't agree with, deleting and censoring them.

I have to say, looking at it from a business angle, they can't monetize content if they keep banning and censoring users and trying to take control of what appears on the front page. Either they are for free speech or they are not: either they want controlled speech that meets Liberal guidelines and a Social Justice Agenda, to attract more people like that and provide a safe place on the Internet, or they let the users decide and vote on it democratically even when they don't agree with the politics or the speech, and then the trolls get control of the front page like they did on Kuro5hin.

Actually I like bane's suggestions that Reddit build podcast hosting networks, video networks, e-publishing, and other things they can sell advertising on, or offer a paid membership that removes the advertising.

There has to be some sane way to earn an income: advertising, paid memberships that remove the advertising, or a $1 or $5 account-verification paywall to keep out the spammers and trolls who want to use free accounts and bots to control what is on the front page.

They need a Bayesian filter to detect the spam and junk, the same way email programs do it. I've seen a lot of spam and junk posts in /learnprogramming and other subreddits and I always flag them, but it takes a long time for someone to look into it.
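The core of such a filter is genuinely small. A toy naive-Bayes sketch (the training examples are invented; a real deployment needs far more data, better tokenization, and class priors):

    import math
    from collections import Counter

    # Toy naive-Bayes spam scorer, the same idea email filters use.
    spam = ["buy cheap followers now", "click here for free karma"]
    ham = ["how do I learn recursion", "best resources for rust beginners"]

    def train(docs):
        counts = Counter(w for doc in docs for w in doc.split())
        return counts, sum(counts.values())

    spam_counts, spam_total = train(spam)
    ham_counts, ham_total = train(ham)
    vocab = set(spam_counts) | set(ham_counts)

    def log_prob(doc, counts, total):
        # Laplace smoothing so unseen words don't zero out the probability.
        return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in doc.split())

    def looks_like_spam(doc):
        return log_prob(doc, spam_counts, spam_total) > log_prob(doc, ham_counts, ham_total)

    print(looks_like_spam("click here for cheap followers"))  # True
    print(looks_like_spam("how do I learn rust"))             # False

Hooked up to the report queue, something like this could at least triage flagged posts so humans look at the likely junk first.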

After the Ellen Pao scandal, many alternatives to Reddit were founded. Reddit has to treat its employees as human beings with equal rights, which is what it is supposed to believe in given its liberal values, but instead it fires employees and doesn't work with them to settle differences. Reddit seems hostile toward diverse hires, even using some, like Ellen Pao, as scapegoats and patsies.

They need to take responsibility for their mistakes, change their business plan so it works, and find a way to hire more diversely and treat employees and contractors better so they don't leave or get fired.


>either they want controlled speech that meets Liberal guidelines and a Social Justice Agenda, to attract more people like that and provide a safe place on the Internet

There are a lot of well-made points in your post, so I feel bad focusing only on this one thing, but I really don't believe in creating a "safe place". It's simply not possible to provide a safe place online while also providing a platform for any meaningful debate.

Even if Reddit, or others, had the resources to provide a "safe space", the result would be a place no one goes to. People are offended by everything, and eliminating content until everyone feels "safe" would leave you in a situation where most content is never seen.

Reddit is already a mostly left-wing echo chamber. Opinions that diverge from the norm are pushed to smaller subreddits. Making Reddit "safer" would mean closing down the subreddits that you disagree with, but you're shrinking the user base every time you do that.

I understand that it's a terrible thing to tell people, but if you want to be in a safe space, stay off the Internet.


> Reddit is already a mostly left-wing echo chamber

WTF? My subreddits certainly are, but there is also conservative xenophobia aplenty if that is your thing!


> WTF? My subreddits certainly are, but there is also conservative xenophobia aplenty if that is your thing!

Or subreddits like /r/worldnews, which are basically left-wing xenophobia echo chambers.

I'd actually argue that most larger subreddits are like that as well (strongly left wing, with a bent of xenophobia), just less explicitly.


> but I really don't believe in creating a "safe place". It's simply not possible to provide a safe place online, while providing a platform for any meaningful debate.

Rape threats and other threats of violence are not part of meaningful debate.

Unsolicited images of genitalia are not part of meaningful debate.

Anti semitism like this [1] isn't part of meaningful debate.

You can safely ban all of these without harm to the robust debate in the community.

[1] https://pbs.twimg.com/media/ChH5Va7WkAAktXV.jpg


It is fascinating how many downvotes you get for posting a mild condemnation of heinous and/or illegal behaviour and hate speech. I wonder what their internal justification is.


>You can safely ban all of these without harm to the robust debate in the community.

Of course you can; my point is that banning these things, all of which are illegal anyway, won't even get you halfway to creating a safe space.


Which is what Kuro5hin tried to do. It banned a lot of people, and then key users left the site. The diaries and story submissions slowed down as more users stopped using it, until nothing was left but a few trolls who controlled the story queue, and then FNH and other stories got published.


My uncle used to say to me nothing ruins a business faster than changing it. He gave an example of a diner that did well for decades then decided to expand and very soon after went out of business; bankrupt.

To me, reddit has done well because it has stayed the same for years; digg is the diner that expanded.


Reddit isn't Internet infrastructure. It is a startup. The plan from day one was to make a kinder, gentler imageboard, get acquired, and make the founders rich.


You can't make inflammatory assertions about people without evidence here.

Also, please don't create many obscure throwaway accounts on HN. This forum is a community. Anonymity is fine, but users should have some consistent identity that other users can relate to. Otherwise we may as well have no usernames and no community, and that would be an entirely different forum.

We detached this subthread from https://news.ycombinator.com/item?id=12140780 and marked it off-topic.


I don't agree that the post in question is "inflammatory" in any way. If HN is going to be so strictly/randomly moderated that you have to think twice before presenting your reasonable views, it can't be good for the website.


That's not a "reasonable view", it's a fantasy that purports to read other people's minds from 10 years ago in order to demean their work.

I'm pretty comfortable saying comments like that don't belong on HN and that we would all do better to think more and inhibit ourselves before posting them. If people want to slag others on the internet, there are plenty of places to do it; here we try for a higher standard.


>Reddit isn't Internet infrastructure. It is a startup. The plan from day one was to make a kinder, gentler imageboard, get acquired, and make the founders rich.

Can you point out the specific words that "demean" the work of others? All I see is the assertion that reddit is a startup (which is factual) and the founders wanted to get rich (which is true for every founder I know, and is not demeaning in any way).


"The plan from day one was to make a kinder, gentler imageboard [and] get acquired".

This superciliously presumes to know what the founders were thinking, makes it sound like they didn't actually care about Reddit since "from day one" they supposedly had something more trivial in mind, and implies that they were merely in it for a quick flip.

In reality they were almost certainly more genuine about it than that; they sold Reddit early, but they also both went back to it years later, which is extraordinary. So the above is uncharitable and trivializing, and seems intended to diminish a.k.a. demean them and their work. Such a low level of discourse is not welcome on HN, whether it's the Reddit founders being demeaned or anybody else.

It's also easily disproven by what's publicly known about the origins of Reddit (pg suggested they make a social news site, not an image board), so the comment is guilty of intellectual laziness too. It's not surprising that these poor qualities show up together.


>makes it sound like they didn't actually care about Reddit

So the inference made by the commenter conflicts with the inference made by you. So what? Maybe they (reddit founders) really didn't care, maybe they did. Cases can be made for both, with varying degrees of conviction.

The whole thing is highly subjective. I don't agree with the conclusion either, but it's not a mindless personal attack. To me it's an opinion, and even the language used to express the opinion was structured in a civil manner.

I am defending the comment because I can see myself making a similar comment in another context without any intent in my mind to offend anybody or poison the discussion.

Honestly, I feel censoring these sort of comments is extremely childish. I am not the moderator, you are. So the final decision rests with you. I hope I have made my case and you will consider my comments in future.


Intent doesn't matter. It does poison the discussion, and that is not what I come here for. I come here for discussion with a somewhat more professional tone than other fora (like reddit!), and am totally okay with content (including my own!) being removed when it conflicts with that, regardless of intent.

(And honestly, "The plan from day one was to make a kinder, gentler imageboard, get acquired, and make the founders rich." does not play with my notions of civility or informed opinion.)


> I am defending the comment because I can see myself making a similar comment in another context

That's legit, but I don't think you're likely to run into a problem. I took a quick look at your comment history and it doesn't seem related to this at all. Perhaps you're just naturally charitable and it's hard for you to see it when someone else isn't.

(Completely offtopically, I noticed an interesting comment of yours from over a week ago and replied here: https://news.ycombinator.com/item?id=12146979)


HN is something of a safe space for the Silicon Valley startup industry in the "forbids discussion" sense - a place protected from ideas that might make them uncomfortable, that question whether they're actually improving the world or actually understand the people they claim to be helping.


That's massively untrue and a good example of the kind of thing people imagine and then project onto HN.

In fact a plurality of this community is critical and skeptical of Silicon Valley (which is fine when the criticism is substantive and not when it isn't, as with anything else). And certainly the vast majority of HN users resides far away from SV.


XD Oh no, I hope Yishan and Alexis didn't get too upset after someone criticized their billion dollar company.


I don't care about that. I care about HN not sucking, and that requires commenters to be more charitable.


I'll assume you're not included in that opinion. Because attacking a fair comment was uncalled for; but allowed because you're an admin.

The reason why we use throwaways is precisely because of sporadic and heavy-handed "moderation" by you and your employees.

Not only that, but you engage in hellbanning. It's an absolutely disgusting practice, which only encourages us more honest users to hide behind throwaways... Or toe the line with silicon valley groupthink.

(note: you can delete this username whenever. I plan not to use this again. I don't want my regular accounts hellbanned.)


Of course I'm included. I'm far from perfect, but do work on it every day, which is all we ask of others.

If you want your claims about HN moderation abuse taken seriously, you'll need to supply links to specific cases of alleged abuse. Outraged comments such as this one almost never do so, and on the rare occasions when they have, I have not observed the community agreeing.

Data suggests that these comments are typically posted by a tiny number of users who create multiple accounts to do it. If there were reason to believe that the broader community felt the same way, we'd be a lot more proactive about changing how we moderate HN. But in fact the evidence points to the reverse.

What you said about banning is misleading. In the majority of cases we post publicly about it: https://hn.algolia.com/?sort=byDate&prefix&page=0&dateRange=....


Hellban? The only person I have ever seen use that term regularly is the same one who "coined" the term "VC-istan."


Uh, I think that was the <5 year plan and the exit was selling to Condé Nast ...

Like Google or FB, Reddit could just gradually become unbelievably massive. It already is approaching that level of name recognition. Monetization should be subtle, with an appropriate long-term view.


Wow. What a hit piece. I bet their anonymous source happens to be the ex VP of Product who was fired by Pao and is now building a 'safer' Reddit. Conflict much? In their source's quote, it's evident the source knows a lot about the inner workings of the leadership, is a man, and doesn't work there anymore. Well, that leaves only the ex VP, seeing as Pao was a woman.

So they took all of Dan's words as truth? The guy with the competing Reddit? Come on Techcrunch. That's pretty bad.

Now, for the facts:

1. None of these people left voluntarily. They were let go. So they need to stop with the 'people are leaving in droves' nonsense. Reddit has nearly multiplied its employee count in the last year, and has moved to a new, much larger office building.

2. All of these people, except for Nicole, were part of an experimental product that was cancelled. If this were Google or Facebook, you wouldn't even know. It happens every day. But because you got a juicy tip from an ex-employee building a shitty competitor, you run with it. Because, hell, it's Reddit, and you just might hit the frontpage!

3. Shit like this, "The plans to overhaul Reddit’s reputation as a hotbed for harassment and to remake the company as a multi-media publisher have yet to prove successful — at it seems that the departures of senior employees are impacting Reddit’s product and performance." -- WTF does that mean? They tried Upvoted, it didn't work, and they're folding it back into Reddit.com and letting go of people that they don't need for the next iteration. That's business. It has nothing to do with color of skin or your genitals.

4. Numbers always dip in the summers. Especially for Reddit. The kids are off for the summer. It would make sense that they peaked right before summer. According to Alexa, Reddit is the 9th largest site in the US, and up 9 spots this month on the Global list to #27. So it seems they are definitely growing.


Going from "I bet" (i.e., you imagine) something about somebody to making factual-sounding claims ("Dan's words") using that person's name is a big breach of civility and you can't do it here.

You went even further than that and hounded this guy elsewhere in the thread. That's unacceptable on HN, so we've banned this account.


It's a shame u238ed chose to make unsubstantiated allegations against Dan, but he/she does have some valid points. A lot of the article itself is unsubstantiated and doesn't really give much detail about the actual problems at reddit. It seems to be talking mostly about problems with trolls on subreddits, but that doesn't really have anything to do with a toxic workplace. The only somewhat concrete allegation is the sexual harassment at the work parties, but I'm not quite sure how that translates into people of colour leaving.

It sounds like u238ed worked or works at reddit, so presumably he/she has some insight into the culture there.


I was not a source.


How do you know those people did not leave voluntarily? Do you work there?


You should also disclose that you're the source for the TC article. It's pretty obvious.


We've banned this account (https://news.ycombinator.com/item?id=12141496). You can't concoct unfounded charges about what you imagine somebody has done, let alone pursue them even after they deny it. That crosses into harassment, and nobody gets to treat anybody else like that here.

We detached this subthread from https://news.ycombinator.com/item?id=12140907 and marked it off-topic.


I'm not, and I don't appreciate the accusation.


tirefire keeps on burning


Reddit is a content management system for spam and stupidity.


Just like Facebook, Twitter and Tumblr. When sites with user-generated content grow too big, they seem to degrade into "monkeys with typewriters". The good content is mostly still there; it just drowns in the bad stuff.


That's a lazy meme. If you spend significant time with a site you can see through the superficial qualities and learn what makes it unique.


But re-blogging content from AMAs seems at odds with Upvoted’s mission to produce original journalism, and many of the writers hired in October 2015 were let go just three months later

What a joke, a bunch of kids playing startup one more time in their favourite playground. Utter disgrace.


Reddit is, was, and will for the foreseeable future be, a giant pile of porn. Denial of that fact shows a fundamental lack of knowledge of not only Reddit but the internet as a whole.

I can understand that this idea would be extremely unpleasant to a lot of people. And a slow migration to curated content may seem more wholesome. But it doesn't work that way.


On what metrics are you making that assumption? Most people who use Reddit are going to be hitting the "default" communities, which are about as bland and inoffensive (read: has lowest common denominator appeal) as internet content can possibly get.


I believe he's referring to the definition of porn that most people don't often think of.

"television programs, magazine, books, etc. that are regarded as emphasizing the sensuous or sensational aspects of a nonsexual subject and stimulating a compulsive interest in their audience."

There are plenty of subreddits like r/earthporn that fit this definition. In fact many of the default communities do.


Still a garbage dump full of racists, misogynists, and low-class trolls, too. Reddit should be nuked from orbit.


The main problem with Reddit is the users' dishonesty with themselves.

Ask a reddit user why they like reddit and they'll probably mumble something about AMAs.

Then visit /r/all and a completely different picture will be painted for you, as the site's most popular content is a mixture of pornography, racist jokes passed off as being subversive, and political ranting.


http://archive.is/uOTnv (grabbed the latest snapshot from archive.is, so like, 7 hours ago)

There is... like, less than 10% that matches what you describe,

and an AMA is in 10th position with nearly 5,000 points.


It is always amazing how consistently people will lie about what reddit is "full of" when anyone can just look at the site and see they are lying. Reddit is overflowing with BLM and islamic terrorist apologists, and we're supposed to believe reddit is all nazi KKK grand wizards.


Black Lives Matter is not a terrorist organization, please take your racist opinion back to reddit


When did he say that it was a terrorist organization?


He implied it by placing it next to "islamic terrorist apologists" in his sentence and gave both equal emphasis. Context is key, language is powerful.


Pie is not a fruit, please take your sexist opinion back to facebook.


> Then visit /r/all and a completely different picture will be painted for you, as the site's most popular content is a mixture of pornography, racist jokes passed off as being subversive, and political ranting.

This is just false. I have no idea where you are getting this idea from or why you are stating such clearly inaccurate claims, but they are false. Take a look at the history of /r/all and find me a single day when what you are claiming is even remotely true: https://web.archive.org/web/*/reddit.com/r/all. Yes, there is plenty of the content you describe on Reddit if you go looking for it, but it is absolutely not the "site's most popular content."


Porn is very consistently within the top 25/50 (the first 1 or 2 pages) of /r/all at certain times of day. It doesn't dominate it, but it is there. It only shows up if you're logged in, though, since it's marked NSFW. It's pretty embarrassing and, frankly, gross. I used to use RES to filter out every porn sub I knew of, and that was more pleasant.

fwiw, there is blatant porn at #90 at the moment.


It's almost like the website is designed to let you filter out crap you don't want/care to look at. If we assume the site's self-professed label of "front-page of the internet" is correct, it shouldn't be surprising that the total of its content reflects the content of the internet as a whole.

Hate to break it to you, but porn is pretty popular on the internet at large, not just on reddit. Racist people exist, and will flock to any platform where they can freely express their shitty ideas.

Pardon the hyperbole, but your prejudice towards all users of reddit is like claiming that all people who use the internet are closet porn addicts and KKK members.

Generalizations don't help anyone, and they make you look just as closed-minded as the racists you dislike so much.



