Hacker News

For example, why do we moderate flamewars?

The desire to suppress "flamewars" is understandable, but once one has a tool called a "flamewar detector" it's easy to lose sight of the fact that not everything that triggers the detector is in fact a flamewar that should be suppressed. Occasionally (rarely? frequently?) what's being suppressed is high-quality, fast-paced discussion. Once a tool provides some degree of the benefit, I'd guess it becomes easy to confuse the incidental behavior with the actual goal.

I'm intrigued by this idea of "expected failure": cases where the outcome is actually undesirable, but because the software worked according to spec, the failure is counted as a success by those who designed it. How do you go about assessing whether the suppression is having the desired effect? Is it based on something more subtle than slowing down the rate of commenting when it's going "too fast"? Are there knobs that can be tweaked to keep it from triggering on desirable discussion?
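HN's actual detector isn't public, so as a purely hypothetical sketch of the kind of heuristic being discussed: flag a thread as "overheated" when comments arrive much faster than upvotes, a rough proxy for arguing over substance. Every name and threshold here is invented for illustration.

```python
import time
from collections import deque

class OverheatedThreadDetector:
    """Hypothetical sketch, not HN's implementation: flag a thread when
    comments arrive quickly relative to the story's votes. All thresholds
    are invented for illustration."""

    def __init__(self, window_seconds=600, max_comments=30, min_vote_ratio=0.5):
        self.window = window_seconds
        self.max_comments = max_comments      # comments tolerated per window
        self.min_vote_ratio = min_vote_ratio  # votes-per-comment floor
        self.comment_times = deque()

    def record_comment(self, now=None):
        now = time.time() if now is None else now
        self.comment_times.append(now)
        # Drop comments that have aged out of the sliding window.
        while self.comment_times and now - self.comment_times[0] > self.window:
            self.comment_times.popleft()

    def is_overheated(self, story_votes):
        n = len(self.comment_times)
        if n == 0:
            return False
        too_fast = n > self.max_comments
        low_engagement = (story_votes / n) < self.min_vote_ratio
        return too_fast and low_engagement
```

Note that this kind of heuristic is exactly what the parent comment worries about: a burst of thoughtful, fast-paced discussion trips the same signal as a flamewar, which is why a human review step (the "knobs") matters.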




> not everything that triggers the detector is in fact a flamewar that should be suppressed

Scott and I get emailed every time that software trips so we can quickly look at which threads are being penalized and reverse the penalty when it isn't helpful. The only time we don't do that is when we're sleeping.

We tend to call it the 'overheated discussion detector' these days, since it detects more than flamewars. However, that phrase is more awkward to say than 'flamewar detector'. If anyone can come up with a better name I'd love to hear it.

Turning that software off is not an option, because HN would be overwhelmingly more dominated by flamewars if we did so. It's not primarily the individual threads that I fear, it's the systemic effects of having them be more dominant. HN exists most of all for the quieter, deeper, more out-of-the-way finds that would be the first to get excluded under such a regime. That would really be an existential risk to HN.

Incidentally, that last point generalizes. When people complain that we don't do X, for some obvious X, it isn't because we don't value X (e.g. free speech or whatnot). It's because we're worried about systemic effects.


Do you have any quantitative evidence that shows that the moderation efforts meant to minimize systemic effects actually have the desired outcome?

What are the quality metrics?


Well, we have lots of data and we look at it all the time. Does that count as quantitative evidence? I wouldn't say so. I would call it just-so stories with data.

People who ask us about this would usually prefer a world in which all such calls were free of human interpretation. That's a fantasy, in my opinion. We don't try.


> all such calls would be free of human interpretation

This is a straw man argument. Simply revealing on what basis the decision was made and letting the community voice approval or disapproval would go a very long way.

Consider a title change decision. Most of the time, title changes are minor and reasonable, but once in a while they effectively hide what is interesting about the story from readers. Readers should simply be able to vote on whether the title change was helpful, so that mods can learn from those votes how to moderate the forum effectively.
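One hedged sketch of the voting idea (no such HN feature exists; the function names and threshold are hypothetical): tally reader votes on a title change and surface it for mod review only when we can be statistically confident that a majority found it unhelpful, using the Wilson score lower bound so a handful of early votes can't trigger it.

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval: a conservative estimate
    of the true 'unhelpful' fraction given a limited number of votes."""
    if total == 0:
        return 0.0
    p = positive / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

def should_flag_title_change(unhelpful_votes, total_votes, threshold=0.5):
    """Flag a title edit for mod review when we're confident a majority of
    voters found it unhelpful (threshold invented for illustration)."""
    return wilson_lower_bound(unhelpful_votes, total_votes) > threshold
```

The design point is the conservative bound: two "unhelpful" votes out of two shouldn't flag anything, but nine out of ten should.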

Similarly, it would make sense to reveal any strategies that are used to improve HN, such as deducting points from stories, and to make it obvious when this has been done to a story, so that the community can express approval or disapproval of the decision.

I think the main fallacy that the moderation approach embodies is the idea that by simply not discussing politics HN is avoiding becoming political. That is not the case at all. By suppressing or avoiding important political issues, HN is making a very vocal political statement that those issues should be ignored.

Instead, why not bless certain HN users to write comments on highly political articles, pinned to the top of the page, to give voice to reasoned and discussion-shaping views? This would break the cycle of highly politicized retorts and rejoinders, which is what HN should actually be trying to avoid. The key here is to grant more trust to the community, not less. There are a lot of people on HN whose views I deeply respect, and when there is an important issue going on I want to know what they think about it.

Human editorial is fine, but banning/throttling/suppressing users and views is a sinister way to express editorial views. You can frame it as being about manners, but it's rarely actually about manners.

Edit: My account seems to have been throttled for 30+ minutes. Thanks for the mature response.



