This is an interesting move, and a topic I've been discussing with friends. I've stopped using sites like Reddit, because the comment sections are just toxic. I've gone as far as installing Chrome extensions that make comment sections disappear from popular sites like YouTube. It's too easy to get drawn into the negativity, and I'm completely over it. "Social networking" has reached its low point as far as I'm concerned. Hacker News is about the only civil place where I'm capable of contributing to a discussion at this point.
Even HN has declined, but what bothers me most is that HN has a definite range of views that are acceptable. This limits useful discourse. e.g. someone should be able to criticize Elon Musk without being downvoted to hell.
While HN does stay civil, I think it is too limited in general. Don't get me wrong, the reason I use HN is because the discussion hasn't devolved into /. or reddit. However, HN would do well to encourage dissent.
I also get that HN, being mostly about tech startups, has different priorities than me personally.
I am forever in search of a place online to have good discussions and debate with a respectful community. The comment sections on news sites are rarely the place — even for very respectable publications. I'm fully in favor of NPR's move and other sites should follow.
> e.g. someone should be able to criticize Elon Musk without being downvoted to hell.
I wish downvoting weren't allowed without a reply to the parent post; that way, if I'm being downvoted nonstop, I'd know why. Sometimes I get downvoted like crazy and have no clue as to why. Even if I can guess, it doesn't add to the conversation to just shut me out because you disagree without telling me why. I wonder how HN would be different if, to downvote, you had to respond to a post.
Also, I think that replies accompanying down-votes should be displayed directly next to the comment as a sort of tag (i.e. you can’t click a down-arrow alone, you have to type out something like “not accurate” or “flame” or “I don’t like his hair” or whatever the reason is) and then the comment appears as: foobar_user 1 hour ago | -1:“not accurate” -1:“flame” -1:“I don’t like his hair”.
"Not accurate" may be useful feedback even if it's false. For example, perhaps several such responses in a row would prompt the author to link to supporting data, or reword their comment to be clearer. The real question though isn't whether it's always helpful, but whether on average it's more useful than just a bare downvote. While some of the responses will be "not accurate", others will be an extremely actionable "link broken" or "missing word".
I think the real issue with requiring comments on downvotes is that it would either disallow anonymous downvotes or encourage anonymous comments. Perhaps de-anonymizing downvotes without requiring a comment might be a better first step: if you aren't brave enough to have your name associated with the downvote, then maybe you shouldn't be downvoting. I'm sure this would cause its own problems, but I suspect it would be a net positive. At times I think it might even be good for all voting and flagging actions on HN to be public record.
If I add a comment I know will be downvoted, I simply move on with my day, not bothering to see the responses generated. This is because, even though I work hard to only put forth constructive criticism, and in a manner such that my logic / reasoning is plainly spelled out...
On HN, unpopular comments are often still voted down despite the facts and despite the presentation.
I usually reserve these instances for when I know more than a little on the topic, generally professionally speaking, and often first hand. But, if it is unpopular, it doesn't matter. It's predictable.
A hypothetical social-discussion model I've been meaning to build an MVP of:
1. there are replies and up-voting, but no down-voting;
2. you can classify a comment reply, at time of posting, as a "rebuttal";
3. up-voting a rebuttal comment will also be considered as down-voting its parent;
4. the aggregate score of a subthread is calculated as a Euclidean distance, with all the positively-scored comments as dimensions. Thus, if you've got one great comment made in response to a bad comment, the thread will stay un-collapsed; if you've got two equally-great comments, the thread will rise, etc. (Likewise, if you've got good discussion happening in response to a really bad post, the discussion's value will still propel the post to the hot page.)
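A minimal sketch of rule 4 in Python (the function name and details are my own assumptions; the scheme only specifies a Euclidean distance with positively-scored comments as dimensions):

```python
import math

def subthread_score(comment_scores):
    """Aggregate score of a subthread: the Euclidean norm of its
    positively-scored comments, each treated as one dimension.
    Negative scores contribute nothing, so one great reply keeps
    a thread visible even under a bad parent comment."""
    return math.sqrt(sum(s * s for s in comment_scores if s > 0))

# One great reply to a bad comment still keeps the thread alive:
print(subthread_score([-5, 40]))   # 40.0
# Two equally great comments outrank either one alone:
print(subthread_score([40, 40]))   # ~56.6
```

Note that under this rule a thread with a heavily downvoted root can still outrank its siblings, which is exactly the "good discussion on a bad post" behavior described above.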
> 3. up-voting a rebuttal comment will also be considered as down-voting its parent;
I often upvote both a comment and its rebuttal, because in many debates I have no dog in the race and I often find that people on both sides have valuable or otherwise interesting insights. Even if I don't agree with them completely, I sometimes find that I'm able to view an issue from another perspective that I hadn't considered before.
Not to mention, many debates really just don't have a clear answer, and to pit commenters against each other like that is to enable one of them to be declared "winner", which I think would be misleading in many cases.
Not every reply would be a rebuttal. Not even every reply challenging the premises of the parent comment would be a rebuttal. Maybe "rebuttal" is the wrong word.
In a discussion being had in good faith, a thread will frequently go into a pseudo-debate mode to "find the truth" of a statement someone offered. Kind of like we're doing here. We all add anecdotes and counterfactuals and so forth, and see where the inductive process takes us. None of these comments would/should be tagged as rebuttals.
But there's a very specific situation where, I think, this feature is an important addition: when the original comment (or the post-link heading the discussion!) just doesn't know what it's talking about, and the reply outlines why this is so.
The concept is less like "this post disagrees with its parent, and so up-voting (agreeing) with it should mean down-voting (disagreeing) with its parent", and more like "this post is a petition to flag/retract its parent, and up-voting it is signing that petition."
This is why I had point 4 in the above, which might otherwise seem an unrelated feature: negatively-scored posts are not "disagreed with", but rather "flagged dead." But negatively-scored posts must still stay visible, in order to give requisite context to their positively-scored rebuttal-comments. And, in fact, a thread containing a negatively-scored top-comment might even still be the top-sorted thread, if its replies are considered valuable enough.
(One might still want to add some visual effect to remind those reading the post that the community considers it "retracted." Perhaps adding a strikethrough that disappears on hover, or a background of faint red Xes. You don't want to make the post illegible—it's still necessary context for its subthread, unlike the current HN/Reddit system where the post "fades" to nothing, and then the whole subthread gets considered a lost cause and collapsed.)
Why remove downvoting directly? I'm not against the idea, I just want to understand the thinking.
I wonder if the problem is the binary nature of so many things. 5-star/1-star reviews are the most common; we have up and down votes, but no real context for them. I wonder if the problem that needs to be solved is one of context rather than direction.
This is me spitballing, so feel free to IGNORE everything below.
Anyway, the idea is that we all get X votes, and we can vote more on some things, e.g. if I REALLY want marijuana legal, I can cast 4 votes. The twist is that the cost of extra votes is the square of the number of votes: 2 votes cost 4, 3 cost 9 (we all know how the maths works).
The theory is that this lets people vote on what they care about, rather than simply voting equally on everything.
I was wondering if there isn't some way to create a system where you earn votes that you can spend in some similar way, e.g. you "earn" upvotes from others, and can spend them elsewhere. If everyone got a single vote, but you earned extra votes you could spend in interesting ways at increasing cost, e.g. you could heavily downvote an idea at an exponential rate, it might make people less a victim of common-denominator views, give fringe views more airtime, and make people consider a vote more deeply. Not just yes/no but how much do I hate/love this idea? Enough to blow 100 points on 10 down/up votes?
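The quadratic cost rule described above is trivial to sketch; `cast_votes` is a hypothetical helper, assuming each user has a credit budget earned from upvotes:

```python
def vote_cost(n_votes):
    """Quadratic voting: casting n votes on one item costs n**2 credits."""
    return n_votes ** 2

def cast_votes(budget, n_votes):
    """Spend credits from a budget to cast n votes on one item.
    Returns the remaining budget, or raises if the voter can't afford it."""
    cost = vote_cost(n_votes)
    if cost > budget:
        raise ValueError(f"{n_votes} votes cost {cost}, budget is {budget}")
    return budget - cost

print(vote_cost(2))          # 4
print(vote_cost(3))          # 9
print(cast_votes(100, 10))   # 0 -- blowing the whole budget on one issue
```

The quadratic cost is what forces the trade-off: a voter can express intensity, but only by giving up breadth.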
Another social-discussion voting system I wanted to try out was one where:
1. both of the 'unit-weight' up- and down-votes were only done through implicit/passive actions (basically a piece of Javascript running in a browser extension, that tracks whether you read the whole article/comment—i.e. scrolled through it at a reasonable reading speed—before scrolling away/closing/going back/beginning to reply)
2. there are above-'unit-weight' up- and down-vote mechanisms that are explicit, but require something humans consider very-slightly onerous: solving a CAPTCHA, paying a tenth of a cent of pre-paid credit, etc.
In my conception, the small votes would be named "OK" and "Meh", and the large votes "Love" and "Hate", with about a 10x difference between their power. The site, of course, would be called "Mehddit." ;)
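A toy sketch of that scoring, assuming the vote names above and the stated 10x weight ratio (everything here is illustrative, not a spec):

```python
# Vote weights for the hypothetical "Mehddit" scheme: implicit
# "OK"/"Meh" votes count 1, explicit "Love"/"Hate" votes count 10.
WEIGHTS = {"OK": 1, "Meh": -1, "Love": 10, "Hate": -10}

def score(votes):
    """Net score of an item given a list of vote names."""
    return sum(WEIGHTS[v] for v in votes)

print(score(["OK", "OK", "Meh", "Love"]))   # 11
print(score(["Love", "Hate", "Meh"]))       # -1
```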
That is the single most compelling reason why I find most people (on|off)line disturbing. They just want to drown out dissent without helping other people understand the opposing view through argument.
OK, that is probably because they often don't have real arguments and haven't rationalized their position at all.
Maybe the previous paragraph is just me being a misanthrope. Or maybe the internet is just too big, as bigger discourses tend to deteriorate.
Fixing "disagree without teaching" is tough when you have to do the teaching to thousands of people across as many or more places. I bet people just get burnt out after awhile.
Whenever I find that happening on a comment, I comment on it to call people out. That seems to slow the decline (and judging from my points, people seem to appreciate it.)
It's never clear what the downvote even means. At least with an upvote or "Like" it doesn't matter too much if you misinterpret the result, but the downvote on certain sites is too coarse-grained to be useful.
I totally agree, I wish there was a requirement to reply when downvoting.
But I don't think you need subcomments on comments; I think that's just overcomplicated. If the reply with a downvote is nonsensical, it can be downvoted as well. Or it can even be flagged.
But sometimes the point of downvoting is to bury a comment so that it doesn't dominate the discussion. If downvoting required a reply it would achieve the opposite effect.
Only if the replies were visible on a buried comment, or perhaps instead of hiding or automatically collapsing, they could be "moved" to the user's Threads page.
In the end that is really all that internet discussions boil down to, so why fight it? I could write a long rebuttal, backed up with anecdotes from my own worldview and cherry-picked stats and facts I googled seconds ago to confirm my prior worldview and its superiority to yours, but what is the point? No one has their minds changed by HN comment threads or witty political commentary on Facebook. This site caters to people who think that a few hours bouncing through Wikipedia and some Google searching for supporting articles makes them an expert on just about any subject. Expecting meaningful debate and reasoned discussion in such a forum is a fool's errand. On anything beyond narrow technical topics it is better to accept it for the shallow bar chatter that it is and not take it too seriously.
Re HN: also anything that can be coerced into a political discussion will devolve quickly. Conservative views typically wind up grayed, even when they are substantive arguments.
Right-libertarian views are generally most popular here - both "conservative" and "liberal" (American definitions of both) views are generally not popular. And god forbid you have views that would be considered slightly left of center in much of the EU.
I have to disagree. While I do agree that the "liberal-right" (referring to the "classical liberal" definition) certainly is tolerated rather more than American-style conservatives, Progressive/Left viewpoints are rather more in vogue here than anything that could be considered libertarian. That a vocal and supportive minority of libertarian types exists here doesn't change the predominant viewpoint of the community as a whole.
One need only look at the number of articles dealing with ideas like basic income; regulatory responses to climate, energy, or finance; public transport; or other areas where systematic, centrally planned responses are supported as the way forward, to see that progressivism is the larger ideal supported here.
It's hardly surprising though. This is a group of professionals that engages in building systems to tackle problems in new ways. We convince ourselves that we're the disruptors whose vision can change industries and our way of life. A natural extension of that viewpoint is to believe that the public sphere can be molded, too, if only the right programs are put into effect under the watchful eye of intelligent and visionary custodians... which has much in common with progressivism.
Of course, that's my personal viewpoint (and I am certainly not a "progressive").
I think this is pretty true for economic issues, but not for diversity/identity politics/social justice. On those issues, the HN-hivemind is pretty solidly leftist (though not to the extremes of some corners of the web).
Users who make self-flattering claims about why they were banned from HN—always for some allegedly tinpot reason—are unreliable narrators. The real reason is invariably more sordid, which is why they omit to supply links.
The typical such user has a long history of breaking the rules here, getting banned, and creating new accounts. They often do so shortly after proclaiming that they're leaving HN forever and will never set foot in this crappy, horribly-run echo chamber again. Most HN readers would be surprised to learn how small the number of users generating all this drama really is. The rest of us just want to use the site for its intended purpose, which isn't ideological flamewars or meta grandstanding.
There's nothing more common than posturing as a brave independent thinker standing up to the mob. Therefore that posture has the interesting property of being self-refuting.
The typical such user has a long history of breaking the rules here, getting banned, and creating new accounts.
I'm hoping that you base your moderating decisions on the actions of the user in question ("human behind the keyboard" user regardless of account name) and not the "typical such user"? Do you have evidence, which you are choosing not to release, that 'james-watson' is such a user?
Most HN readers would be surprised to learn how small the number of users generating all this drama really is.
Perhaps there would be some way to publicly enlighten us about the serial offenders? For example, maybe you could explain the full backstory of why this post was killed: https://news.ycombinator.com/item?id=12313089.
In and of itself, it doesn't appear to be too egregious, and thus it would appear to bolster the poster's argument. But presumably there is more to the story?
this crappy, horribly run echo chamber again
The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast. There's an excellent example even in this subthread right here: https://news.ycombinator.com/item?id=12308842 (I'm referring to the back-and-forth in the thread, rather than the individual comment I linked).
To some, HN is a hotbed of socialism, and to others the epitome of evil capitalism. My recent conclusion is that (counterintuitively) HN is frequently accused of being an "echo chamber" because it has greater diversity of opinion than most other spaces online. The truly anechoic chambers aren't called out as such because the filtering is so effective, whereas "leaky" spaces like HN are assigned the label.
[Edit: I just noticed that "anechoic" doesn't quite fit the narrative here, but don't know how to reword it. The point was supposed to be that full echo chambers and anechoic chambers may have more in common with each other than each does with the points in the middle.]
Perversely, this might mean that accusations of being an echo chamber is a good metric for diversity of opinion. If the norm is that one lives in a world where one normally hears no fundamental disagreement, it can be disconcerting to be in a place where there is no clear "right way of thinking". Only when people stop proclaiming it to be an echo chamber is the canary dead.
I think what you say about the 'echo chamber' bias is right, and I enjoyed reading it because this is something I've been pondering for a while* . It's an interesting case of a point so subtle that nearly everyone not only misses it but is sure that the opposite is true.
To answer your other concerns: Yes, we go out of our way to try to make moderation decisions individually. I don't think it would work to publish information about users' past accounts—that would be a surefire shitstorm. As you and others already figured out, 12313089 was flagged by users—that's what [flagged] always means, both on comments and stories. Vouches were turned off on direct replies, but I was inspired by the below to enable them.
Oh, wow. I find myself equally glad and sad that I missed this when it was originally happening. It's exactly the type of community meta-analysis and navel-gazing I seem to be drawn to, but it really isn't very good for me. :)
> The most interesting thing to me is that there are often multiple conflicting "echo chamber" claims in flight, with each side feeling like the lone outcast.
Confirmation bias (specifically confirmation bias about bias) + terse domain-specific terminology (of many different dialects) favored by members for its efficiency of expression and the subsequent loss of some implications for those less versed in that DSL + normal communication inefficiency in expressing thought = arguments where both sides are mostly in agreement + a tendency to attribute opposing positions more strongly or more often than they actually exist.
> Perversely, this might mean that accusations of being an echo chamber is a good metric for diversity of opinion.
I think the contrapositive might be easier to rely on. No accusations of being an echo chamber probably means there isn't enough diversity, while accusations indicate that there's at least enough diversity for people to form a perception that there is an echo chamber, whether there is one or not.
That's the usual answer, but I think there may be more happening here than that. One slight oddity is that there was no "vouch" button visible to me on the dead comment. This made me wonder if there was a separate mechanism in play here. Out of self-interest, it also made me wonder if maybe my "vouching" privileges have been removed.
Separately, although multiple flags can kill a comment, it's still subject to moderator review. Since Dan commented in this thread, this probably implies that he consciously decided to let the user flagging stand rather than reverse it. My phrasing may have been poor, but I wondered why this was.
I've argued elsewhere in this thread that it would be interesting for both flags and downvotes to be public. I don't expect Dan to release this information here, but I'd personally be very interested to know who those users were in this case, and on what basis they were flagging it.
Separately, your tone seems particularly condescending. Is this by design? Why?
> One slight oddity is that there was no "vouch" button visible to me on the dead comment.
The vouch button appears for me.
> and on what basis they were flagging it.
I didn't flag it, but it contains some deliberately provocative phrasing from someone who's previously had user flags (and a ban) for their posting style.
In particular:
>> the original post which dang replied to and subsequently killed.
These comments tend to attract downvotes and flags because they're untrue. For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.
Interesting. I might understand this now. To discourage retaliation (I think) both downvoting and flagging are not allowed for direct responses to one's own stories and comments. Since "vouch" was added late, it reuses the same logic, even though the "retaliatory vouching" is not really a danger.
I didn't flag it, but it contains some deliberately provocative phrasing from someone who's previously had user flags (and a ban) for their posting style.
In the context of discussing perceived bias in moderation, I didn't find that particular comment to be deliberately provocative. While context is important for interpretation, I think flagging (like vouching) should be done comment-by-comment rather than based on previous actions under a different account. Killing comments based on historic behavior makes "recovery from mistakes" more difficult, whether the mistake is on the part of the moderator or the poster.
It's also not clear to me exactly why the FD3SA account (should we consider this the same user for purposes of flagging?) was banned. He ('james-watson') believes it was because of the content of the posts and not the style. I doubted this, and suspect he was banned due to his expressed intention "to return hostility in kind". While bans on this basis may be good policy, if this is true, 'james-watson' would have a reasonable argument that this punishment is indeed for thoughtcrime. As one of the targets, you are of course entitled to have your own interpretation.
For one thing that post doesn't appear to have been killed, and if it had been killed it probably would have been user flags, not mods, that did the killing.
I agree, and made the same point here: https://news.ycombinator.com/item?id=12315815. While this might be a good reason to downvote or rebut, I don't think that flagging is an appropriate response to factual inaccuracy.
> To discourage retaliation (I think) both downvoting and flagging are not allowed for direct responses to one's own stories and comments.
Yep, that's exactly right. Downvotes are also disabled for comments older than a certain time interval (IIRC it's currently 8 hours).
> Since "vouch" was added late, it reuses the same logic, even though the "retaliatory vouching" is not really a danger.
That may well be true, and enabling the "vouch" link for replies to one's own comments sounds like a good idea. You should email hn@ycombinator.com about it -- they're usually very responsive.
I routinely flag drama. I didn't originally flag that comment, but I would have had I seen it before this part of the thread, because it's drama. The trend away from drama on this site in the modern Dan era of moderation is heartening, but we can always use less of it.
PG was far more aggressive about banning people than we are; partly because he had less time, and partly less patience. After years of trying to be more patient, I'm unconvinced that patience in this area is a virtue. It seems to lead to more trouble and no fewer complaints. When efforts to do things a certain way lead to condemnation for the opposite, it seems like a sign that something's not working.
Perhaps I live with rose-colored glasses, but I do not believe that anyone has ever been banned from HN for making that quoted statement, unless you have left out some essential detail. My guess would be that you made that statement, and subsequently you were banned for something else. Perhaps you could post a link to the thread in question?
Never mind, maybe I found it, or at least the account: https://news.ycombinator.com/item?id=9039872. A little bristly, and I'm not sure if the assumptions are correct (Is it unequivocal that "Status is zero sum"?) but doesn't seem ban-worthy. But since it's a year before the account appears banned, maybe I'm missing a later comment with the exact quote?
Presuming this is the account you are referring to, it looks to have actually been banned for the similarly themed but much more aggressive comment "Sexual dimorphism is real. Your impotent rage and ridiculous ideologies will never change that fact. I will be greatly amused by your kind's zealous need to tilt at windmills." (https://news.ycombinator.com/item?id=11226294), which when challenged by Dan you defended by saying "I have a policy to return hostility in kind." (https://news.ycombinator.com/item?id=11227788).
I think this counts as leaving out an essential detail. The problem is not that discussing sexual dimorphism is off limits (others appear to have done so without being banned), but that "he did it first" is not an acceptable excuse for rude behavior here. Or maybe I have the timeline slightly off? It's hard to tell when some of the edits occurred.
In any case, I'm going to keep my rose-colored glasses on for a bit longer. I'd encourage you to keep making scientifically accurate statements about sexual dimorphism from your new account, but given that some interpretations have sensitive implications, it would probably be wise to approach it as politely as possible.
In fact, I have a feeling I'll get banned for this comment.
No, I don't think you will. You can actually discuss many "controversial" topics here as long as you try to do so politely. Assuming I found the right thread, it looks like the ban wasn't because you raised a forbidden topic, but for your stated policy of returning hostility in kind. "Tit for tat" has its place in game theory, but in real games (and communities) it has some serious defects if blindly applied.
The main defect in this context is that if all players adhere to the simplest version, it has a tendency to end in a "death spiral" of defection. Eventually, a poorly phrased response is interpreted as an insult, and after that it's a permanent race to the bottom. The strategy can be improved by adding some degree of "forgiveness" to allow both parties to reset to cooperation. Adjusting the trigger to require multiple offenses before retaliation ("tit for two tats") can also help to account for inevitable communication errors.
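As a rough illustration (a toy simulation; all parameters here are invented): two plain tit-for-tat players under a little transmission noise lock into long retaliation spirals, while a small chance of forgiveness lets them reset:

```python
import random

def play(forgiveness, rounds=1000, noise=0.05, seed=1):
    """Two tit-for-tat players with noise: each round a player's move
    is misread as its opposite with probability `noise`. `forgiveness`
    is the chance a player cooperates anyway after seeing a defection.
    Returns the fraction of rounds with mutual cooperation."""
    rng = random.Random(seed)
    a_last, b_last = 'C', 'C'   # both start by cooperating
    coop = 0
    for _ in range(rounds):
        # Each copies the other's last observed move, maybe forgiving a 'D'.
        a = b_last if b_last == 'C' or rng.random() > forgiveness else 'C'
        b = a_last if a_last == 'C' or rng.random() > forgiveness else 'C'
        # Noise: each move is occasionally flipped.
        a = a if rng.random() > noise else ('D' if a == 'C' else 'C')
        b = b if rng.random() > noise else ('D' if b == 'C' else 'C')
        if a == 'C' and b == 'C':
            coop += 1
        a_last, b_last = a, b
    return coop / rounds

# Plain tit-for-tat stays locked in retaliation after a misread;
# a little forgiveness lets both sides return to cooperation.
print(play(forgiveness=0.0))
print(play(forgiveness=0.2))
```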
[Responding to myself since I can't respond to the flagged and dead original. You can click on your username and turn on "Show Dead" to view it.]
I have witnessed HN darlings get away with far, far worse ad-hominem vulgarity with nary a peep from the moderators
Likely, but enforcement is always going to be "spotty", so it's almost impossible to distinguish bias from bad luck in any particular case. And perhaps they are "darlings" because they interact politely with the moderators?
For history's sake, let's link the original post which dang replied to and subsequently killed.
I don't know if it is intentional, but I think you are conflating different issues. Dan marked the thread off topic and explained why. Subsequently, another comment in the thread was killed by user flags. It is unlikely that Dan killed it himself.
There was zero rule breaking in that one, just thoughtcrimes
Arguably, although the counter-argument that Dan presents is that "attractive nuisances" are against the rules. He explains his reasoning (which I agree with) in a more recent response: https://news.ycombinator.com/item?id=12163939. The goal of moderation is not ensuring technical compliance with arbitrary rules; rather, the goal is "protecting civil, substantive discussion".
Like the common (and often truncated) quote that "democracy is the worst form of Government except all those other forms", the standard against which HN should be judged is not whether the moderation is perfect, but how it compares to the alternatives. Based on the current state of HN relative to other internet discussion sites, I'd claim that the moderation seems to be working pretty well. Where do you think is doing a better job?
Separately, are "clevernickname" and "FD3SA" both your accounts? The transition in the thread from one account to the other seems odd to explain otherwise. If so, the "zero rule breaking in that one" seems like an odd claim, akin to the "Freemen on the Land" claims of immunity from the courts based on being "Incorrectly Identified". See page 75 here: https://thelastbastille.files.wordpress.com/2014/02/meads-v-...
If HN is about entrepreneurship and technological progress, why do they ruthlessly suppress established scientific consensus?
I know this must feel like a well framed question, but it's really not. Why should we assume "HN is about entrepreneurship and technological progress" any more than assuming "the moon is made of blue cheese"? Who exactly is the "they" that is suppressing scientific consensus, and if this was the goal why choose such a round-about way to do so? For that matter, what does "consensus" have to do with science?
And here's my hypothesis: because it does not fit the current political zeitgeist.
A scientific hypothesis should make a falsifiable prediction. Is this the sense in which you are using the term? If so, what predictions does your hypothesis make that are different from a plausible "null hypothesis" like "the moderators are trying their best to keep HN as a place where substantive discussion is possible"? How would we go about testing these predictions?
There's a lot of disparate claims you are making here without a central argument.
No, I did not use sockpuppet accounts. Unlike fulltime moderators, I have things to do other than spend every hour of the day on HN to score political points. I only had one previous account, and it was banned as a result of that thread.
As to your other questions, you are losing the forest for the trees. What I am saying is this:
You are free to make of it what you will. Obviously, neither I, nor Dr. Cronin, nor the Holy Ghost himself can convince you of a fact that you do not wish to be convinced of; the choice is yours.
I only had one previous account, and it was banned as a result of that thread.
I appreciate the clear statement. My guess based on Dan's comments is that he thinks you are associated with other undisclosed accounts, and that this might explain some of the confusion as to what is acceptable discourse.
As to your other questions, you are losing the forest for the trees.
Possibly. Also possible that we are in different forests, or that I care more about trees than forests.
Thanks, I will read and consider. At a glance, I think it reflects my beliefs as well. I read the "Ant and the Peacock" long ago, but don't recall the specifics of her argument.
Hrdy also wins my personal award for "Best Evolutionary Development Theorist That's Been Almost Completely Ignored". Happily that seems to be changing a bit recently, with her work finally starting to get respect: http://blogs.scientificamerican.com/primate-diaries/raising-...
That's just not true. For example, universal basic income--a leftist idea even by European standards--has been discussed several times on HN. Lots of substantive comments both for and against.
Are you sure UBI is a leftist idea? The things that go along with UBI read like a right-wing wishlist: eliminate the minimum wage, discontinue welfare and unemployment benefits, and end the food stamp program. In its place, the free market would determine how best to spend basic income.
The core notion of it, though, which is government handing out money to people, is rather contrary to pure right-wing (especially libertarian) thinking; even more so when it comes to the "universal" part.
It tends to be supported by some more pragmatic libertarians, who recognize that some form of welfare is necessary, and see UBI as the least-overhead form that, if not the cheapest economically, keeps the associated bureaucracy (and the government sprawl induced by that) to the minimum.
However, it is also a popular idea on the left, especially among the more individualist-minded liberals, left libertarians etc.
It was originally a right-wing idea, often known as mincome (minimum income) or a negative income tax. Milton Friedman was one of the earliest proponents of the idea.
Yes, it is a leftist idea. The definition of the left is that it cares about social and economic equality. (The understanding - IMHO wrong - of the left as proposing a big state ultimately comes from the Communist Manifesto, where Marx used "the State" in a certain sense as a shorthand.)
Basic income would mean that part of GDP is redistributed in a purely democratic manner - everybody gets the same piece of the pie, whether they deserve it or not. It's very similar to democracy, but on an economic level. And democracy is in fact a very leftist idea (if you look at the history of the left), because again, it proposes that all people should have the exact same political power (one person, one vote), regardless of what they are or what they do or contribute.
The fact that today's right (at least partially) accepts some of these ideas, such as democracy or UBI, is actually a success of the left, or a manifestation of reality having a liberal bias. :-)
Yea, basic income isn't really that associated with leftism. The super-hard-right libertarian stance of "let people starve if they can't hack it" obviously doesn't like it, but otherwise it's basically a simple combination of wanting a social floor but without the attendant complexity, perverse incentives, and inefficiency of means-tested gov't programs. This view fits pretty comfortably in the center and center-right, while the left often just sees it as a way to low-key dismantle social programs that they're fond of.
I've noticed flagging being used similarly, too. The argument that downvoting should be used to register disagreement falls apart when you consider that only users with enough (>= 500, I think?) karma have disagreement privileges.
I think the 500 limit is justified as a means to discourage vote-brigading. Age of the account, by itself, is not a good enough indicator, because a person might create hundreds of accounts to be used for brigading at a later date.
But, yes, both downvoting and flagging can and have been abused here, despite measures to prevent them.
Flagging is extremely effective at taking content off the front page. I understand the intention - to quickly kill spam and avoid flame-war topics - but it's powerful enough to prevent discussion of issues entirely, and it is probably a greater contributor to groupthink than comment downvoting is. Agenda setting - controlling which topics are discussed at all - is far more powerful than controlling the range of views expressed on those topics.
I think you can "vouch" for posts and submissions now to counter the flagging. Of course, that assumes you will catch them before they fall off the front page or get auto-deleted.
I was referring to comment flagging, for which there appears to be no recourse (at least from my 140something-karma account's view). Similar problem(?), though.
To me, HN's "bias" is not so much about left/right, but rather the individualism/collectivism aspect of things. "Regulation is a good tool to weed out the bad stuff; the free market can't solve this or that; ban human drivers when self-driving vehicles mature; taxes as a means of control" and so on. We agree to disagree very maturely in many cases, but very often I see discussion that would warrant renaming this site to "Collectivist News".
> Conservative views typically wind up grayed, even when they are substantive arguments.
Not sure about that, could you find one such comment? Many people would upvote any greyed comment that comes with substantive arguments, regardless of their political view.
The anti-dissent phenomenon is amplified by the greying of comments, which continues to be a bad idea all these years later. The literal message and incentive there is: leave a comment that gets upvotes or it will get progressively harder to read until nobody can actually read it. So now you have to care about what you say, because it will be completely removed from the site by natural voting if you don't toe a certain line, meaning folks cannot objectively review your comment on its merits alone unless they go out of their way to highlight it. The absence of a showgrey even though showdead exists is an interesting thing that somewhat describes intent behind the feature, too.
Greying is one of the key factors in the anti-dissent, party-line mode here, which I've noticed as well. It was weird to pull scores and then implicitly put them back with greying, so you can at least tell if a comment is <= 0. One difference from Reddit is that Reddit explicitly suggests[0] that you not downvote to register disagreement, whereas HN does not take a stand and pg clarified that he's fine with it a long time ago[1]. So that ties together voting and agreement, which then ties together greying and agreement, which helps give rise to the anti-dissent environment you're observing.
I'd love to see some analysis on whether a grey comment is more likely to be downvoted, as well, because I'm almost positive it is. Once someone hits me with a 0, which is discernible as slightly greyer by itself, that comment is almost guaranteed to end up very negative.
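To make the mechanic being described concrete: a comment's score maps to a text color, and anything at or below zero fades toward the background until it is effectively unreadable. HN's actual thresholds, colors, and score floor are not public, so everything in this sketch is purely illustrative:

```python
def grey_level(score, floor=-4):
    """Map a comment score to a CSS-style grey hex color.

    Positive scores render as normal black text; zero and below fade
    toward the background. `floor` mimics the apparent cap on how
    negative a comment can go (an assumption, not HN's real code).
    """
    if score > 0:
        return "#000000"
    score = max(score, floor)
    # Fade linearly from dark grey at 0 to near-invisible at the floor.
    shade = int(0x88 + (0xEE - 0x88) * (-score) / (-floor))
    return f"#{shade:02x}{shade:02x}{shade:02x}"

print(grey_level(5))    # normal black text
print(grey_level(0))    # slightly grey, "discernible as slightly greyer"
print(grey_level(-4))   # nearly invisible against a light background
```

The point the sketch illustrates is the feedback loop: a score of 0 is already visibly distinct, which (per the observation above) may itself attract further downvotes.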
My opinions often diverge from HN and I am rewarded with barely-visible commentary very often, so I've been trained over time to resent voting rather than wish to contribute here, which I don't think is the spirit nor the intent of the greying feature. I don't think HN wants to be an echo chamber, to be clear, it's just that the quirks of the system create incentives that give rise to it. The last thing on planet Earth that I care about is my karma score, but I do care that what I say matters, and I cannot really talk about it because it sounds like complaining about downvoting.
HN could remove this disagreement moderation bias by implementing a meta-moderation feature similar to Slashdot.
Slashdot used to be a great place (like HN was a few years ago), and many of the best moderation systems originated there.
Meta-moderation kept everything fair because you could moderate the moderators. The way it worked was that people with high karma would get asked at the top of their page to please take time and meta-moderate 10 comments a day. You get presented with 10 random comments and the moderation +/-, and simply check a radio button indicating whether the moderation was fair or unfair.
The way it was enforced, I assume, is that people who were meta-moderated as unfair too often would lose their ability to moderate completely.
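A minimal sketch of how such a meta-moderation pass might work. All names, data structures, and thresholds here are hypothetical illustrations of the description above, not Slashdot's actual implementation:

```python
import random

# Hypothetical records: each moderation is (moderator, comment_id, delta).
moderations = [
    ("alice", 101, +1), ("bob", 102, -1), ("bob", 103, -1),
    ("carol", 104, +1), ("bob", 105, -1), ("alice", 106, -1),
]

SAMPLE_SIZE = 10         # "please meta-moderate 10 comments a day"
UNFAIR_THRESHOLD = 0.5   # assumed cutoff for revoking mod ability

def sample_for_metamod(mods, n=SAMPLE_SIZE):
    """Present a high-karma user with random recent moderations to review."""
    return random.sample(mods, min(n, len(mods)))

def apply_verdicts(verdicts, tallies):
    """verdicts: list of ((moderator, comment_id, delta), 'fair'|'unfair').
    Accumulate (fair, unfair) counts per moderator."""
    for (moderator, _, _), verdict in verdicts:
        fair, unfair = tallies.get(moderator, (0, 0))
        if verdict == "fair":
            tallies[moderator] = (fair + 1, unfair)
        else:
            tallies[moderator] = (fair, unfair + 1)
    return tallies

def can_moderate(moderator, tallies):
    """Moderators judged unfair too often lose the ability to moderate."""
    fair, unfair = tallies.get(moderator, (0, 0))
    total = fair + unfair
    return total == 0 or unfair / total <= UNFAIR_THRESHOLD

tallies = {}
apply_verdicts([(moderations[0], "fair"),
                (moderations[1], "unfair"),
                (moderations[2], "unfair")], tallies)
print(can_moderate("bob", tallies))    # judged unfair twice -> loses rights
print(can_moderate("alice", tallies))  # judged fair -> keeps rights
```

The design point is that reviewers judge the moderation, not the comment, which is what keeps the moderators themselves accountable.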
I agree the greying out of comments is problematic, but I think it also serves an important purpose, and that's to incentivize care in the crafting of your comment. It is the punishment of community norms as much as the negative points are. Unfortunately, community norms have a way of adopting views as well as actions, and we have what you refer to as anti-dissent mode.
Ultimately, the question is whether negatives of people who down vote on disagreement (as opposed to more objective items, such as lack of evidence for assertions or overly personal and/or aggressive comments) outweighs the benefit of a very effective way for the community to self moderate, given there is occasional pushback against subjective down voting when it happens.
The problem will of course seem larger if some of the common community views conflict with your own, and you've been subjected to the consequences of this. This is compounded by the human tendency (IMO) to view unexplained negativity as unwarranted. Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.
> I agree the greying out of comments is problematic, but I think it also serves an important purpose, and that's to incentivize care in the crafting of your comment.
Sure, I'd grant that as intent, but where this falls down is that it instead incentivizes me to just not be a part of the community. Why would I put care and work into a comment if I'm going to be rewarded by the time invested becoming essentially worthless? Arguably, silencing someone because they diverged from "community norms" is slightly hostile; I have a hard time believing your position that a community is served well by being built atop hostile moderation. You might build a community, but it might be a much smaller, much less desirable community. (I don't know.) It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.
Look, the opinion of HN from outside and the opinion of HN from inside are wildly different. I think this community tells itself things about civility, moderation, and so on; your comment sounds good, for example! It's just slightly off-base and overlooks some consequences, and it's not obviously wrong because we're within HN discussing it.
> Were you downvoted because your comment was lacking in some regard, or because you stepped into a pet issue of the community? It's easier to believe the second, and while it is the case sometimes, it's hard to tell what is perception and what is reality.
Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue. I can't prove this to you, but trust me when I say that I can tell when I've "earned" the downvotes. And I do, occasionally.
(And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)
> Arguably, silencing someone because they diverged from "community norms" is slightly hostile; I have a hard time believing your position that a community is served well by being built atop hostile moderation.
It is slightly hostile. It's also how groups enforce culture and less codified systems of conduct - not only in the active consequences to the offender, but in the visible consequences to others. I think it works in some cases, and does not in others. In the case of HN, it's ham-fisted, but I doubt more intricate systems that allow people to choose levels of response would actually work at all, given the amount of time people are generally willing to spend on such things (and the variability in the scale and type of response based on initial state).
> It's also a weird incentive to throw down, as you allude to, because "acceptable opinion" and "acceptable comment" are extremely conflated. So now one finds oneself writing comments that contain acceptable opinions, which is the care in crafting that you're describing.
I'm not really sure what you mean by the first portion of that sentence. I do agree one can find themselves writing comments that conform to opinion and not just an accepted method of argumentation. That's up to you to combat on your own, here, as the rules are now. People will value different aspects of discussion than you, and you have to deal with that in every discussion anyway. When speaking about something you know your audience is sensitive to, you either make some level of effort to present it in a way that minimizes miscommunication and irrational reaction, or you don't. That's true of every single instance of communication; I'm not sure why we would expect the problem to be solved here for some reason.
> Careful; this is lightly making the case that I'm unable to discern the difference and therefore off-base for criticizing a real issue.
It most definitely is not. It's making a case that people, in general, are bad at this in my opinion, because it challenges their view of themselves. That doesn't mean you are off-base, and it doesn't mean it's not a real issue (I did agree with you, for example), but the level of importance you attribute to this phenomenon is highly subjective. We agree there's a problem; I think we disagree on the scope and whether it outweighs the benefits imparted, and this is meant as a possible explanation of why we disagree.
> (And, were I less civil, I'd use colorful four-letter words to describe community "norms," as I'd hope any hacker would. I didn't become a hacker because I cared about normality.)
I assume you became a hacker because you like to know how things work. I like to know how things work. "Norms" (culture) are about how groups of people work - or more specifically, the informal rules they establish to allow interoperability. The "standards" (in the IEEE sense) by which we are enabled to build our more complex systems on top. Sure, there are negative aspects, such as inefficiencies, cruft, and errors, but it's allowed us to get to where we are, so I'm disinclined to view them quite as negatively as you seem to. Something better may come along, but given the irrationality of all people, I'm not sure it will work if it's all that different. I'm interested in hearing alternatives though.
Exactly, you have to care about what you say, and that's exactly why I love HN comments so much. Also, you make it sound like there are great comments that disappear because of the downvotes, but that's not the case. The only disappearing comments are stupid jokes or trolls. I don't remember any well-argued position that I was unable to read because of the downvotes.
> The only disappearing comments are stupid jokes or trolls.
This is not only completely false, it's rapidly provable as completely false. You are, quite literally, lying to yourself. As mentioned elsewhere in the thread, discussion on controversial issues is Exhibit A.
As somebody who generally posts in a contrarian manner (to a right-libertarian): there's a lot less grey here in general than there used to be. I also think that in recent years greying algorithms are protecting controversial comments to some extent (comments with lots of upvotes and downvotes.) I've had negatively rated comments that were not grey. Also, as mentioned below, greying seems to attract attention to comments, getting them upvoted until they ungrey. Downvotes are limited; I don't think anything gets lower than -3 or -4, so greying doesn't have the ability to reinforce itself very much. You have to keep posting to allow people to keep downvoting you.
Our HN could decline 5x more and it would still be wonderful. Without ever seeing, talking to, or meeting any HNer, I am connected with smarter, more curious, more open-minded people than I work with every day.
Really? To me, the thing that is most discouraging about HN is the rampant closed mindedness. For example, anyone who questions the long term safety of genetically modifying organisms and eating them, will always be downvoted to oblivion.
It's probably because doubting things isn't very useful. I can question the long term safety of walking up stairs, but unless I can provide an argument such that rational people are convinced they should also be concerned, I wouldn't be surprised at downvotes.
Not to knock the Anti-GMO argument[0], but it's similar to other conspiracy arguments. I can make so many rational arguments against it, but I know from the outset that if I see a comment that says "Well of course the Hyperloop isn't going to work, it was designed by Lizard people in order to keep us complacent!" I will never convince that person to see what I would consider a rational viewpoint. It's a lost cause. And as a user of HN, I would consider such comments to be noise, and I would downvote them.
Edit: I have the unpopular opinion (at least on HN) of not liking the "right to be forgotten", not to tangent too much, but I think that the ability to substantially change as a person is an important concept of human development, and that by allowing people to willfully scrub their actions from the record, they encourage the myth that people can't change, and that we're always how we currently are. And that, I think, trains people to be less forgiving. It's harder to forgive when you don't think someone can change.
[0]: although I do disagree with it (even beyond the screaming naturalistic fallacies)
My questioning the long-term safety of GMOs is logical, though. Peer-reviewed studies on the topic just plain don't address the very real possibility that GMOs can have very harmful effects 20, 30, or 50+ years down the road - both to our environment and our bodies. Believing that a short-term study that concludes they're "safe" also means they're safe in the long term is quite a leap of faith, and illogical.
Edit: To put it another way, I'll bet short term studies concluded that asbestos was "safe" to use as a building material. In the long run though, that didn't really turn out to be the case.
Maybe I'm making a good case for eliminating comments by contributing to this tangent. On the other hand, maybe I'm making a good case for my claim that HN's comments are not nearly as plagued by groupthink as people claim.
The problem is treating GMOs as a category. Maybe there's good reason to be concerned that genetic modifications that make crops less attractive to pests are hazardous to long-term health (I have no idea if there are), but what does that concern have to do with the risk of introducing genes to grow larger fruits?
I'm sure you can find reasons to be worried about any possible application of genetic engineering, but if you're willing to get that creative, you can find reason to be worried about any agricultural innovation. Maybe you claim that the introduction of a lentil gene to soy will indirectly create some carcinogen through protein interactions, but why don't you worry about the same issue when a new fertilizer is introduced? It seems to me that it's only because transgenic plants are new and scary and people are uneasy about "playing god".
Asbestos was ALWAYS known to be dangerous [1] so not a great example. I can't think of a case off the top of my head where long-term safety was incorrectly assumed from short-term studies. Perhaps some medicines? Anyway, in principle you're right that short term studies say little about long term impacts on health.
>but what does that concern have to do with the risk of introducing genes to grow larger fruits?
Off the top of my head, there are several consequences that fall into the realm of possibility. Larger fruits will require more resources from the host plant, which could alter its development in unpredictable ways. More nutrients could be taken from the soil in order to make larger fruits, leading to earlier depletion and requiring more crop rotation (a process many farmers put off due to profits). Larger fruits will attract larger animals to graze (just look at what happens when hikers throw apple cores / other compost into the woods, it alters the movement patterns of multiple species).
I understand these examples sound hyperbolic, and I think GMOs are definitely beneficial in some situations. However, introducing changes to the very genetics of our ecosystem faster than they would naturally occur has definite effects. To think that we can adequately anticipate, react to, and solve issues caused by these effects seems a bit hopeful, at least until we have a very advanced computational model of our ecosystem.
"Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should."
- Ian Malcolm, Jurassic Park
I feel it is hard to change your reputation as long as people remember your previous actions.
For most of human history, memory was constrained to the brain, which remembers fuzzily at best and, within mere years, can only summarize past events.
Now, with high-quality video from phones, voice recordings, and text preserved in perpetuity, people can not merely remember that a person held inane ideas, but can also see exactly what, when, and in which context they held those ideas.
So long as that's possible, it's easy for others to critique by saying "people can change, but X person said Y and that's beyond the event horizon for me".
Wait, I didn't realize the prevailing opinion was that "right to be forgotten" is a good thing. I thought the pseudo-libertarian strain combined with the strong archival beliefs of many on HN would turn common opinion against it.
> To me, the thing that is most discouraging about HN is the rampant closed mindedness.
Absolutely true. But is HN more closed minded than the folks you see everyday?
Personally, I find the HN community more open minded than 95% of the people I interact with daily -- even after including what you've just mentioned.
That hasn't been my experience. There are views that HN generally opposes, like being anti-GMO, and newcomers who don't know what HN's opinions are might accidentally run into something and get punished for it. But once you know what the standard HN opinions are, you just need to be very clear about exactly how you depart from them when you voice dissent. This means disclaimers and hedging — poor style in traditional writing, but necessary on the internet where you can usually assume that someone protesting GMO foods is committing the naturalistic fallacy.
Dude. I work in 2 jobs where the act of hearing any sentence with more than one multi-syllable word is like trying to find my Ashton-Tate manuals in the attic.
The purpose of community is to express common interests & shared values. Hackernews tends to express a shared value around Elon Musk. That is perfectly fine. It's something that should be respected even if you have differing views.
I bet a lot of the reason people downvote you about Elon Musk is that maybe you don't accept him as a "hacker" and don't see the way he's challenging the status quo and pushing technology, cars, transportation, and energy systems into the future. Hacking the auto industry, the solar industry, the space industry. What he does very much fits the "ethos" of what makes hackernews hackernews.
HN has become some kind of left liberal forum these days.
Personally I love the negativity of reddit and I enjoy it too. Snarky, sarcastic, vulgar or insulting comments in my opinion are part of discourse too.
I know that on HN, we shouldn't downvote unless something doesn't add to the discourse, but I'm pretty sure th13 is just factually wrong, his arguments are weak, and it really doesn't add much to the discourse. For example:
> An hyperloop is going to be a regrettable experience for everyone involved.
I think the fact that it's difficult to find examples of this is proof that there's not as much of a slant as you think there is.
Not difficult, just tedious. My being lazy and wanting to do something productive isn't proof of anything. Also, there isn't really an advantage to digging them up, since it is clear you'll just refute whatever I dig up.
Well, I might disagree with whatever you dig up; I can only really refute it if I present a good argument against it, which maybe I wouldn't be able to for some comments.
Trying to bring up gender equality issues, which is a genuine problem in our industry, also has a similar response. There are a variety of 'taboo' subjects that will get you grayed out and gone in no time at all.
> Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.
Funnily enough, right around the time I made an acct here (4+ years ago, after ~1 year of lurking), I saw an internal post by a well-known Googler talking about the recent drastic decline of HN comment quality and looking for a replacement. I still get value out of HN comments (clearly), but it's definitely changed substantially in the last few years, and not really for the better. I've resigned myself to every high-quality Internet community without heavy, opinionated moderation being a slow treadmill towards popularity and then mediocrity, or worse. Though this is of course something of a "you're not in traffic, you are traffic" problem.
You and I are around the same group, but I've long since abandoned participating in these conversations. Honestly I am in these comments because I had hoped this was the conversation being had. The HN community drove me away long ago.
I've had several people try to discern the meaning of your comment, but none of us could figure it out. You have words here that together sound like a sentence, but don't make any actual sense.
The implication is that if one were truly too good for HN, then one would already know about "the next HN": that exclusive comment site at which sober, mature discussion among intelligent, sophisticated people takes place much as it used to long ago on HN. Which is funny, not least because the level of discussion here even today seems vastly superior to what one finds elsewhere online. Part of that quality may be a willingness to welcome those for whom English is a second language.
I have never felt 'too good' for HN at all. That's one of the reasons I loved it in the past - I could pick almost any link, look at almost any comment section, and find information that was over my head; the content and conversations were challenging and forced me to learn, and I loved that. At a certain point I felt that it turned into a bunch of mean girls, and I left. I sincerely wanted OP to clarify his comment, because it seemed to be saying something poignant that I just couldn't understand. I see how my phrasing seems insensitive.
Maybe for the next community, instead of making it public - potential users will instead have to find the site by solving a bunch of hidden puzzles. Think secret club.
Jokes aside: a major part of why HN works is that the community here self selects for several traits which ensure healthier communication.
As a forum becomes more mainstream, many things happen to degrade the signal-to-noise ratio. (Pg had an excellent essay on the middlebrow dismissal, which for many other forums is a problem they wish they had.)
Eventually part of the crowd who came first realize it's not as useful anymore and decide to go somewhere else. The question is where.
We've seen eternal September come to almost every place and there's only a few traits which can help protect against it.
From my experience with many forums and modding, demographics are 80% of the issue; the remainder is topic of discussion, goal-orientedness in discussion, and moderator quality.
So the best solution is to create the forum and never advertise it. Let potential users figure out how to find it and how to get in.
Ah, that's a great clarification. I wasn't trying to be argumentative in my response, I was sincerely trying to determine what you meant. I'm glad you did, because your comment is great. Thanks!
There is always going to be the elitist attitude of newcomers ruining things for the "OG" people. However, there is certainly no way HN compares to the rest of the drivel out there. There's always going to be a fine line between popularity and trash.
> However there is certainly no way HN compares to the rest of the drivel out there
Shrug, that just depends on where people choose to set their standards; there are much worse fora and much better ones. As I said, I still clearly find HN comments at least worth reading, but the distinction isn't as binary as you seem to think it is.
>Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.
The interesting thing about this is that HN's comment system seems to discourage dialogue - i.e. you are not notified when someone replies to your comment. It seems to be more geared towards comment-it-and-forget-it.
I think it's prohibitive enough to not allow things to spiral out of control in some crummy argument. I feel it's more of a peer review system. State your comment on what you've read/seen, and move onto the next thing. Which honestly is fine with me.
Yes, and this has always bothered me. There's a chrome extension (Hacker News Enhancement Suite, broken at the moment due to recent HN changes) that made it easier to see your own comments. Since I no longer use the extension, I rarely check for replies. Bums me out.
Sometimes I'll ask a question and forget I asked it, or someone will ask me a question. I'd like to know about those cases more reliably. But if it contributes to any degradation of HN's high quality comments then I'd prefer to keep things the way they are.
It really is annoying. It allows for a very brief and mostly superficial conversation on whatever happens to be on the front page in the last few hours, and then it's done. That's no way to have a legitimate, in depth discussion. And the result is definitely a stunting of the quality of the conversation in the name of... I'm not sure, forestalling flame wars, keeping people interested by keeping the front page churning, I don't know.
It encourages dialogue between more than two people, which is often a good thing. The quality of a discussion usually improves when more people participate.
It's rarely a problem if someone forgets to follow up since someone else will usually do it, at least if their views are shared by other people.
That being said, the "threads" page is useful for finding and responding to new comments in your threads if necessary.
In my opinion, that's a feature not a bug. While it probably reduces some of the sense of community, it also makes it much more difficult to keep flaming on until the thread becomes a dumpster fire. A user has to actively seek to pick a fight and keep up with it which most normal people don't have time for.
The threads link at the top should work for recent topic replies, but you can also use a service like HN Watcher if you want to be emailed when someone replies to you. While I find the emails useful, I don't think it improves the quality of my comments.
It depends on the subreddit. There are many where civil conversation is the norm. r/politics and places like that are horrible. Just remove them from your default set.
You know, I get this response every time I talk about this topic. I really dislike the "well no one is making you use / buy / watch / listen" to something. People can be critical of something they want to succeed. It happens a lot in certain game communities. What if I WANT to talk about politics, in a civil manner? Any sufficiently popular subreddit, just devolves into complete trash.
> You know, I get this response every time I talk about this topic. I really dislike the "well no one is making you use / buy / watch / listen" to something. People can be critical of something they want to succeed. It happens a lot in certain game communities. What if I WANT to talk about politics, in a civil manner? Any sufficiently popular subreddit, just devolves into complete trash.
You're missing the point of e40's response (and others of its kind). It's not "stop whining, don't use it if you don't like it". It's informing you that there are still parts of Reddit you can get value out of if you change your usage patterns a little bit, instead of your blunt-instrument approach that leads to losing gems like (e.g.) /r/AskHistorians just because /r/politics is terrible. There's of course nothing _wrong_ with you deciding that Reddit overall isn't worth the effort, but there's nothing wrong with someone giving you an alternative in case you were unaware.
When Reddit blew up in popularity, almost everyone I know who'd used it for a long time got sick of the default feed (both submissions and comments) and then at some point found out that being parsimonious in your subreddit subscriptions can give you a pretty high-quality feed. Reacting with defensiveness and hostility to someone giving you that advice in case you didn't know is frankly just bizarre.
It certainly tends to get a lot of people riled quite easily.
What's that thing about polite company not talking about politics, religion, or sex?
The average person, myself definitely included, probably doesn't have much worth listening to on the subject anyway. To paraphrase PJ O'Rourke: though there certainly are many political commentators who might be worth listening to. Under some circumstances. In a crisis. Maybe.
One of the issues that makes political discussions extra toxic is that, because there are hundreds of millions of dollars spent on propaganda using the most sophisticated psychological techniques available, everybody is full of outrage-inducing hatreds against "the other side". This makes it really hard to have a constructive exchange of ideas without falling into a minefield of unexamined, implanted mental prejudices, even if you try really hard.
My least popular comment ever was downvoted for trying to point out the problem with assuming that the people who fought against the healthcare bill are the same ones who hate the idea of SS/Medicare reform. Are we really so camp-y in our politics that you can just assume that the people who don't like something you like are on 'the other side', and that we can ascribe the opposite of all our other preferences to those people as well?
r/programming is another good example. The community there is very hostile and actively downvotes and hates on most new things that aren't C++/Java/.NET/Python. It's been stereotyped as a bunch of overly angry .NET/Java devs; not sure how accurate that is, but the angry part is right. It's just a waste of time.
I think it's grown that way over time. Used to be very focused on functional and dynamic languages back in the day. Hell, Perl was the hot topic back in 2005-2007 era reddit.
Right, which just reinforces my initial statements above. Popularity breeds trash. Forums move from a small group of passionate users to the general population. This is where quality deteriorates quickly.
It's not just politics, it's anything sufficiently popular to attract a large portion of the general public. Only niche categories with a small group of passionate people work. Otherwise, it turns into a free-for-all.
Not so (imo). The neutralpolitics subreddit has grown continually for some years now, especially in the last 12 months, and I'd say that the moderators are doing a solid job of maintaining appropriate discussion and keeping things where they were intended at the start: rational, sourced, with a heavy hand on emotional content and personal attacks.
Similarly, I dropped r/gaming years ago, but r/games has been reliable enough and good enough for such a period that it's evolved into my default place I look for gaming news. (The League of Legends subreddit also does pretty well on content and moderation, for something running at the scale of some of the defaults).
The problem I have with reddit, usually, is discovery - it's difficult to find subreddits that are popular enough not to be dead, have the content I want, and are managed well enough to prevent the usual content/comment trash.
/r/politicaldiscussion was created as a response to the malarkey that has become /r/politics and they place an emphasis on civil, reasoned debate. May be worth your time.
I don't think this comment adds to the discussion at all. The comment is actually a great microcosm of the dearth of comment quality. 1) You would never know about this reply unless you sought it out. 2) Once you replied to it you said nothing of value to anyone. You even threw in an off-topic item. Let me guess, you don't have a TV either? Perhaps you are a vegan....
This is what is flawed about comments. It becomes an ego circle jerk...Which is necessary to even generate comments in the first place ("oo look how much Karma I get") but ultimately destroys a conversation.
As a very rough heuristic, subreddits with subscriber counts below about 10,000 seem to be able to maintain what makes them great (if they can be said to be great) without moderation.
You have a much higher percentage of users who are deeply interested in the raison d'être of the subreddit.
Beyond that, you either need moderation or have to make a new sub.
I actually enjoy articles that allow comments, because I like to read what people have to say about it before I actually read it. I don't really know why, maybe to gauge whether or not I want to get more information about the topic that the article title doesn't already provide.
I haven't read the article this thread is about yet, either, because the title gives me all the information I need at this moment. I know how crazy the comments were getting for NPR, which I started reading regularly a while back because they allowed comments on their posts. Now, I skip the comments if I'm reading an NPR article because most of the time it's some guy spewing something about how Obama is destroying the country when the article has nothing to do with politics in the slightest.
Same boat. No longer frequent Reddit except for a very small number of subs.
I have Adblock/Tampermonkey rules to hide comment sections from most news sites that I frequent. (If someone could point me to an extension that handles this, I'd be quite happy...)
I consider comment sections (or even comment counts) to be visual noise that distracts from grokking the actual content.
It's unfortunate, because I like hearing a variety of perspectives. But without highly aggressive curation, reading comments is a net negative on popular sites.
So true, maybe 10% of comments on reddit have actual value anymore. The most valuable sources of info on the web used to be forums or comments; now there is very little. I used to search Hacker News comments for useful info, but that is yielding less and less. We really need a good place for expert info and opinions. Hopefully my friend, who is working on a reddit-like site with a whole new voting algorithm, will help fill this void.
I've always thought of technology as being the sole driver behind the fragmentation of communities into small niche communities. It's interesting seeing that comments appear to be one aspect of technological change that drive people to seek niche communities (going off of another comment that it's better to frequent smaller subreddits rather than the larger, default ones).
Not sure where I stand on this one, but when we removed a distribution list for all the condo owners, all the negativity (which usually came from a few people) stopped and all the owners lived happily ever after. So maybe taking some of the discourse out of the online realm is not as bad as we think. People now come to condo meetings to talk about burning issues.
> Hacker News is about the only civil place I'm capable of contributing to a discussion to at this point.
that statement floored me
because historically, HN has been one of the most toxic comment forums I've seen. I've kept coming back despite it, not because of it. there's enough signal to justify it. but it's a kind of low-grade toxicity: a weird mix of passive aggressiveness, disagree-based downvoting, "cite a paper!"-ness, minutiae-level edge-case pedantry (that misses the forest for the trees) and neverending humblebragging.
To give just one example, I can't think of any place where I've seen more humble-bragging and stealth-but-blatant self-promotion than on HN. Whereas on Reddit I know I can find discussions where the participants are friendly, funny, insightful, mature, organic, etc. Not always, not everywhere, but I don't have to wade through a shit salad like I've had to do here on HN for years.
I do think it's gotten better in the last year, maybe due to the work of dang and some of the other UI improvements.
I do agree that on sites like CNN, YouTube, the comment threads tend to be overrun with the LCD behavior. Lots of noise there.
If this is one of the most toxic forums you've come across, you must be very fortunate almost to the point of being sheltered. Not saying this is the best ever, but it's nowhere near the worst. The worst traits on HN seem to be an abundance of nitpicking with some disagreements on a few pet topics. Many other sites have active anti-social posters that abuse users for fun, start flamewars for the "lulz" and will argue any side of any issue that causes grief to someone. Not even close to seeing that sort of nonsense here.
you're right and wrong. :-) (which is the best kind of right/wrong? haha)
what I mean is... I tried to make a subtle point, but it didn't come across. what I was trying to say was this is one of the most toxic sites I've seen for a certain mix of toxic behaviors: mostly passive-aggressive, hyper-nerdy/autism-spectrum types of comments, humblebragging, etc. I agree with you wholeheartedly, from my own experience that yes there are other sites where there is more direct abuse, more direct trolling and harassment, which doesn't happen (or nearly as much) here on HN. what I'm saying is, on the flip side, I see a lot more of a certain kind of... there needs to be a word for it... the slang terms I learned for it were douchebags, pinheads, compulsives, booksmart-yet-streetdumb, etc. I just have never encountered that as much over on Reddit (itself a broad category because so much variance between subreddits there, and front page vs not) as I have here on HN.
again, I wasn't saying there's no signal here, and no politeness. I see lots of politeness and maturity here. also lots of muck. thus my slang term "shit salad" -- mix of good and bad. my point was that there's a different assortment of bad behavior I see here than on Reddit, CNN etc.
it does appear to me to have improved a lot, especially over the last several months. we'll see. YMMV.
The nit-picking and edge-casing is what spoils it for me. You can write 500 words about a general topic and someone will reply, "See in that sentence, you said 'all' there instead of 'some' and I can show you a counter-example! Your entire message is therefore wrong!!"
bingo! well said. part of why it's become, for me, that whenever I make a comment on HN, I almost always regret it. I'm so tired of having to think about filtering my comments to cater to every possible fine-grained exceptional case, or conforming to P.C.ness, or this site's particular participant hive-mind (which seems focused disproportionately around what it's like being a current semi-autistic teen/twenty-something white American male programmer of a relatively privileged background financially. Sorry if this hurts, but it's the best broadstrokes description I can think of for the median commenter personality I've seen here on HN.)
HN doesn't appeal to the general population. It works because it's niche to a specific crowd, that is probably more educated, and working towards a specific goal of improving things in general.
I see comment moderation as one of the 'unsolved problems' left in this generation of the web. When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise.
Web technology scales, journalism scales (poorly, but a relatively small publication can pull big traffic), but right now there's just no substitute for someone manually checking out reported comments and banning problem users. When you have a site with as much traffic as NPR, that would probably take dozens or hundreds of people, and these orgs are loath to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges.
Maybe moving comments to people's own social groups on FB/Twitter will help to defray the costs; I don't think they're really seeing any discussion value for the most part.
What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right. This seems to solve part of the problem by setting an established, if not mostly-objective, standard for comment quality: journalistic publication standards.
As such, bias and opinion are welcome, provided that they're analytical, verified by fact to a reasonable degree, and respectful of common etiquette. The genius in this approach, as far as I'm concerned, is that it manages to preserve the original purpose of comments: scalable content-generation!
Clearly, moderation is a Hard Problem, but one that I think benefits from an economic/incentives analysis. One conclusion I've drawn is that restricting comments to paid-consumers makes banishment and sock-puppetry costly enough that moderators can mop up the rest by hand.
To ask a specific question: what, exactly, remains "hard" with this approach? Do you think "free to read / pay to comment" is viable, in principle? Do you think the promise of publication is not a good incentive? Why?
I think that incentive idea is great, and is a smart move to build a community, particularly when you're trying to draw subject matter experts. I like how some of the Ask<X> reddits do it, by flagging people with verified advanced degrees. People think that news sites are afraid of conflicting opinions, but in my experience that's nonsense, it just has to be well thought out and not "DEATH TO <ISRAEL/ARABS/SUNNI/TURKS/AMERICA>", which is the vast majority.
It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.
I'll err on the side of caution with revealing employee counts, but in my experience many of the FP/Atlantic/Mother Jones/Weekly Standard/pick-your-midrange sites are running on a single-digit to low double-digit number of web production staff, many of whom are also trying to make a writing, article layout, or fact-checking quota. The suggestion that these magazines can either get those staffers to moderate tens of thousands of comments per day, or quadruple their web staff just to improve the comments, ignores the business reality.
User moderation in the normal HN/Reddit way doesn't work well on news sites; it's too easy to game or brigade, and news sites can't or won't add unpaid moderators as gatekeepers.
That's what's hard; creating comments is scalable, filtering them is not. Leaving them unfiltered doesn't work either.
>I think that incentive [is good], particularly when you're trying to draw subject matter experts.
You bring up an excellent point. One of the fundamental problems with comments, I think, is that they create a space in which ignorance and expertise are equally weighted. In fact, it's often worse than that for reasons we all know: interesting issues are hard to distill into 300-or-so characters, and short, simple points are often more percussive.
Vetting credentials is a very good option IMHO for certain forums but not for others. Reddit's /r/askscience is an example of a forum in which it works well.
>It still doesn't solve the problem that for someone to _find_ those great comments, they have to _read_ them, and stop them from getting buried.
I wonder if this problem can't be solved through the use of machine-learning to classify comments into high-versus-low quality by grammatical and semantic analysis. This kind of first-pass filtering could, at the very least, help throw out the obvious trash and pre-select candidates for recognition.
Such a system can be tuned to minimize false-alarms (a shitpost getting flagged as good), which I think represent the most problematic of classification errors. This is a nice problem-space for ML because the increase in misses implied by a bias against false-alarms doesn't degrade the service much: not having one's comment selected for re-publication is unexceptional.
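To make the tuning idea concrete, here is a minimal sketch (pure Python; the `pick_threshold` helper and the toy validation data are my own illustrative assumptions, not any real moderation system) of choosing a classifier's decision threshold so that the false-alarm rate on held-out comments stays within a budget, accepting extra misses in exchange:

```python
def pick_threshold(scores, labels, max_false_alarm_rate=0.01):
    """scores: model confidence that a comment is high quality (0..1).
    labels: True if the comment really is high quality.
    Returns the lowest threshold whose false-alarm rate fits the budget."""
    candidates = sorted(set(scores), reverse=True)
    n_bad = sum(1 for ok in labels if not ok)
    best = 1.01  # accept nothing until some threshold qualifies
    for t in candidates:
        false_alarms = sum(1 for s, ok in zip(scores, labels)
                           if s >= t and not ok)
        rate = false_alarms / n_bad if n_bad else 0.0
        if rate <= max_false_alarm_rate:
            best = t      # still within budget; keep loosening
        else:
            break         # candidates descend, so stop at first breach
    return best

# Toy held-out set: (model score, truly high quality?)
val = [(0.95, True), (0.90, True), (0.80, False),
       (0.70, True), (0.60, False), (0.40, False)]
scores = [s for s, _ in val]
labels = [ok for _, ok in val]

t = pick_threshold(scores, labels, max_false_alarm_rate=0.0)
# With a zero false-alarm budget the threshold settles at 0.90:
# the bad 0.80 comment is rejected, and the good 0.70 comment
# becomes a "miss" we deliberately accept.
```

The point of the sketch is only that the threshold is a dial: a stricter false-alarm budget trades good comments for fewer bad ones slipping through.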
RE:Machine learning: I think there are two problems with that approach, one cultural and one technological.
The cultural issue is that many news orgs are still run by people for whom the idea that technology could accidentally censor a valid criticism or ban a decent voice is just too risky. I think this is changing, and many newsrooms today are a little more fluid than when I really cared about the problem 4 years ago.
The tech issue is a little bit of a cop out on my part. An ML approach is super attractive to me as a techie. Google (YouTube), Facebook, NYT, WaPo, and tons of other billion-dollar orgs have this problem, and could make loads of money by being seen as better communities.
On the more guerrilla side, hundreds of subreddits have automoderators written by savvy, caring moderators.
They have terabytes of training data, already tagged, and world class ML experts on staff. If it was a tractable problem with business value, why wouldn't they have fixed it? I'm guessing it's the sort of thing that looks doable from the surface, but you get buried in the details.
Again, cop out answer, so please go prove me wrong!!
I understand, and I think that's probably the most difficult problem of the two. I'd just like to point out -- in the interest of discussion -- three things:
1. Pre-filtering for moderators is different (much safer) than auto-banning by a bot
2. It's valid both to filter informed opinions that are poorly expressed, and for a publisher to have a preferred "voice", i.e. a style of writing that it favors.
3. The argument can be made that machines are no more biased than human editors, and that in many cases, the biases of the former are known. As a corollary to this point, there exist certain ML techniques (e.g. random forest classifiers) for which the decision process of an individual case can be retraced after the fact.
How do you think publishers would respond to these counter-points?
>technical problem
Counter-cop-out: someone has to be the first!
Somewhat-less-cop-outy-counter-cop-out: by your own admission, certain sites (e.g. Reddit) have high-quality automoderators.
I would argue that the problem is "approximately solved" and that this is sufficient for the purposes of moderating an internet news publisher. Again, I would make the signal-detection-theoretic point of my previous comment: I can selectively bias my automoderators in favor of reducing either false-alarms or misses. Of course, this brings us back to the cultural problem you mentioned.
By this I conclude that the bottleneck is cultural, which brings me to a follow-up question: what do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
omginternets we're starting to run into HN flame-war restrictions, and I'm working so apologies if responses come slowly.
> How do you think publishers would respond to these counter-points?
In my experience 1 and 2 are fine, but 3 is actually a _net negative_ to some of them. People who by and large have come up through 10+ years of paying dues in a 'The patrician editor is always right' culture _hate_ giving up control, even when it makes their jobs easier.
Editors I've seen have balked at things like Taboola and Outbrain, despite them being testably better than human recommendations, and saving staffers work. It's a fair argument that picking which stories to promote is a core part of the editorial job, more so than comment moderation, but the attitude match is there. Editors at one DC media org I didn't work for shot down A/B testing any new features in the first place, because there was an assumption that the tech staff would rig it!
I don't want to paint 'editors' with too broad a brush, but there's definitely a cultural reluctance at the high level to automated decision making.
> What do you think is driving the increased tolerance towards accidentally censoring a "decent voice"? Is it the understanding that it doesn't matter so long as a critical mass of decent voices are promoted?
It doesn't matter to you and me. We think like HN'ers, where there are trillions of internet packets flowing around every day, and a few will get lost. They think like hometown newspaper editors parsing letters. When you take on the responsibility of being a gatekeeper, screwing it up is a big problem, every time.
I think increased tolerance is coming from more exposure to the sheer volume (every week at FP the website gets more visits than the number of people who have ever read the magazine in its 50 years of existence combined), and a bit of throwing the hands up and saying "who knows".
Again, I'm speaking for a pretty specific niche of old-school newspapers and magazine people turned editors of major web properties, because those are where my friends work. Things are probably different at HuffPo or Gawker or internet native places, but clearly not that different because their communities are still toxic.
> I would argue that the problem is "approximately solved"
So I disagree here, but don't have evidence to back it up, other than years-old experience with Livefyre's bozo filter, which we didn't put enough work into tuning to give it a super fair shake.
Taking spam comments as mostly solved, I think there are 3 core groups of 'noise' internet comments:
1. People who don't have the 'does this add to the discussion' mindset, to use HN's words. cloudjacker's and michaelbuddy's comments below demonstrate this pretty well. I'd lump cheap-shot reddit jokes in here as well. They're not always poor writers, or even negative -- "Great article! love, grandma". Which falls back into the ethics of filtering them.
I suspect that this is 80%+ solvable.
2. The 'bored youth' and 'trolls' group. This is actually the worst group I think, because these are the people I suspect that make death threats and engage in doxxing and swatting. Filters will catch some of these people, but they're persistent, and many of them are tech-savvy and reasonably well educated. They can sometimes be hard to tell from honest extremists. A commenter from group 1 who is personally affronted can fall into this group, at which point they become a massive time suck. Hard to solve, but verified accounts help here in the US case.
3. Sponsored astroturfing. Russia, Turkey, (pro/anti) Israel, China, Trump (presumably the DNC?) all have a large paid network of people just criss-crossing the internet all day trying to make their support base look larger than it is. Especially in the US politics case, they often speak good English, and are familiar with both sides' go-to logical fallacies. They'll learn your moderating style in a heartbeat, and adapt. Unsolvable.
Anyway, if someone builds a good bozo filter, they're almost certainly a zillionaire. I hope it happens, but I suspect we'll just start looking back on website comment sections like usenet, as a good idea that didn't scale very well, and find something better.
Taboola and Outbrain's recommendations are so pathetically insulting, and the tracking so obvious, that I've both blocked their domains (router DNS server) and specifically set "display:none;" properties on any CSS classes/IDs matching their names or substrings.
It's pathetic bottom-feeder crap.
Maybe if I fed the beast through tracking, I'd see higher quality recommendations, but I won't, and I don't. They only serve to tell me just how precariously miserable the current state of advertising, tracking, surveillance-supported media is. I'm hoping it will crash and burn, not because I want present media organisations to die, but until they do, we don't seem to stand any chance of something better.
(What better, you ask? Information as a public good, supported by an income-indexed tax.)
I was referring specifically to their paid same-site recommendation engines. So you drop it into an article, and it recommends other articles from your site. In my experience it's decent to good, depending on what metadata you provide it.
I agree that the '10 weight loss secrets' promoted junk to third party sites is bottom scraping.
I really disagree.
Yes, Taboola may be promoting literally ANY content - even spam. So yes, I blocked them, but currently Outbrain is really operating as content discovery - I haven't found any content that abuses me as a reader. Not yet. I know that they also have strict guidelines for their advertisers.
Reading the other reply thread with slowerest gave me another possible solution, too.
Perhaps the comments sections for journalistic pieces from organizations like Ars, NPR, NYT, local news, etc could be more of a competition (like Slashdot). Top 300 comments get preserved, leave it open for a month with no comment limit and some light moderation, and let the conversation go wild (I like Reddit's system for this), then delete all but the top 300 at the end.
Adjust "300" and "top" to fit your organization's needs, just make sure they're clearly defined. Would also help limit the scope for an ML-based solution, too. :)
For news sites with a paid component, they could allow comments only from subscribers / donors. Having a gate which involves money will improve the conversation somewhat. I'd even go a step further and make comments invisible except for subscribers. People creating trial paid accounts could see the comments but not comment themselves. This latter step would prevent astroturfing from firms willing to pay $10 for a trial but not $100 for an annual subscription.
Moderators would still be needed but their workload would be reduced. And there would be money available for them since many would subscribe / donate just to be part of the community, which would make moderation less of a drain and more of the core profit-making.
> What are your thoughts on incentivizing constructive comments? I've seen publishers (The Guardian, if memory serves) select thoughtful comments and re-print them as micro-articles in their own right.
I don't think you're correctly identifying the problem. In my experience, the problem with comments, especially on news sites, is a glut of bad comments, rather than a lack of good comments. This solution doesn't disincentivize bad comments.
The solution to bad comments is deleting them before they are even visible to other users. Deleting aggressively, as is done in certain subreddits (r/science) may seem offensive towards naive users who just want to add their "2 c" to the discussion, but it's the only effective AND honest strategy: if your comment adds little of interest, it's worth nothing. The bar should be very high, the more popular the website the higher the required quality. But in the end I think NPR are making the right choice. Comments on websites are not a constitutional right, after all.
The aggressive moderation in /r/science is quite honest compared to other subreddits, which is partly why its moderators attract less controversy when compared to others such as /r/news.
The slashdot system for categorising comments seemed to work really well at making the highest quality comments stand out. I wonder why other sites haven't tried something similar; I don't think I've seen it used elsewhere.
slashdot nailed moderation; no one has attempted something similar. most systems are a simple up/down vote or like/report
i am also starting to wonder if the agegroup being hired to implement "social" for websites is now young enough to have missed slashdot in its prime.
the fact that people are still brainstorming from scratch instead of talking about how to improve slashdot's model reeks of reinventing the wheel because they never heard of it.
> i am also starting to wonder if the agegroup being hired to implement "social" for websites is now young enough to have missed slashdot in its prime.
That's me! Can you explain the Slashdot model and why it worked? Or point to a good write up about it somewhere else?
Slashdot's model was perhaps a little overcomplicated, but my favourite feature was the ability to tag up/down votes with flavours. +1 Informative was different to +1 Funny, and "Factually incorrect" was a different downvote to "Off-topic spam" (whatever they were called).
Other quirks off the top of my head: it capped at +5 and ... -1, I think? The score represented a thing closer to the up/down ratio than "Facebook likes". There was a dedicated -1 Overrated moderation for "I don't disagree that it's interesting, just not +5 interesting".
Also, logged-in users got a fixed number of moderation points at random intervals, and you couldn't moderate in a story that you commented in. I'd like to believe this discouraged "throwing away" points on low-effort joke comments, but I'm not sure the facts of Slashdot comments entirely bear that out.
Slashdot's method of scoring comments was overly complicated and probably did not produce any better results than reddit-style voting. However, Slashdot's killer feature was that the reader could filter by comment score and thus only read the 'good' comments, and not have to wade through hundreds of replies.
correct, they were better than reddit because they let the user sort based on their preference. slashdot generated a ton of metadata that described their content, and then gave you the power to intelligently utilize that metadata.
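As an illustration of the mechanism being described, here is a toy sketch of reason-tagged, score-capped moderation with reader-side score filtering. The function names, data layout, and clamp rules are my own assumptions, loosely modeled on Slashdot's +5/-1 range; this is not Slashdot's actual implementation:

```python
def apply_moderation(comment, delta, reason):
    """Record a moderation with its reason tag and clamp the score to [-1, 5]."""
    comment["tags"].append((delta, reason))
    comment["score"] = max(-1, min(5, comment["score"] + delta))

def visible(comments, threshold):
    """Reader-side filter: only show comments at or above the chosen score."""
    return [c["text"] for c in comments if c["score"] >= threshold]

# Comments start at a baseline score of 1, with no moderation tags yet.
comments = [
    {"text": "detailed protocol analysis", "score": 1, "tags": []},
    {"text": "first post!", "score": 1, "tags": []},
]
apply_moderation(comments[0], +1, "Informative")
apply_moderation(comments[0], +1, "Insightful")
apply_moderation(comments[1], -1, "Offtopic")
apply_moderation(comments[1], -1, "Redundant")  # clamps at the -1 floor

# A reader browsing at threshold 3 sees only the informative comment,
# and the reason tags explain *why* each comment scored as it did.
print(visible(comments, 3))  # → ['detailed protocol analysis']
```

The two pieces together are the point: the tags generate the metadata, and the threshold filter is what lets a reader skip the noise without anyone deleting it.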
Slashdot's moderation system was vaguely effective. Complete crap rarely rose to the top.
A great deal of high-quality commentary was buried, however, often the best and most informative. That's fairly much par for the course.
Much of the early vibe on the site came from the fact that it was simply where intelligent people were commenting online -- especially the early Free Software crowd (well, early in terms of Web 1.0 -- there was the whole 1980s and early 1990s contingent as well).
ESR (before he went full whackjob mode), Ted Ts'o, Alan Cox, Bruce Perens, Rasterman, and others.
Much of that group seems split among HN, LKML, LWN, and Google+ these days, along with some blogs.
When I was delivering newspapers as a small child many aeons ago, the best page of The Guardian was 'letters to the editor'. The rest of the paper was pretty good back then, there was no email, so anything printed in the 'letters to the editor' had to be posted in, to appear some time after the events in question.
Needless to say, an event happened and was reported the next day, so it could be a whole week between the Trump-of-the-day saying something and comment appearing about it. All of this would be filtered by the 'editor', however you did have frequent letters by the likes of Keith Flett, who somehow got his letters published more often than the other 3-5 million readers (as it was back then, just UK sales with poor distribution in places like Birmingham).
There were no 'likes' back then so you had to have something to say to bother writing in.
How do we get a digital equivalent? I don't buy the dead-tree paper these days so no idea if 'letters to the editor' still exists, but, back then it was good, very good.
It's interesting that simply restricting immediate commenting might at least deter useless comments. People who are commenting in order to elicit a response, I suppose, probably have less important things to say. Or maybe they wouldn't say them if they were not granted the immediate satisfaction.
I assume it would kill some collaboration/innovation like on HN or a meaningful subreddit, but maybe no one really ever has anything meaningful to say when reacting to general news...
I guess it would also produce duplication from many people not knowing something was said already (however, the duplicate reactions could be monetized later down the line maybe...)
A podcast I frequent does this sort of thing. If your comment is read on the podcast (and they read one a day) then you get sent a .NET Rocks coffee mug. Which is kinda neat.
The podcast is .NET Rocks and their comments seem to be pretty good overall.
Nobody old enough on here to remember Slashdot's moderation system?
Not everybody could promote or demote comments. You got randomly assigned the ability to moderate comments so when it came your turn you took it _seriously_.
That community had one of the highest quality comments. Then somewhere in the mid-2000's it got super anti-Microsoft and anti-anything-not-F/OSS. I'll give them credit; it probably reflected the highest quality comments of their userbase at the time.
Slashdot's moderation still had some problems - which might be inevitable, I don't know.
There was a big bias towards early comments - moderators had to see your comment before they could upvote it to the top of the page, but once it was at the top more people would see it and keep it there; so a comment that would score well if posted as comment 10 would score nothing if posted as comment 50.
And karma tended to reward /popular/ comments, which were often things the hive mind agreed with, rather than high-effort comments. Discussion about DRM? Get in early with "DRM is impossible because" or "format-shifting should be a right" for a quick high score.
> That community had one of the highest quality comments.
One of the biggest differences between Slashdot and a site like reddit is simply size. Reddit is now the 8th or 9th largest website in the U.S. according to Alexa; it's getting as big as Twitter and is larger than Netflix. Slashdot at its peak popularity wasn't even a drop in that ocean of traffic and pageviews. When you get that big, your problems are of a different sort, requiring different solutions. Hell, I think reddit has single subreddits that are bigger than Slashdot was at its peak.
This is important because it's easy to have "high quality" when your traffic is low. It's easy to moderate and easy to keep people on-topic. I speak from experience -- I moderate one or more default subreddits on reddit, as well as smaller subreddits, and the smaller ones are much easier to handle. They're virtually on autopilot with minimal moderation required. The larger ones on the other hand... It's like a non-stop war.
While I think there may yet be some sort of NLP/ML-based filtering that can improve the signal to noise ratio, the fundamental problem is that the effort is incredibly asymmetric.
It takes an author far, far longer to craft their work than it does for someone to heckle it.
If people weren't driving up page-views by coming back to the same article to see if their comment was liked or replied to, I think this would be a very easy decision for most sites: at some point you are responsible for all of the content on that page.
> "these orgs are loathe to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges"
But suggesting people engage instead on Facebook brings a whole new set of ethical concerns. (1) Facebook manipulates users. (2) Facebook reorders feeds. (3) Facebook would lower the priority of conservative news sites. And let's not forget that Facebook is probably outsourcing moderation anyway. Plus, Facebook commenters can be just as bad as regular site commenters.
> (3) Facebook would lower priority of conservative news sites.
I worked on the trending product. This did not happen. The whole thing goes back to one guy complaining now that he couldn't pick Breitbart for the highlighted slot for some story because it wasn't on the list of approved sites. And this list is actually available here https://cdn.ampproject.org/c/newsroom.fb.com/news/2016/05/in...
Of course no one ever asks why he wanted to pick a controversial site to highlight instead of say a boring straight forward wires service report like the AP.
Of course the story still appears, and Breitbart could appear in slots 2-N via the personalized ranking algorithm, so it's not like it was suppressed. He just wanted to shove it into slot 1, where everyone would see it.
Sadly, I feel this is one of those cases where it's impossible for the correction to ever overcome the initial misinformation. On average, people do not accept new information when it refutes their existing knowledge base. This is doubly so in tribal areas like politics.
Yup. It's basically saying "we can't afford this, so we'll make it not our problem."
FWIW FP briefly used an embedded facebook widget, and a nonzero percent of their livefyre users logged in via FB.
It did little to nothing to stop abusive comments. The HN crowd cares a lot about what sort of history follows around our names and our handles. Many others, both in the western world and abroad, do not.
There's a German blog that used to be popular (blog.fefe.de) that has no comment function. So some people built a website that offers the same blog, just with a comment function.
They built in a captcha that fails with a probability proportional to the likelihood that your comment is a troll comment.
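That mechanism is easy to sketch. Here's a minimal toy version of the idea: the captcha's failure rate is tied to a "troll score" for the comment. The word list and scoring heuristic are invented for illustration; a real system would use an actual classifier.

```python
import random

# Hypothetical sketch: the CAPTCHA "fails" with probability equal to a
# crude troll score for the comment. All names here are made up.
TROLL_WORDS = {"idiot", "stupid", "shill", "sheeple"}

def troll_score(comment: str) -> float:
    """Crude stand-in for a real classifier: weighted fraction of troll words."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in TROLL_WORDS)
    return min(1.0, hits / len(words) * 5)

def captcha_passes(comment: str, rng: random.Random) -> bool:
    """Draw a uniform number; the captcha fails with probability troll_score."""
    return rng.random() >= troll_score(comment)
```

A benign comment (score 0) always passes, while a comment the heuristic scores at 1.0 can never pass, no matter how many times the troll retries.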
I know someone running a company that aims to solve exactly this problem, and they were attempting to sell to NPR, too. Last I talked to them about it, they said NPR seemed interested but has the typical years-long enterprise buying cycle. So, this news is really too bad.
Unfortunately at least 90% of internet comments are trolling, vitriolic, ignorant, generally useless, poorly written, unhelpful, add nothing to the topic, and basically serve as web pollution.
Yes, I think using FB login pretty much solves the problem as it exists today. Take a look at civilbeat.com, a regional news site by Pierre Omidyar (The Intercept). I'm fairly certain they only allowed comments via FB login for years. It meant a lot fewer comments than they would have gotten otherwise, but they were all legit. Now it looks like they allow FB, Twitter, or local auth. But the comments are still mostly OK. Maybe they are looking for more activity by easing the requirements and believe they've built a culture of good commenting?
I think describing FB login as "solving" the problem is definitely overstating it by a lot. I've seen plenty of dumpster-fire comment sections that allowed only FB users to comment.
I'm not sold on it because a lot of newspaper web sites use facebook comments and any topic about politics, race, or gender seem to be full of people making hateful comments.
Seems like it was quite a bit "better" than the alternative (basically anonymous user logins making comments) for my home town's recent transition to FB comments, for what it's worth...
The one place I've actually found awesome comments was "the economist" (well, HN isn't bad either), and the ny times is kind of OK. Everywhere else feels pretty iffy...
It solves the problem for me, I guess. I won't be commenting anywhere you have to log in to facebook because I don't want them tracking me all over the web.
While a facebook account gives some legitimacy, I also like sites where you can post anonymously or at least pseudonymously.
If you are using Tor Browser in Tails over a coffee shop wifi while you are lying down under a blanket in the back of a truck driven by a stateless hobo with no fingerprints who you intend to murder later in a country with a healthy democracy, you are probably still not anonymous. If you are not doing those things, you are definitely not anonymous.
"When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise."
Assuming you're trolling, but at the risk of feeding:
Someone who posts "Sir your magazine and Hillary Clinton are tools of Israel and should be killed by Hamas, God willing" on every story about the State Department, or "Oh $WRITER I see you live in DC and went to $COLLEGE, maybe I'll come pay a visit to the next alumni event and teach you some respect for $COUNTRY", isn't the target user for a major American publication. It doesn't want those kinds of abhorrent sentiments to live alongside its brand on its website, and it is under no obligation to give voice to their ideas.
They're an exceedingly small percent of total readers (when they're even real readers), but a much larger percent of online commenters, hence the problem in the first place.
Even in the non-bot non-astroturfing case, the people who make those comments may be actual readers (although they're exceedingly unlikely to be paying subscribers), but they definitely fall into the bucket of 'can be filtered out, to no appreciable loss'.
They're users in the sense that the website is free, and anybody can be a user, but not in the sense that the publication has a duty to them, in exchange for their money or attention.
Aside from [bot] spam, I agree with the statement "Those were your users."
What OP really wants are the good comments, which is more than just spam filtering and is also more subjective. If an ill-informed 13-year-old's comment would be considered low-value noise, website operators would need to engage in something resembling censorship, which has its own set of problems.
Don't have time to elaborate - but moderation tools actually link to many other deeper problems in meat space, and IMO lead to the kind of tools which-should-not-be-made.
Disqus more or less figured out comment moderation, in my experience. I've yet to see a Disqus-powered comment system overrun by undesirable content.
HN is failing at comments. Over the last few years, the community has deteriorated to the point where on many articles every single comment is downvoted and grayed out. That signifies quite a rift in the community. HN used to be upvote-intensive and excitement-driven, but today it's downvote-intensive and annoyance-driven.
Probably a signal that the user base does not find those issues interesting.
Snowden because it's nothing we don't already know, and refugees or gender politics because they always degenerate into political (i.e. not interesting) mud slinging matches.
On a side note, if a community with the general high quality and good moderation of HN can't have a good discussion on those topics online, I'm inclined to believe that having one is just plain impossible.
Personally, my thought process upon seeing one of these articles is something like:
1) Ugh, another one. Let's check the comments..
2) As expected, a dumpster fire. Nobody even RTFA. Let's look at the article..
3) Nothing even remotely new or interesting. Who voted this up? Flag.
It's far easier to manipulate systems than it is to accurately reflect either your typical reader viewpoint, or an intelligent and informed viewpoint. This is a classic failing of any democratic system, election balloting included.
Early "democratic" systems were often anything but -- about 14% of Athens' citizens could vote, and about 6% of the US at the time of George Washington's election. There are arguments for a broader electorate, but they come with distinct problems.
Vote brigading in particular is a standing issue on almost all online moderation systems. Some sort of trust cascade might help. It's what, say, the US electoral college was meant to provide initially, though how much of that function remains (and how it might manifest) is rather in question.
As for Snowden, a counterpoint is that some people see this as an issue which requires constant reminding. Advertising and propaganda both work through repetition, and sometimes the truth gets a chance at that as well. There's certainly enough repeat traffic on other topics at HN. (Though yes, many of those get beaten down in the submission queue.)
If the article weren't interesting, it wouldn't have been voted up.
Marking down the comments indicates a desire by some to enforce groupthink. Why? Because many people use votes to indicate agreement or disagreement instead of as a quality metric.
I think it's harder to agree/disagree with the typical headlines featured on HN. Most articles on HN appear to be straight.
But let's say that two articles were in the queue, one pro-X, the other anti-X and the pro-X forces were dominant. Sure the pro-X article would hit the FP, but the anti-X forces would still comment on it and be down voted.
Also the bias is only visible in the comment section because down voted comments remain visible, whereas a down voted article gets flushed down the memory hole.
Just because you want to argue about politics with people doesn't mean that people want to argue about politics with you! Maybe they do, sometimes, in some contexts, but if the social cues (i.e., downvotes) indicate otherwise, then maybe not at that time and place. There's nothing wrong with people not talking about stuff they don't want to talk about.
Also, internet forums have learned over multiple decades that otherwise interesting discussions can easily get derailed by people screaming at each other over unresolvable issues. If the community doesn't keep a lid on it to a degree, the quality of discourse goes into a downward spiral that it can never recover from. It attracts people who just want to argue about stuff and it drives away people who want to have interesting discussions. This has been seen time and time again, in newsgroup after newsgroup, mailing list after mailing list, web forum after web forum.
Holding back that inevitable decline is like fighting against entropy: if it stays popular, HN is almost guaranteed to decline and become more and more like Slashdot circa 2010, right before it poofs out of existence and/or relevance. But if users actively push back against the tides of forum entropy (i.e., discussion getting drowned out by arguments), a forum can at least have a nice long run before that happens.
I think what people want to avoid on HN is the sort of discussions where people are just asserting hot takes back and forth to no other end than the act of publicly asserting hot takes. This was never fun to watch on Crossfire or First Take or whatever, it's not fun at awkward drunken family gatherings, and it doesn't fit in with the vibe of HN. It's invigorating to the participants but much less interesting to read, and for every poster there are hundreds or thousands of readers.
That applies to online forums just as much as it does to real life, some forums are just more focused than others (just like some households are way louder, more chaotic, and have more drama than others). Almost every place other than HN thrives on arguments, so at least there are plenty of places to have them.
I don't know enough about Disqus to render an opinion, but I do find it entertaining that the sample comments shown in the animation on their front page are entirely noise, in that they contain nothing more than a "Yay!" sentiment.
How has disqus figured out comment moderation? As far as I know, they don't make a big effort to create great comment communities. Do you have any extra details?
HN has far better comments than any disqus comment feed, on average, in my opinion.
Unfortunately, not much interesting happens outside of politics. CRISPR and exoplanets spring to mind as exceptions, but the software field has definitely stalled.
Politics seems to be the force that can bury any amount of advancements in other fields, hence interest.
95% of internet comments are pure trash and basically internet pollution. The other 5% can be a mixture of deep insight, thoughtful discussion, and relevant opinion. Sorting out the trash and insisting on quality comments is an unsolved problem, perhaps with an eventual tech solution.
The New York Times is probably the only site I know of that does comments well, and they are obviously heavily moderated. But, they're smart, sometimes funny, often insightful, and generally worthwhile to read.
Some general forums and social sites do comments reasonably well too, this one included. But Reddit is a toilet, and Facebook and Twitter are the dirtiest of cesspools.
The New York Times comments are free from trolls and spam, but it's a frustratingly obvious echo chamber when it comes to politics. I'm a liberal guy but I can't stand it. David Brooks wrote an interesting column (http://www.nytimes.com/2016/08/09/opinion/the-great-affluenc...) a week or so ago and most of the comments are just bashing him for being a Republican, as if that has anything to do with the subject matter.
i think your diagnosis is wrong. show me a conservative news outlet on the web with a high SNR of thoughtful and intelligent comments, free of frothing, conspiracy-laden bullshit. maybe the NYT is an echo chamber because modern conservative positions are so weak and contradictory, they can't stand the withering critique of a well-moderated forum. instead, they only survive in troll havens.
as for brooks' column, you might be missing some context. brooks has made a career of talking out of both sides of his mouth and (annoyingly) providing intellectual cover from the NYT for a plethora of bad conservative ideas. now that they're blowing up in his face, he's backing away from these stances.
"He should realize that we’ve been trying to bring the tribal ethos to the U.S. for a long time, with strong local communities providing the sort of help and social services that bind people together and take care of each other as we get older, or fall short in some way.
But He Who Talks with Forked Tongue likes to imagine an egalitarian utopia where 99 percent of us are quietly stitching blankets while a few get to hoard the vital resources. When the tribesmen and women protested and occupied Wall Street, Brooks nearly went on the warpath, and wrote a column in the Times entitled “The Milquetoast Radicals,” (10/11/2011) in which he castigated the unwashed hippies who dared to protest the insane degree of income inequality in this country."
I was not comparing NYT to any other news outlet. It's an echo chamber regardless of the fact that conservative news sites also have echo chamber comment sections.
The David Brooks article was just an example. When Bernie Sanders was still campaigning every comment on Hillary/Bernie-related articles was about how the New York Times is wrong and that Bernie is the best, people will learn about the political revolution soon enough, etc. I was a huge fan of Bernie and I got bored of those comments instantly.
neither am i comparing them. you're observing that the NYT comment section is free of trolls/spam but is otherwise a liberal echo chamber. that's another way of saying that it's lacking a counter-balance of intelligent conservative comments. i'm accepting that critique for the purpose of argumentation, and replying by asking you to look around and find anywhere on the web that has a majority critical mass of intelligent conservative commentary. once you realize that it pretty much doesn't exist, maybe that will lead to a different conclusion...
anyway, one of the few places i read that, for whatever reason, does carry an even mix of intelligent comments across the spectrum is interfluidity. for example:
Holy Hell, you are right. I actually have the distinguished privilege of having one of my NYTimes comments be a "Times Pick", meaning the ed. board actually read it and recommended it for insight, I suppose.
My comment was mostly meta, calling out people for missing the point of an op-ed. The op-ed was from a privacy/civil liberties person about why a "no buy" list for guns would be a bad idea. He wasn't arguing on the merits for or against gun ownership, just that these secret lists on which LEO acts are dangerous.
Every comment was something along the lines of "What about my right not to be shot in the streets?!" - I tried to point this myopic view out, and every reply to my comment was "What about my right not to be shot in the streets?!".
As a smallish-government liberal (Public services are well and good but government should be kept in check by a powerful and vigilant population) I get torn up whenever I defend gun rights or spending reductions on that website.
Metafilter does moderation well. They have a team of paid moderators. It costs a token $5 to join, which means it's expensive to generate sock puppets and anyway the moderators keep your credit card on file so they can permaban you if they need to.
I think it's more than that; it's a psychological thing. You're not likely to shell out five bucks to spew an epithet at someone - you have to think about it. You have to get your payment info together. You have to think about your budget, even a bit in passing.
Automatically generating a stupid sentence is hard. Stupidity is difficult to simulate. Not lack of coherence. Not pure noise. Not word salad. Not poorly trained neural networks. But stupidity. It's hard.
Well, you can pretty much come up with a stupid paragraph on any topic if you just write a small script that searches YouTube using the appropriate search pattern, and just grabs some random comment from the top link. Chances are pretty good it's a stupid statement.
I've always appreciated the NYT's approach to flag their "Picks" as not just the ones receiving the most upvotes/recommendations, but those that represent a wide variety of opinions.
Sturgeon's law[1]: 90% of everything is crap. 90% of comments are crap. 90% of journalism is crap. The difference is that a good portion of the 90% of journalism that is crap is off on sites that you don't read.
The German Frankfurter Allgemeine Zeitung (FAZ / http://www.faz.net) also has thoughtful comments. I used to think it was something about the community, but at this point it seems more related to moderation. You don't see a lot of trolls on the site.
Given that it's FAZ, I suspect all they need is a check that the comment is correctly punctuated, uses the subjunctive, and consists of 3 paragraphs written in 2 sentences :)
We should ask ourselves what value on-article comment sections provide. I believe HN is good because it serves a specific trade purpose and caters to a specific high-end niche audience. HN's participants are willing to cross a significant participation threshold to be part of the community here. I believe that kind of structure needs to be in place to get good community participation. I guess I'd summarize it with these points:
1. require significant investment from the userbase to participate successfully
2. promise to assist the userbase with something that is critically valuable to them in exchange
3. regularly deliver on 2 to keep 1 worthwhile
4. provide reputation tracking and management utilities so that the user can cultivate a profile that reflects the investments made in point 1
5. for recurring participation, provide variable rewards that trigger the brain's hooks for surprise, which translates to enjoyment.
HN hits those points, but blogs definitely don't and don't want to. They want to bring in barely-interested readers from search, from anywhere on the web. Many of these readers won't even make it into the comment section. Thus, a good community tacked on to the bottom of their articles seems unlikely.
If those principles are required to cultivate a worthwhile community, the community should always occur external to the publication of the article. The community needs to be the centerpiece, not the article. I use HN this way; the discussion is the primary thing, the articles are the subjects submitted for the community's discussion.
The other caveat is that it's difficult to provide points 3 and 5 when you're just starting out. From what I've seen, it practically always has to be artificial until the momentum becomes self-driving (if there is a physical community that uses the online forum for spillover, this may not be applicable; this is basically what happened with HN). We need better solutions there.
Since individual blog posts have certain quantities of Google juice, comment sections will be overrun with spa--err, "SEO professionals". Other participants are often low-effort drive-bys. If the community isn't the principal focus, participation will be spotty and it will be hard to develop elements fundamental to meaningful community engagement.
Facebook and Twitter are normal people making normal comments on random stuff they see. These people generally feel a compulsion to let their feelings out and Facebook/Twitter provide it. I believe this is probably what was originally intended for blog comments, but because Facebook/Twitter are real, stable communities, the participants are inclined to leave comments there instead of on the target article itself. These comments are often loose and instinctive, which is not necessarily to say they're invalid or worthless, but a community won't form around individual postings because there's no common unifying dictum (point 2 is unfulfilled, and point 1 is minimal on random FB/Twitter posts).
Most everything you described in that list was something that web forums and Usenet provided, but they were supplanted by a mix of on-article comments and social media in its current state. It is frustrating to have seen most forums end up evaporating, but they are just hard to keep momentum in.
It would be really nice to see someone figure out a good way to bring forums back in a way that didn't turn into a black hole.
As a heavy forum user (since the late 90s), I find reddit supplanted them. Reddit is basically every forum I've ever been a part of, all on a single website. More importantly, it's every forum I didn't even know existed, on a single website. I found many niche hobbies and interests I wouldn't have otherwise found thanks to reddit.
I think that's why reddit grew so large, so quick. It does what forums do (provides similar discussions), except better.
I agree that reddit supplanted most of them for particular niches but building a community around a few niches doesn't seem to work well. You can't really combine r/ArtisanVideos, r/programming and r/ECE into one 'whole' where a forum such as the EEVBlog or the RPF can combine these more diverse things into a more cohesive community with more users overlapping.
Reddit definitely does better as the 'front page of the community', but you see fewer cases of hundreds of posts accumulating over 2 months on a single entry without a moderator's pin.
> It would be really nice to see someone figure out a good way to bring forums back in a way that didn't turn into a black hole.
Isn't that what Reddit is trying to do? Creating a subreddit seems to be the default way to stand up a new forum. Whether the platform is up to the task.... jury is still out.
Well, one person's trash is another person's insightful comment. That isn't to say there isn't hate, and there aren't trivial comments, but far too many people will simply dismiss any comment they don't agree with as ignorant. This is most likely to occur in political or religious discussions.
Cons: Requiring comments via Facebook means loss of anonymity.
Pros: Requiring comments via Facebook means loss of anonymity.
I suppose it depends on what your priorities are. If you'd like the insightful input of someone who might be close to the source of the topic at hand, but maintains anonymity for safety or fear of repercussion, then the requirement to use Facebook could be quite damaging.
On the other hand, if your primary concern is the nameless faces spewing hateful, racist, or otherwise inflammatory garbage on your comments section, the Facebook requirement with its real-name policy could go some ways to curtailing that kind of dialogue.
I imagine your average article discussion consists of 5% of the former and 95% of the latter, and so I can understand why they might choose to go this way, even if I am disappointed by it.
> the Facebook requirement with its real-name policy could go some ways to curtailing that kind of dialogue.
This was the theory for a while, but empirical evidence has refuted it. People are just as bad or worse when posting under their real names, and it's not just an anecdotal feeling anymore: https://www.techdirt.com/articles/20160729/23305535110/study...
It's interesting to see that real name policies can actually make trolls behave more aggressively. On the other side of the spectrum, I anecdotally find that they actually prevent me from posting constructive things. I frequently refrain from leaving reviews for restaurants and stores on Google because I don't want them associated with my real name. It's not that I would otherwise be leaving low quality or bitter reviews, I'm just not particularly interested in anybody who knows my name having access to information about where I shop, what I eat, etc (even though I would be happy to share this information anonymously to help other people make informed decisions). I feel similarly about commenting on news articles, blogs, and apps that I use. It wouldn't surprise me if this was a significant factor in how real name policies affect overall comment quality.
Thank you for the link providing non-anecdotal evidence that a real-name policy does nothing to discourage aggressive trolls. I run a Facebook page with a moderate number of likes (~75K), and the fact that the trolling was, if anything, louder and nastier than in a comparably sized community on reddit, was one of the first things I learned.
I imagine it's because those trolls still have no consequences to their actions. If people would make more public the comments of those who are the most toxic (death/rape/other violence threats, etc), by, say, sending those comments to the person's mother/grandmother, their boss, etc, and they actually had consequences for those actions, then it might taper off.*
* Of course, this kind of action can have some far reaching consequences, and is quite ripe for abuse. I don't know if there is a way to fix that. And myself, not being the target of such abuse online, I'm not really that invested in trying it.
Facebook Comments (e.g. TechCrunch comments) are a secondary product for them, with even fewer features for publisher control and worse spam detection than Disqus.
Posting with your real identity does surprisingly nothing to stop trolls/flame wars; it arguably makes it worse as now there is a target.
Sites using Facebook for comments usually end up full of horrible spam. Facebook isn't a good solution either.
It's interesting to see how a lot of big websites moved from Disqus to Livefyre. I wonder what's wrong with Disqus; Livefyre looks like it has fewer features and options.
I wonder if there's a way to allocate names that are pseudonymous but not disposable.
That is: you can have an alter ego (or 5) that aren't easily linked to your real identity, but new identities are hard enough to come by that getting banned isn't trivially bypassed by a new identity.
This would be a major step for email/messaging spam too.
SomethingAwful handles this by charging for accounts, so if you get banned it costs 5 or 10 dollars to get a new identity.
Urbit handles this in a decentralized way by intentionally limiting the namespace, so that once it runs out you have to buy a name/address from someone who's selling.
Metafilter has a $5 entry fee, but will waive it in cases of hardship. Seems like a decent solution as long as the community is small enough to have responsive management/moderation like Metafilter.
>On the other hand, if your primary concern is the nameless faces spewing hateful, racist, or otherwise inflammatory garbage on your comments section, the Facebook requirement with its real-name policy could go some ways to curtailing that kind of dialogue.
Facebook isn't the only social option. Twitter replies are an opportunity for anonymous or semi-anonymous commentary. But, also an opportunity for garbage.
Twitter does not allow for conversations in the same page of the article, which is part of the appeal. (And Twitter's recent changes make it hard to track conversations even on the website)
I propose the following system for commenting on news sites:
-comment period open for 3 days after publication
-comments are not published until the end of the comment period, then all published at once
-you can submit 1 comment per article only
After the comments are published, a voting/ranking system is enabled for automatic sorting, but nothing is deleted except spam
This eliminates any back and forth arguments. It would function like an online letters to the editor.
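The rules above are simple enough to model directly. Here's a minimal toy implementation, assuming a 3-day window, one comment per user, and all-at-once publication when the window closes; the class and method names are invented for illustration.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=3)  # comment period after article publication

class DelayedComments:
    """Comments are collected during the window, hidden until it closes."""

    def __init__(self, published_at: datetime):
        self.published_at = published_at
        self._comments: dict[str, str] = {}  # user -> their single comment

    def submit(self, user: str, text: str, now: datetime) -> bool:
        """Accept at most one comment per user, only within the window."""
        if now > self.published_at + WINDOW or user in self._comments:
            return False
        self._comments[user] = text
        return True

    def visible(self, now: datetime) -> list[str]:
        """Nothing is visible until the window closes; then everything is."""
        if now <= self.published_at + WINDOW:
            return []
        return list(self._comments.values())
```

Since no one can see anyone else's comment while the window is open, there's nothing to reply to, which is exactly what kills the back-and-forth.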
I really like this. When I think about the most interesting comments, they are the ones where a reader adds something to the article, whether an anecdote, a correction, an educated opinion, etc. This system preserves that and removes the mud slinging. It also preserves the page views, as people are encouraged to return and read--perhaps with some "remind-me" mechanism.
I could see other permutations of this idea as well: you can only reply to the article, and once you view the comments, it no longer accepts a comment from you; or keep comments open but delay each comment by one or two days, which allows slow back-and-forth, not flame wars. Or allow replies, but only one comment per user. There are a lot of mechanisms to explore here.
I would imagine such a system would be mostly equivalent to no comment system at all. I'm not going to bother commenting if nobody can even see it, and 3 days later when they can I won't even remember that I commented.
I like the idea of an interval between publication and comments! However, I don't think back-and-forth is necessarily bad. I'd modify your proposal to include two or three rounds of rebuttal, each at a 3-day interval. You'd have to be seriously invested in the discussion to participate, and there's very little room for a flamewar to develop.
Nope, don't like that idea. I enjoy reading replies to comments - that's usually where discussion gets interesting. A static list of "letters to the editor" can be interesting, but there's no life or action happening there. More spice and counter-arguments are found in someone's reply to a comment. Sure, this increases the odds of domination by a few users... so perhaps limiting people to 3 comments per article would be an idea.
Why can I only downvote some comments here? I don't downvote much, but any time I see a comment such as "I like it" it really deserves a downvote for reasons that shouldn't need to be explained.
The key issue here is universal across the internet. That is, a bunch of people in the same room (i.e., commenting on the same article) is not a community. A community has standards, protocol, social norms, etc.
Yes, moderating comments is an issue, especially when you don't really have a community. On the other hand establishing and managing a community would go a long way to making comments manageable.
> The conclusion: NPR's commenting system — which gets more expensive the more comments that are posted, and in some months has cost NPR twice what was budgeted — is serving a very, very small slice of its overall audience.
If this was the extent of their analysis (the article doesn't say), shame on them. People reading the comments should count too.
Wayyy different orders of magnitude. From NPR's self reported estimates, ~10% of the US population listens to NPR radio weekly. Only ~0.013% of web users commented on articles.
No telling how many regularly read the comments without contributing. Does anyone have lurker estimates?
Some comment service providers like Disqus only load the comments if the user scrolls to them, while others only load if the user clicks on a button. If these services' analytics tools don't track this, they should. I'm much more interested in how many people interact with the comments section. This includes posting, lurking, voting, and flagging.
Anecdata: a few years ago I did a road trip through NV, UT, and CO, and for large swathes of that trip, the only radio station that wasn't glassy-eyed christian pap ("if we make 20k donations, it can ONLY be god's will!" and similar) was NPR. If I lived in the area, it would be the only sane choice for radio...
>But the Facebook discussions that do take place, in particular, tend to be more civil, most likely because users are required to use their own names (not that fake accounts don't get through, but there seem to be far fewer than the predominantly fake names that NPR commenters currently rely on).
I have not found this to be the case. Before I installed content blockers in my browser to block comments sections altogether, I was often taken aback by how many people made comments that included racial slurs and direct personal attacks on other users using Facebook accounts with their apparently real name and photo.
I find the lack of comment section to correlate strongly with the author's self-importance and inability to take criticism. I even read such articles with a sneering, bloviating voice in my head. Makes reading all the pointless medium articles much more entertaining.
All the time. For many websites, the very first comment is a succinct and correct refutation of the article or blogpost itself. And that is why comment sections are disappearing. People really don't mind "trolls" or whatever all that much. They do mind being proven wrong.
In the earlier days of blogging (circa mid-2000s, I think?) bloggers and article writers were encouraged to have comment sections as a mechanism to "engage" with their audience. Social Media and blogging experts like Chris Brogan urged people to actively participate with commenters to build a community around their work (and he used to be very critical of Seth Godin for not having comments on his site).
But that was before Twitter, Facebook, and a bevy of other platforms with which you can actively converse with your audience. At this point, I really question the value of comments on articles. On sites that are really primarily unidirectional information sources (like old media news sites that are now online), what purpose do they serve? We used to have "Letters to the Editor," which were few, curated, and sufficient. Do we really need comments on every article?
I agree entirely. I've found comments below the story to generally seem outmoded and often jarringly tacked-on.
Outsourcing to other communities seems great, but I could imagine NPR doing well with a Boing Boing BBS / Discourse model and effectively offering their own community board (subreddit, if you will) auto-threaded with their content. Then somebody having a serious thought in a comment doesn't feel ghettoized to the bottom of the page, speaking to whom, the author?
Rarely does the threading of on-page comments give enough visual direction to make you feel you can engage other commenters, or the author you are responding to, in any manner other than "broadcasting" (or maybe a better word is screaming).
You tell me, we're doing it right now, albeit by proxy. A return to a read-only web outside of Twitter and Facebook would make articles less insightful, sites less accountable and the web a little more boring.
For example, I have saved dozens of comments from The FT and The Economist which added brilliant addendums as well as counter-points to the article topic. It would be dull without these mini-communities below the line.
Sure, some comments are toxic - it's a reflection of society and a consequence of everybody having access to the web. How about a 'show comments' button like some websites have? Make them opt-in.
We don't need to be able to comment on EVERYTHING. Unless you actively plan to use the comments for something, I think most sites should just remove them.
It's fine to give people an outlet, a place for their voice to be heard. It's often just misused, and not a core feature for most sites. Even YouTube barely needs comments.
What I believe we need is a revival of forum sites. Give people a place beyond Reddit and Facebook to debate. More 4chan and less Disqus.
They're full of people who just have to get things off their chest, but at least they don't have cheerleaders encouraging them to be even more of an asshole than they already are.
That said, it's hard to imagine fixing the problems that news organizations have with comments when you're tied to Disqus, which doesn't seem to give publishers the efficiency and control they desire.
Many tweets and FB posts are made solely on the contents of the article title, by people who've not actually read the article. (Although some Disqus commenters undoubtedly fall into this category as well, it's likely a much smaller percent.)
I've been meaning to write a "Dark Souls"-style comment/annotation system for the web, via browser extensions, for some time now. What better use of the web is there than annotating a site like "goatse.cx" with the message "Be wary of but hole"?
Genius is almost there now, but last I checked you had to open a new page (on genius.com) to annotate the page you're on, which ruins the immersion. And you can write freeform, which spoils the fun :-)
It feels like the quality of online discussion is related to the narrowness of the topic/community.
For example: I find the comments that make it to Reddit's homepage humorous, but not really that helpful/insightful/smart. Most of the subreddits I frequent offer much better discussion, assuming the source topic is interesting. I.e., I don't expect much from r/cats. Although, even r/cats has much better comments than the homepage...
HN is a great aggregator, but still fairly narrow compared to NPR, CNN, HuffPost, Fox News, etc. And most of the commenters on those sites don't have any direct experience or insight to offer to those discussions. Until that problem is solved, the comments aren't valuable.
Communities have some commonality, something shared, something that unites them and then because of that encourages some decency standards. People that live in the same space, people that have the same hobbies, people that work in software related businesses and startups in the case of HN.
So you could build a community out of something and then have a general news discussion; that seems possible. Generally, though, the vast, vast majority of commenters have nothing original to add and they simply create noise. Perhaps those few nuggets of gold are worth the effort, it's hard to say; at least with HN we all have that common bond, and then the actual topics tend to be related to it. Go scan the comments on a couple random foxnews.com articles and a couple random msnbc.com articles and tell me: if the article on foxnews is about the president or a non-white person, I will bet you a nickel that you'll see a racist comment in the first batch or two. That's what you'd be working with.
You still need people to moderate the comments (at minimum, to remove spam). That doesn't scale at all without an absurd number of volunteers (e.g. Reddit).
Yeah, it's more insidious than that. I've been a heavy Reddit user for 7 years. The situation on most subreddits is now basically all comments being moderator-approved at a fine level of detail. They just do it after the comments appear, giving the impression that free discussion is still happening. Which comments and threads are allowed to persist is becoming very obviously narrower over time, all over the site. There is daily consternation among users over it.
Agree or disagree with the politics otherwise rampant therein, subreddits like /r/KotakuInAction seem to do a good job of cataloging some instances of this (ex. [1][2] and basically anything else in their [CENSORSHIP] tag/flair). Granted, this is likely not the best source, due to the fairly political nature of such subreddits, as well as that I haven't checked many of these to verify the accuracy of them; I just occasionally like to see what different sides of the debate(s) are up to.
I always liked slashdot's comment system back in the day. It allowed you to adjust what you saw and how many points a "funny" post got vs how many an "informative" one got and then set a point threshold based on that.
Since then they seem to have eliminated that level of customization, so either it wasn't working or they just didn't see it as worth reimplementing when they moved away from their old code base.
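For anyone who never used it, the old Slashdot scheme is easy to model: each moderation carries a tag, readers assign their own per-tag weights, and a point threshold hides anything scoring below it. A rough sketch (the tag names, weights, and threshold values here are made up for illustration):

```python
def score(comment, weights):
    # Sum each moderation tag's count times the reader's weight for it;
    # tags the reader hasn't configured default to a weight of 1.
    return sum(weights.get(tag, 1) * n for tag, n in comment["mods"].items())

def visible(comments, weights, threshold):
    # Show only comments at or above the reader's point threshold.
    return [c for c in comments if score(c, weights) >= threshold]
```

A reader who valued "informative" over "funny" might set `weights = {"funny": 0, "informative": 3}` with a threshold of 3, and jokes would simply vanish from their view without being deleted for anyone else.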
Their argument is bogus. It's either cost or a laziness in moderation.
The ability to discuss specific articles and topics regarding specific source material is valuable. Having to search for the hashtag or article link on reddit or twitter to try to debate a topic is much, much less engaging, and is much more separated from the NPR writers and ombudsmen.
They readily acknowledge that cost is a major factor in the decision:
> The conclusion: NPR's commenting system — which gets more expensive the more comments that are posted, and in some months has cost NPR twice what was budgeted — is serving a very, very small slice of its overall audience.
Given that the column notes that only 0.6% of visitors comment, but that the costs can run to twice what is budgeted, they seem to be acting explicitly and transparently on the very costs you mistakenly claim are being hidden by "bogus" arguments.
I suppose the charitable assumption would be that the parent is a subtle meta comment, designed to call into question the notion that discussion is valuable.
> Having to search for the hashtag or article link on reddit or twitter...
Isn't this laziness on your end then? Also, as mentioned in the article, NPR's obligation is "to provide information," not to "create and maintain a public square," Montgomery said.
I would disagree with their assessment of the mission statement, then. NPR talks constantly in their drives about their community service. Local stations have events calendars, and my local station is running a promo for a campaign called "I dare to listen" [1]. So to say it's not NPR's job to maintain a public square is lazy and lacks vision.
The argument that it is a very small minority of users who are commenting on the site itself seems correct. I've visited npr.org fairly frequently over the years and read the comments every so often. While I've not taken a statistically rigorous sample, the comments invariably had a lot of the same users and were generally poor quality.
I think that maybe hosting comments alongside the original article isn't the best approach. Half the fun of internet comments is discussing with other commenters, but that works a lot better when you know the community that you're discussing with. You can't reasonably keep track of the dynamics of hundreds of different content providers' comment sections, nor can all content providers have interesting commenting communities.
We need a standard comment syndication format like RSS so that sites like HN and Reddit can continue to provide good communities for discussion, but content providers can still display comments below their content to show off reader engagement.
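No such standard exists today, so the shape below is purely hypothetical: a feed that ties each comment to the article URL and to the community it came from, which a content provider could fetch and render below the article the way they'd consume RSS.

```python
import json

def comment_feed(article_url, comments):
    # Serialize comments from an aggregator into a hypothetical
    # RSS-like interchange format keyed by the article's URL.
    return json.dumps({
        "version": "0.1",
        "target": article_url,
        "items": [
            {
                "author": c["author"],
                "posted": c["posted"],   # ISO 8601 timestamp
                "body": c["body"],
                "source": c["source"],   # e.g. "news.ycombinator.com"
            }
            for c in comments
        ],
    })
```

The `source` field is the important design choice: moderation stays with the community that hosts the discussion, while the publisher just displays the aggregated result.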
I totally understand why they're doing this. Their comments section was terrible. I'm not at all surprised by the findings that a couple hundred to low few thousand commenters are posting the vast majority of comments.
I'm really surprised at the energy that the trolls on NPR spend. You'll see the same account posting rude comments on just about every article. Who has that kind of time?
Even though some consider comment sections toxic, I actually spend more time reading the comments than the article itself. Often, I find more interesting viewpoints in the comments section than the one the article's author is advocating.
There are idiotic commenters, but quite a few commenters are more knowledgeable than the article's author. Very often.
Outsource the commentary to other markets (read: news aggregators), insource the results onto the page. Everyone else takes care of the bullshit of moderation in their own communities, you get the benefit of readers seeing comments and becoming a part of the discussion on whatever platform they please.
I have found comment sections on news websites to be reliably awful. It seems every discussion thread deteriorates into an attack on the other side's politics, even when the link to get there is nonexistent. ("Thanks Obama")
That being said, I love reading all the comments. It has been a guilty pleasure of mine for years. I often spend more time reading NPR's comment section than I do reading the actual content. I don't think this change will cause me to consume less NPR content, but I'll definitely spend less time on their website.
They cite cost as the issue: only 2,600 users represent half of all comments on their entire system.
I don't know how much they are paying for a third party to manage their comment system, but I will bid $500/month to handle those 2600 users comprising half of all comments, or $1000/month for all comments. All inclusive bid. I can get this tiny number of users running on a pretty modest system.
My proposed system will also save money on subpoenas and moderation overhead by not storing ip addresses and not having paid moderation.
Have you factored in that you need to serve the comments to every user, not just ones that comment? I mean, to be fair, you could probably serve staticish chunks of HTML and update-on-new-comment, but it's a consideration.
Comments on blogs or news sites have always been terrible. The only sites that can have good comments are ones like HN which dedicate the purpose of the site toward that end. Even then it is a struggle (obviously.)
It makes no more sense for NPR to have comments sections than it does for NPR to have an image hosting service.
Maybe the problem isn't comment moderation, but rather the expectation that anyone should be able to "control" the dissemination of human thought through media like the internet?
The sooner news companies etc. drop this notion of control, the better chance they stand in the future.
>There was the brimming idealism when in 2008 NPR announced it was moving from discussion boards to individual story commenting
Never visited the discussion boards on NPR. But it seems like a far better solution than "infinite comments" or the fragmented social media discussion.
Quite ironically, comments, and especially voting on comments, do not seem perfectly aligned with encouraging thoughtful, civil discussion without completely quenching dissent. The inevitable result is either group-think or scorched earth flame wars.
Can we also use this time to talk about live chat on YouTube? It is literally filled with scrolling garbage and promotes so much negativity. What purpose is it serving when there are 10K users throwing garbage at each other?
They had some interesting statistics about the authors in their comments, I would love to have seen similar statistics of the comments on the social networks to compare with though.
It's a nice irony: this article, about a site removing commenting, now has more comments than upvotes (although such a ratio almost always indicates a flame war).
My first thought reading this is, well, "conversation doesn't scale very well", as David Weinberger said (regards some recent Reddit contretemps).
Looking at NPR's own article, I'm finding the justifications given to be strongly suspect. NPR's focus on how many participants are engaging, and from what platforms, rather misses the boat. It's not that I feel the decision itself is without merit, but the merits given are exceedingly poor ones. I'm hoping they're not the ones actually used.
I've done my own measurement of public user activity on large sites.[1] In the case of Google+, my and Stone Temple Consulting's independent analysis[2] showed that 0.3% of all users actually engage in public posting on the site.
The real question isn't "how many users are commenting", but "what is the quality of the comments received?". Following critiques of my first study, I performed a second follow-up looking at where intelligent conversation was happening online, using the Foreign Policy Global 100 Thinkers list as a proxy for intelligent conversation, and the arbitrarily selected string "Kim Kardashian" as its obverse. This gave the world the infamous FP:KK index -- the ratio of mentions of any of the FP Global 100 Thinkers per instance of a spin-off of the OJ Simpson trial fall-out.[2] A few results stood out -- Facebook's scale, Reddit's relatively high quality, Metafilter's phenomenally high S/N ratio, and the amount of high-quality material posted to blogs (though perhaps never seeing the light of day). I'd be interested in some further follow-up along these lines.
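The index itself is just a ratio of mention counts. A toy version (the thinkers list passed in is a stand-in for the actual FP Global 100; real counting would need tokenization and name normalization):

```python
def fpkk_index(corpus, thinkers):
    # Mentions of any FP Global 100 Thinker per mention of the
    # reference string; infinity when the reference never appears.
    t = sum(corpus.count(name) for name in thinkers)
    k = corpus.count("Kim Kardashian")
    return t / k if k else float("inf")
```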
One of the problems is that at Internet scale, Sturgeon's Law is far worse than six-sigma compliant.[3] Not only is there an awful lot of crud, but simple mechanics mean that no one person can see more than the tiniest fraction of what is transacted online daily. The simple acquisition cost and assessment of "is this worth reading or not" is absolutely prohibitive.
Or as Clay Shirky says, what we've got isn't content overload but filter failure.[4]
On the problem of idiots, my personal solution is simple and surprisingly effective: block fuckwits.[5] At scale, this evolves to the problem of figuring out who is and isn't a fuckwit. The ability for individual actions against specific authors and publishers to be applied generally strikes me as a useful tool. Not a complete fix (there are controversial voices who do deserve to be heard). But if the cost of being an asshat is being an asshat screaming into the void, part of the problem is addressed.
(Yes, this means some form of 1) persistent reputation, 2) tools for applying reputation as filters, and 3) limitations on newly-created profiles. The idea of vouchers (again, with a reputation penalty applying) to bootstrap new identities may help. There's much in common to approaches against email spam in this.)
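A sketch of how those three pieces might fit together (everything here is hypothetical; a real system would need persistence, Sybil-resistance, and far more nuance than a plain block set):

```python
class ReputationFilter:
    def __init__(self):
        self.blocked = set()
        self.vouchers = {}   # new identity -> account that vouched for it
        self.penalties = {}  # account -> count of vouched-for blocks

    def vouch(self, voucher, newcomer):
        # An established account bootstraps a new identity.
        self.vouchers[newcomer] = voucher

    def block(self, user):
        self.blocked.add(user)
        # Reputation penalty: vouching for someone who later gets
        # blocked counts against the voucher.
        voucher = self.vouchers.get(user)
        if voucher is not None:
            self.penalties[voucher] = self.penalties.get(voucher, 0) + 1

    def visible(self, comments):
        # Blocked authors end up screaming into the void.
        return [c for c in comments if c["author"] not in self.blocked]
```

The key property is that blocking is applied at read time as a filter, so controversial voices are hidden per-reader rather than deleted globally.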
NPR in particular is counting the fact that many messages come from desktop users as a failing of their system. I see that as absolute insanity. Desktop systems are hugely more useful than mobile devices for composing content. Especially thoughtful content. I know this because I've been trying, and losing, that battle myself, using a 9" Android tablet and Bluetooth keyboard -- one of the better mobile authoring configurations possible, and it still stinks. The six lines by 45 characters I can see in HN's edit box certainly don't help. I've written a long rant at Reddit on this specific problem.[6] Whilst composing this comment I've had Firefox/Android crap out from under me, continuously popped out of the edit box, and unintentionally navigated from the page. Thankfully persistence of user state in edit dialogs has improved slightly, but the experience is frustrating to say the least.
I'd use a proper editing environment, say, vim, except that under Android, VimTouch doesn't interact with the clipboard. I can neither copy content into it, nor out. Termux's vim client is slightly better -- I can paste through the Termux clipboard, but copying out is virtually impossible, and doesn't capture more than one screen at a time.
The fact that few people are entering thoughtful comments on mobile likely says far more about the state of mobile tech than it does about NPR's audience.
I think a good way to fix toxic comments is to have a "tag as troll" button. If enough people think a particular commenter is a troll, then they lose their commenting privilege. It should be pretty easy to detect if a person is creating more accounts.
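A minimal version of that button might look like this (the threshold of 5 is arbitrary, and detecting duplicate accounts is the genuinely hard part, not attempted here):

```python
TROLL_THRESHOLD = 5  # arbitrary: distinct taggers needed to mute someone

class TrollTags:
    def __init__(self):
        self.taggers = {}  # commenter -> set of users who tagged them

    def tag(self, commenter, tagger):
        # Each user's tag counts once, no matter how often they click.
        self.taggers.setdefault(commenter, set()).add(tagger)

    def can_comment(self, commenter):
        # Privileges are lost once enough distinct users agree.
        return len(self.taggers.get(commenter, set())) < TROLL_THRESHOLD
```

Using a set of taggers rather than a raw counter means one angry user can't revoke someone's privileges by clicking repeatedly.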
Thanks for demonstrating better than I possibly could, exactly why this is a positive change for NPR. Their online comments were only a detraction from their stories.
Because you agree with their presentation of opinions. When you don't, you want to call them out, then and there. And they should hear it. They are after all paid in tax money and supposed to represent listeners and readers.
Left-leaning, sure, but I have heard them blast people on the left plenty of times. If you listened to them regularly you would see that. The problem with most on the right is they hear one news outlet say something negative about the right, and they turn off the channel and assume they are all on a right-wing bashing spree. NPR is not this way in the least bit. This is coming from a Republican.
With NPR it's entirely possible that your commute in the morning is different content than mine. I can't possibly listen to them all day, but I know one thing for sure. In general, they have positions on topics and they run them INTO the ground, basically assuming everyone must agree. Then it becomes part of their narrative. They are very picky about when to put on dissenting voices. Oftentimes when they are needed, you get none. And NPR is horrible to Trump when they'd get respect if they'd buck the trend of the rest of the media.
You don't need to be broadcasting lies or bad journalism to be broadcasting a political slant. It can show up in something as mundane as story selection or one-off comments on shows that are not hard news.
There was an interesting study[1] from 2011 where NPR's Twitter social graph correlates with what you'd expect a person with left-leaning political tendencies to have. Probably a bit too broad to draw general conclusions from, but interesting as a data point still.
Journalism is rarely (if ever) objective. Good journalism should, however, be factually-correct and analytical. In other words, the product you're paying for is along the lines of informed opinion.
The claim, as I understood it, was that NPR was making factually incorrect claims (i.e.: reporting things contrary to reality). Either the OP has a confused understanding of journalistic function (evident in the implication that NPR isn't objective, or somehow politically-slanted), or he's making a demonstrably false claim (NPR systematically lies and/or misrepresents truth).
They've done that in the past too, but citing sources is hard because usually it's the right leaning media calling out the left leaning media (and vice versa- c.f. Media Matters against Fox), and it lends itself well to people shooting the messenger by way of "that's just a $slant-ist site" rather than actually evaluating the claims.
An NPR reporter misrepresenting a Dutch report on the downing of MH17 as evidence of Russia firing the missile (while the report only says the missile was Russian-made).
NPR aside, I reject the notion that journalism can't be objective. Well, not completely, sure, in the same way that we can never completely reach the speed of light.
Does that mean we stop trying? No! Does that mean we lower our standards? Hell no! My issue is, I see this concept used more often to excuse misconduct and lazy reporting. "We can't do it, so why try?"
If someone claims to be unbiased, they should be held to their own standard, and when bias is pointed out, expected to change their ways, or else acknowledge that their statement about being unbiased was a lie.
I'm not sure the GP would agree with your sentiments; however, NPR pretty clearly slants to the left. All their hosts lean to the left, and the composition of their multi-guest panels usually includes a balance of viewpoints that lean more to the left.
Still, among popular news sources on both the left and the right I think they are about as fair as they come. Certainly on NPR the hosts are not hostile towards opposing views and you can tell that they make an effort to let dissenting callers and guests be heard even though its often clear that they disagree. I also find that NPR tends to invite guests that are respected within their community instead of guests that are intended to showcase a weak or poorly supported viewpoint.
I don't think that's exactly what's going on here. As it happens, I agree that the mainstream media tend to be uncritically accepting of anything that seems to fit a neoliberal narrative, and that this is a big problem. However, in this case the comment being criticized came across as a rather aggressive and jarring pivot in what had been a pleasantly - even surprisingly - balanced discussion, and I think that's the main reason it got so many downvotes.
News articles involve editorial intent, discretion, and several rounds of "filters". Comments sections largely do not, which is why they generally devolve into just arguments. Only with heavily-moderated communities that impose those filters (like Hacker News, or the NYTimes, as the article mentions) can you have any substantive commentary.
It's easy to hitch an underrepresented and specifically suppressed view (like white nationalism, anti-semitism, sexism) into the comments section as a means of piggy-backing off the visibility of the main article. Given that some people go out of their way to ignore the actual content as a means of furthering their underrepresented message means that the comments section is being used to short-circuit the moderation loop most journalistic publications go through.
Or perhaps they just realized that comments sections are awful places for discussion, and that the kind of vitriolic discourse that happens there might actually be harming the reader's experience instead of enhancing it?
The same can be said of HackerNews comments. I'd imagine that it's a given that Reddit is gamed via comments and submissions, but is it a given that HN comments have similar value?
It's certainly happening on major news sites and Reddit, but I don't know how far it goes into smaller communities. But... Consider how many comments you could post if you built the tools to manage accounts, did a little automated comment generation, and had a few dozen writers employed cranking them out.
I'm not trying to bait, I am genuinely curious. I think it's possible to recognize bias in NPR and also recognize bias in Fox News or Breitbart. Maybe they can recommend a news outlet that they view as more impartial, even if not perfect.
I'm pretty sure the trolls are still human, albeit employed by organizations. With that in mind, I wonder if it's possible to create sites that are nothing but trollbait (i.e. articles supporting/critical of Russia) to soak up and waste the trolls' time.
Depends; if you take Russian Olgino as an example (based on how things appear to be): profile creation is automated, and some of the content posting seems to be identical texts posted by multiple profiles. Non-automated content is posted by different people working the same profiles in shifts. I would guess they have something like a monstrous SMM marketing platform; some posts are automated, some manual.
I think the main issue is that most comments (about NPR news) are happening off NPR properties, and the only ones left commenting on their properties are a core group not representative of their general audience.
I wasn't referring to the OPs opinion, I was referring to the fact that all their comment contained was a generic and useless complaint that we've all heard about 10 million times about "the media."
If you or anyone actually wants to post a comment that has substantive content, feel free. I'm sure if it's so obvious how NPR is slanted, there are plenty of objective resources you can draw on to enlighten us all...
1) Their left leaning narrative can't take facts at this point.
What facts can't they take? What's the narrative that NPR is trying to weave? As others have already mentioned, their coverage is often not favorable to the left. NPR syndicates content from hundreds of independent public radio stations, including other public and for-profit sources such as American Public Media, AP, Reuters, ITN, etc. Are they all conspirators in this narrative?
2) The sheer amount of lean really comes out of their choice of what to cover.
Is OP suggesting that NPR and the hundreds of other public radio stations suppress topics so as not to appear "left"? Also, see #1.
3) They know it's easy to just offload comments onto another service.
They are already offloading comments to Disqus. The problem is not technical but managing and moderating the vast deluge of spam, trolls and other bad actors.
4) Any news source w/o comments is basically propaganda at this point.
As others have already mentioned, their coverage is often not favorable to the left.
And often is, up to and including misleading and misrepresenting facts. I gave two examples earlier, and was unable to find a retraction of either.
Is OP suggesting that NPR and the hundreds of other public radio stations suppress topics so as not to appear "left"? Also, see #1.
Mass uncoordinated action does not imply conspiracy.
Are they all conspirators in this narrative?
As I just said, "conspiracy" is a fundamental misunderstanding. But now that it's on the table, for things that make you go "hmm", look who the major media (news) companies make their political donations to. You'll find an interesting pattern. NPR excluded for obvious financial reasons, but still, there's a pattern.
> This is just hyperbole.
I am not so sure. Comment sections serve an important purpose of allowing bullshit in the article to be called out, "trolling" and "bad actors" be damned. I also find it interesting how often those labels are trotted out dismissively against people who disagree with the point trying to be made, and how this "civility" canard is held up as an excuse to silence all discussion.
I think the fact that this is a government service (or at least funded by the government) makes this choice far more interesting. It effectively sets a standard for services to avoid providing an open forum where the users can speak to one another, to service providers, and to managing parties. Instead, they are not bound to provide anything above and beyond their initial services. I don't think it is necessarily wrong for them to engage in mass censorship, nor do I think it's right for them to suffer through caustic commentary. But the precedent they are setting is one of "Fuck you, gov still pays me." They do not need to adhere to the people's wants so long as they can get funding.
Edit: I should clarify "partially funded," but I will keep this comment as is, because I believe it has stirred up the emotions that make this move controversial.
NPR gets about 90% of its funding from members, grants, foundations, etc. If you have ever listened to NPR, you know how frequently they solicit donations during funding drives.
Only about 5% of individual NPR station funding comes from federal sources, and around 10% from CPB funding.
NPR's mission statement doesn't include "an open forum where the users can speak to one another" though, and removing the comment section doesn't constitute censorship.
The web and many social media services are available for people to publicly and freely speak their mind, unlike under some other governments, where censorship does in fact exist. Ultimately, I think moves like this are good for promoting a more decentralized web, where people host and manage their own content.
"Above and beyond" are the key words here. If this were a 100% private-sector company, I would be fine with it, because it's about the company's choice. But the fact that it is associated with the government opens up many questions:
- how do gov programs want to accept feedback?
- does the government want open forums and discussion places?
- how much anonymity does the gov want to provide in such discussions?
- what types of contracts force a company by law to abide by government principles of discussion?
We have laws about what you can't do. We also have laws such as "you can not deny a person water" in Arizona [0]. Is the internet to be placed under similar laws, or can I run a non-profit with my own agenda in mind and no interest in handling feedback from the taxpayers funding it?
Those seem like interesting questions, but is a comment section attached to an NPR article any good for those? Users typically comment on the content of the story, at an unfortunately low signal-to-noise ratio.
I think it's also worth considering the vast number of organizations receiving federal funding directly (and indirectly through tax breaks, subsidies, etc) and how practical a mandated per-organization discussion mechanism would be.
That's an interesting point, actually. Their customer-oriented subdomain doesn't accept comments either, and it's pretty inactive. Perhaps this has been a long time coming and is only now happening to the bigger names.
Were NPR engaging in "mass censorship" before they introduced online comments in 2008? They are just returning to the status quo of the rest of their 38 year history. You are still free to comment on their stories in alternative venues, exactly as you are doing here.
"They do not need to provide anything above and beyond." I find it fascinating how polarizing this thread will be. I'll have you know I love many of their shows, but I should expect people to respond only to what triggers them.
Partially funded by the government, according to their own stats [1]: 11% from the Corporation for Public Broadcasting, and another 5% from local governments.
And on a side note, the trend away from open forums in the name of "civility" bothers me on a pretty deep level, but I don't see the connection between that and 16% of their funds.