Hacker News
Shifting attention to accuracy can reduce misinformation online (nature.com)
133 points by mpweiher on March 17, 2021 | 136 comments



Given that political discourse so often takes the form of "I am right and you are wrong", it's easy to foresee that tools to prevent the sharing of misinformation will in practice end up as tools to prevent the sharing of information from people the wielder of the tool disagrees with. Then, it's just another propaganda arms race between the information sharing and information dissemination technologists of both sides. I think this is not the kind of research that will ultimately lead to a more healthy public debate.


Another part of the problem is that you cannot unring a bell. For example, the WaPo issued a retraction yesterday:

https://www.washingtonpost.com/politics/trump-call-georgia-i...

The headline quotes from their original story were proven wrong when deleted audio of the call was recovered:

https://www.wsj.com/articles/recording-of-trump-phone-call-t...

But the harm was already done, the misinformation was already used, and the false quote can be seen on page 10 of this:

https://judiciary.house.gov/uploadedfiles/house_trial_brief_...

This is a well-respected publication that injected false information into commentary on a matter of great importance. Information that was somehow 'confirmed' by several other publications relying on their own anonymous sources.


>This is a well-respected publication that injected false information into commentary on a matter of great importance.

And the [likely politically deliberate] regularity with which this has been happening for years makes me very uneasy to read articles by the same outlets crusading against "misinformation". This movement isn't about combating misinformation, it's about combating their misinformation, and ensuring that people only see our "true" information.


Yes, effectively all US media are propaganda for the elites at this point. After the way the mainstream media threw this past election for Joe Biden, I think many people will turn to less traditional sources.

It is simply too much of a coincidence that all these false stories about Donald Trump got published, yet any negative story about Joe Biden, e.g. the Hunter Biden scandal, was thrown down the memory hole.


> Yes, effectively all US media are propaganda for the elites at this point.

With you up to here. This is not exactly a new development though.

Everything else in your post is disputable or dubious but I feel that I have little chance of changing your opinion on that.


[flagged]


It shows nothing of the sort, and the site guidelines ask you explicitly not to post like this—and also not to go on about downvotes.

You've unfortunately been breaking them repeatedly, and https://news.ycombinator.com/item?id=26465025 was egregious, so I've banned the account. If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


Which false stories about Trump? This one was wrong in some details; what else do you have?


The real issue isn't the 'false' stories. Most 'false' stories are from hyper-partisan outlets that don't really matter. The mainstream media rarely writes 'false' stories, because lying is dumb. Lying creates blowback, which harms your credibility and hamstrings your ability to shape the narrative. If your goal is to mislead, the best path is to tell the truth, the politically convenient truth, but never the whole truth. That way, folks like you can (plausibly!) ask "where's the lie?"

Well, it's hard to put my finger on a specific lie. Even if I did, you'd (plausibly!) argue "That's just one article!" Luckily, there are plenty of hard data out there about how the American people have been misled.

One great example is the Trump tax cuts. 64% of Americans got a tax cut, of whom 40% were convinced that they did not! [0]

Our media is incapable of informing the people about a matter of simple arithmetic!

[0]https://www.nytimes.com/2019/04/14/business/economy/income-t...


> One great example is the Trump tax cuts. 64% of Americans got a tax cut, of whom 40% were convinced that they did not!

40% of 64% is about 26%, which doesn't sound like a lot. That's a lot better than how many people don't know that Obamacare and the ACA are the same thing[1].
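(Worked out: 0.40 × 0.64 = 0.256, so roughly 26% of all Americans got a cut yet believed they didn't.)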

You're also not mentioning that the individual tax cuts begin to phase out in 2021, and end completely in 2025. Whereas the corporate tax cuts are permanent (until the next tax act, at least).[2] So some of those people may have been taking future taxes into account too (unlikely, but possible).

1. https://www.businessinsider.com/poll-obamacare-affordable-ca...

2. https://en.wikipedia.org/wiki/Tax_Cuts_and_Jobs_Act_of_2017


[flagged]


Thumbs up for Tim Pool. He was my big discovery a year ago.


To be honest, that's exactly the kind of retraction that makes WaPo stand above the pack. They had a source and based the article on it (after verification). Then the primary source became available, certain details were different, and WaPo changed the article. The core message, though, still stands. As a matter of fact, didn't the Georgia AG start an investigation into this affair?


The damage is done though, so I don't know that I see this as a thing to be happy about. I am curious: if the source is known or unmasked at a later date, what does accountability look like?

If WaPo is the only one who can be held accountable and their only accountability is issuing a retraction, that seems like a gameable system.


What's the alternative? Stop reporting news altogether?

I feel that, as a news org, the best you can do is just report the data as it becomes available. Obviously, later data could contradict prior data or clarify its interpretation. What matters, though, is the accuracy of the data reported: not that the data is a perfect predictor of the outcome, but that it accurately represents the currently known data, even if that later comes to be shown incomplete or misleading.


>What's the alternative? Stop reporting news altogether?

Maybe don't report something that one person told you and that you can't otherwise confirm?

That used to be the way that news was done, right?


> That used to be the way that news was done, right?

That seems like a bit of a rose colored view of news in the past.


I'm curious what damage you think was done here?


The damage to discourse, primarily. If people are going to be able to have discussions, then littering those discussions with falsities or non-truths from major media outlets adds needless friction to already difficult conversations. Each of these organizations also probably earns some measure of distrust whenever it spreads misinformation about something close to a given group.


idk, this whole scandal ended up making me less trusting of WSJ.


If they didn't publish a retraction, they probably could have been sued. They're covering their ass, I wouldn't give them moral credit.


> If they didn't publish a retraction, they probably could have been sued.

Almost certainly not. (Well, anyone can sue, for anything. Winning is another story.)

https://en.wikipedia.org/wiki/New_York_Times_Co._v._Sullivan

https://en.wikipedia.org/wiki/Actual_malice


There is no perfect solution here, but at least they had to publicly acknowledge their error. This is much better than with the sources most people use.

I often see people say 'hey look at this terrible thing WaPo or NYT did' as an excuse for reading whatever garbage source is saying exactly what they want to hear.

It's not all or nothing.


Indeed, you can cherry pick individual examples of inaccurate reporting from literally any media outlet.

It seems suspicious to me that this commenter picked out a story criticising the right that had some inaccuracies when it's largely media outlets on the right that have a history of actually repeatedly reporting outright lies in recent history (see Fox, OAN, Newsmax etc).


You can cherry pick single examples of inaccurate reporting from literally any media outlet. It would be far more useful to analyze which ones have a pattern of repeatedly publishing misinformation.


Readers should reward news publishers that take the time to do research from primary sources and properly vet them, rather than race to be the first to publish a scoop. There ought to be a system that tracks and rates publishers by their accuracy and retraction rate (and the lack thereof when proven wrong).
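A minimal sketch of what such a tracker might store and how it might score a publisher; all the names and the scoring formula here are my own hypothetical assumptions, not an existing system:

    // Hypothetical sketch of a publisher-accuracy tracker.
    interface PublisherRecord {
      name: string;
      storiesPublished: number;
      storiesProvenWrong: number; // stories later shown to be inaccurate
      correctionsIssued: number;  // errors the publisher acknowledged
    }

    // Penalize errors, but penalize unacknowledged errors more,
    // so owning up to a mistake is rewarded relative to silence.
    function accuracyScore(p: PublisherRecord): number {
      const unacknowledged = Math.max(p.storiesProvenWrong - p.correctionsIssued, 0);
      const penalty = p.correctionsIssued + 3 * unacknowledged;
      return 1 - penalty / p.storiesPublished;
    }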


They did not issue a retraction. They issued a correction. The precise language of the call is different, but the core of the reporting is unchanged.

The precise differences are that what was quoted as "find the fraud" was actually finding the "dishonesty", and "you'll be a national hero" was actually "you have the most important job in the country right now". This doesn't change the overall story: Trump attempted to reach out to GA election officials and push them to overturn election results.

> But the harm was already done, the misinformation was already used, and the false quote can be seen on page 10 of this:

None of the citations on page 10 reference this reporting or any of the quotes attributed to Trump on this call.


Page 10 says this:

"December 23, for instance, President Trump reportedly called one of Georgia’s lead election investigators, urging him to “find the fraud”" and references, among others:

28 Amy Gardner, ‘Find the Fraud’: Trump Pressured a Georgia Elections Investigator in a Separate Call Legal Experts Say Could Amount to Obstruction, Wash. Post (Jan. 9, 2021).

This specific fake quote was used in the WaPo headline, in a charge of obstruction, and in a criminal referral. The call was secretly recorded, then deleted, so the source knew or should have known this was false.

The wording changes the call from implying some vague kind of quid-pro-quo arrangement to overturn an election to making a complaint because he (wrongly) believed he was a victim of fraud. The faked quotes are relevant to the legal charges brought against Trump. Their source had a recording, so there was no excuse for being inaccurate on something this important.


> Page 10 says this:

Ah, my mistake, I was looking at the 10th physical page of the document, not the page numbered 10 (no blame on your part here, I just hate that this is an ambiguity).

> The call was secretly recorded, then deleted, so the source knew or should have known this was false.

This doesn't follow. We don't know how many people were on the call, nor do we know that they all knew the call was recorded. We don't know who WaPo's source was. There are all kinds of possibilities where the person either was a second hand source (e.g. they weren't on the call, but were in the room when someone on the call, aghast, described it), or was recalling from memory having been on the call, but unaware someone recorded it.

> The wording changes the call from implying some vague kind of quid-pro-quo arrangement to overturn an election to making a complaint because he (wrongly) believed he was a victim of fraud.

It does not. Nothing was directly offered in either case. If you believed it was a quid-pro-quo before, you should still believe it. If you didn't before, then the headline doesn't matter.

Just to clarify, here are some things Trump said on the recording.

- “When the right answer comes out, you’ll be praised.”

- “whatever you can do, Frances, it would be — it’s a great thing. It’s an important thing for the country. So important. You’ve no idea. So important. And I very much appreciate it.”

If you thought there was a quid pro quo originally, Trump's words, as spoken, still support it. If you're looking for a way to excuse his actions, yes, this gives you a scapegoat. You can blame the WaPo for bad reporting, instead of Trump for committing a crime.

Notably however, WaPo didn't contend that Trump tried to negotiate a quid-pro-quo. They simply reported the facts, as best they could.

> Their source had a recording

This is, again, untrue. There's nothing to suggest that Frances Watson was WaPo's source. There's nothing to suggest that anyone who is not Frances Watson knew there was a recording.

> and in a criminal referral

And in the course of that criminal referral, the actual recording, which, to reiterate, isn't exculpatory, was found. What's your point?


This is what I mean by "unringing a bell" though. Due to the initial (false) framing, people end up with trapped priors: https://astralcodexten.substack.com/p/trapped-priors-as-a-ba...

In the end, it doesn't matter whether this is accidental or not: people share the most salacious headlines, and those are often significantly undercut by the story beneath them, even at the time of writing, because headlines get selected for "engagement" and A/B tested or similar.

So circling back to the story we're discussing, the problem goes deeper than just people trying to ensure that they share only the most accurate stories, there are significant problems with the supply even from the most trusted outlets.


I think you're more guilty here than WaPo. You're choosing to frame this as a larger mistake than it was: calling it a "retraction" is both factually wrong (WaPo didn't call it that; they called it a correction) and misleading (the change doesn't affect the article's main point[0]).

I'll reiterate: the framing didn't change. The precise quotes changed, but they don't affect the framing, and WaPo didn't alter the framing. This is consistent with a correction. The article headline, using only quotes that Trump actually said, could still have been "Trump told investigator that 'you'll be praised' for finding that I 'Won by hundreds of thousands of votes'."

That's a headline supported by both the intent and the precise words of the actual call.

You're trying to reframe a correction of precise quotes as a retraction, or as a completely misleading story on the part of WaPo. But that's you doing that. Like I said, the correction is a great excuse if you already wanted the story to be wrong, but that's entirely on you. Similarly, your framing of this as a "fake quote", which implies a fabrication rather than what the evidence actually points to (a first- or possibly second-hand account from memory), further signals your editorialization to the reader. But that's entirely your framing (and your assumptions), not a fact. I can't tell if you're intending to point out your own cognitive bias, but yes, you are showing that you're guilty of holding trapped priors. You're not demonstrating that anyone else is.

[0]: https://www.dmlp.org/legal-guide/correcting-or-retracting-yo....


If they weren't the main point of the article, why was that exact false quote used in the impeachment?


Because that's generally how you reference things.

Do you think the point of the article was

1. Trump said these exact words.

2. Trump attempted to convince GA election officials to sway votes, by suggesting that they'd be rewarded for marking Democratic ballots as fraudulent?

If 1, why was this important to people? If 2, how does the precise wording change impact the overall narrative?


You've changed a fuzzy and incoherent quote about looking for evidence that the vote was tampered with into a clear directive. This is central, or it wouldn't be the thing that everyone uses to reference the event.

And to be clear, I am specifically saying they're wrong about it being a "correction" instead of a retraction as well. They can call it what they like, but when the central quote that they based the narrative on is false, the whole thing collapses.


There is still just as much of a clear directive in the recording. I'm not sure why you don't believe that.


Trump is seemingly incapable of being clear, as your own quotes of him being incoherent establish:

- “When the right answer comes out, you’ll be praised.”

- “whatever you can do, Frances, it would be — it’s a great thing. It’s an important thing for the country. So important. You’ve no idea. So important. And I very much appreciate it.”


Not sure why you're being downvoted; the differences between the reporting and the actual call are exactly the sorts of differences you'd expect if the anonymous source was someone who had heard the call once and was conveying the wording from memory. There's no significant difference in meaning as far as I can tell: Trump was still instructing them to find him votes.


Bombholing.

> The innovation was to use banner headlines to saturate news cycles, often to the exclusion of nearly any other news, before moving to the next controversy so quickly that mistakes, errors, or rhetorical letdowns were memory-holed.

> As George Orwell understood when he created the “memory hole” concept in 1984, an institution that can obliterate memory can control history.

> The innovation of the Trump era was companies learned they could operate on a sort of editorial margin, borrowing credibility for unproven stories from audiences themselves, who gave permission to play loose with facts by gobbling up anonymously-sourced exposes that tickled their outrage centers.

> Mistakes became irrelevant. In a way, they were no longer understood as mistakes.

https://taibbi.substack.com/p/the-bombhole-era-0cb

https://www.youtube.com/watch?v=eIAWggqZQmE


Here's a video where an ex-lawyer goes through a NYT article on election fraud and absolutely tears apart the misleading language they use to build the case that there was "no" election fraud:

https://youtu.be/TmgMu5sefzA

The beauty of this style of reporting is that the writer's skill with words allows them to plant specific ideas in readers' minds, but if anyone were to call them on it, they can completely truthfully say that nothing in the article is untrue and that it never explicitly asserts the conclusions any typical reader would naturally draw.

The NYT is arguably one of the best news outlets going, so I'm not sure how one could hope that this situation will ever improve.


Oh, it's an ex-lawyer? Not just some random guy sitting in his car? Thanks for including that tidbit, so I know this arbitrary YouTube video is credible.

Or maybe, just maybe, you're doing the same thing you complain about, and cherry-picking your sources and relying on stories that support your existing biases.

The answer is not to reason every sentence from first principles, like the HHG2G character who is pleasantly surprised every morning to discover that a pencil makes marks on paper. At every point, new information builds on previous information, and that usually works out reasonably well. There are exceptions, perhaps most notably the NYT's reporting in the run-up to the post-9/11 Iraq war, but most of the time it works better than any other solution, and more importantly, any alternative would work less well.


But the tool in the article is to ask people whether they think a certain headline is accurate and then look at the effect that has on their sharing of other articles. If someone with the "I am right and you are wrong" mindset shares fewer articles they think are not accurate, they'll actually be right more often. (At least in their own opinion.)

If this gets weaponized to make people on "both sides" pay more attention to the accuracy of what they're sharing, I'd expect that to result in a more healthy public debate.


The researchers anticipate that concern, and specifically checked whether "information vs. misinformation" just boiled down to partisan disputes about what's true. They found that "our participants were more than twice as likely to consider sharing false but politically concordant headlines (37.4%) as they were to rate such headlines as accurate", suggesting that there really is a problem of people sharing things they know aren't strictly accurate.


This article focuses entirely on people regulating their own decisions to share an article, not any attempt by a "tool wielder" to selectively block any content.

To put their study into context, it suggests that if Facebook asked users to rate whether an unrelated article was accurate before allowing them to share their own, many people would pay attention to whether their own articles were accurate, and less misinformation would be shared. This could be applied universally to any news article, both for detecting people sharing it and for populating the pre-share accuracy judgement. Instead, Facebook streamlines the process to increase engagement, and people blitz through it without stopping to consider the article's accuracy at all, distracted by thinking of all the likes they'll get for posting it. According to the study, when users are already primed to judge the accuracy of an article, they are more likely to self-regulate their own misinformation.


Absolutely. What's more, and I'm culpable of this too, things are often re-shared when you've only read the headline, or have just skimmed the article without really reading it or spending any time thinking it through.


You should carefully read the paper linked. Your comment may be correct but it's not a valid criticism of the methods the study used.


What does this have to do with the article?


I don't think debate is the issue. The issue is people lying to each other to motivate certain problematic behavior, like riots, and to sway elections. Moreover, the echo chamber effect created by the information-sharing technologists to increase clicks makes having a meaningful debate moot.


Can it though?

The bigger issue is the removal of important details that provide context. Everything else can be factually accurate but tell a completely different story. Unless you have first-hand knowledge you won't know what's missing. You see this all the time if you read a story on a subject where you have expertise.

“Tim punched Bob

Bob punched Tim

Tim and Bob shook hands”

Is a very different story than if only “Bob punched Tim” is reported. It’s accurate...but it changes the perception of the story.


Every story has a million ways to tell it, and they are all biased; there is no universal truth even if every word of the story is true and nothing factual has been left out. Facts can be misleading too; context outside the story can matter a lot.

I am not even sure how I would go about determining whether most political news is true or factual; even the speeches are usually cut down to snippets for the news. Bias is everywhere, and determining that something is even largely true is an enormous amount of work, going back to the original source.

A recent example is a Green Party woman in the House of Lords who said that all men should be under curfew to protect women from being attacked at night. That is literally what she said, but not what she meant at all, because the context was that the police had said women should stay home at night for their own safety. Without the context it's true, but it's also misleading. As is my portrayal of the story, as was every news article about it. No idea how you fix it; language needs to be more facts-based, as does our culture, and even then you are chasing "more correct", not "literally true".


Universal truth would require creating a quorum of billions of people simultaneously.


This is an extremely dangerous conception of truth. Even when practically everyone believed that the sun revolved around the earth, the one guy that didn't was still correct. Yet he was pretty likely to be oppressed or killed for having heterodox views and had best keep his mouth shut. I'm sure there are many popular ideas now that are believed by a supermajority (or even 95%+) of the population which are actually incorrect, although I can't say what the correct ideas are or whether they will ever become popular.


Truth is inherently human; we cannot conceive of a truth that isn't human-centric.


Perhaps you cannot, but I can.


Even that might not be universal truth.

Billions of Hindus would disagree with billions of Muslims who would disagree with billions of Christians, who would disagree with billions of unaffiliated people/atheists.


Right, I was making the assumption that this problem was solvable and that the contradictory ideas could be hashed out. You can imagine that happening with a single Muslim and a single Christian; it happens all the time that people have conversions with a close friend. The critique I pose is one of scale.


That's silly. Something that is universal and true requires no consensus.


Should we assume that "fake news" is anything where the opinion of the other side is not provided?


For historical context, this used to be regulated in the US under the Fairness Doctrine[0] until it was rolled back by the FCC as a violation of free speech. Reagan then vetoed a congressional attempt to bring it back.

[0] https://en.wikipedia.org/wiki/FCC_fairness_doctrine


Downvoting folks - please keep downvoting, but I'm sincerely curious to know what's wrong with including the other side's view?


I didn't downvote, but my issue with your comment is that it assumes that all issues have two valid sides. Some issues don't. The earth is round, and the alternative view that it is flat should not be brought up in the discussion.


Well... because there aren't only two sides? (Even if you only look at D&R, there can be a wide range of views expressed by the members of each party)

Because they would pick whatever the worst option is to present from the 'other side'?

What 'other side' should they present when reporting over "grab 'er by the p****" or *sniff sniff*?


So, to all the people working in social media: can you add a slider at the bottom of every share widget asking people to rate how accurate what they're sharing is, on a scale of 1 to 10? You don't even need to store the result; it looks like forcing people to think will solve a lot of the world's problems.
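As a sketch of how small that change could be (hypothetical code, not any platform's actual API; the point is that the rating is demanded and then deliberately discarded):

    // Hypothetical sketch: gate the share action behind an accuracy rating.
    // The rating is only read to force a moment of reflection; it is
    // never stored or sent anywhere.
    function shareWithAccuracyPrompt(share: () => void): void {
      const answer = window.prompt(
        "On a scale of 1 to 10, how accurate is what you're sharing?"
      );
      const rating = Number(answer);
      if (answer !== null && rating >= 1 && rating <= 10) {
        share(); // proceed only after the user has committed to a rating
      }
    }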


Has that ever worked? Rating schemes turn into approval scales for whatever people want them to be, not what they're labeled.

The Slashdot moderation system might work up to a point. In the end, subscribing to moderators you trust (instead of subscribing to killfiles like Mastodon and Twitter groups use) may be the only way to evaluate comments.
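A sketch of what "subscribing to moderators you trust" could look like, as opposed to one global score; everything here is a hypothetical illustration:

    // Hypothetical sketch: a comment's score is computed per reader,
    // counting only ratings from moderators that reader subscribes to.
    type ModeratorId = string;

    function scoreForReader(
      subscriptions: Set<ModeratorId>,
      ratings: Map<ModeratorId, number> // -1, 0, or +1 per moderator
    ): number {
      let score = 0;
      for (const [mod, rating] of ratings) {
        if (subscriptions.has(mod)) score += rating; // ignore untrusted moderators
      }
      return score;
    }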


> Has that ever worked?

From section "Priming accuracy improves sharing":

"[...] sharing discernment (the difference in sharing intentions for true versus false headlines) was 2.0 times larger in the treatment relative to the control group in study 3, and 2.4 times larger in study 4. Furthermore, there was no evidence of a backfire effect, as the treatment effect was actually significantly larger for politically concordant headlines than for politically discordant headlines"

In other words, people who were primed to pay attention to accuracy were significantly less likely to share misinformation. Seems to me like an example of it working quite well.
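Spelling out the quoted definition and effect sizes (just a restatement of the numbers above, not new data):

    sharing discernment = P(share | true headline) - P(share | false headline)
    study 3: discernment_treatment ≈ 2.0 × discernment_control
    study 4: discernment_treatment ≈ 2.4 × discernment_control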


Yes, I don't take the studies seriously because I've worked on Mechanical Turk. The people primed to pay attention are maximizing their workflow, because taskmasters set unreasonable time limits and incomprehensible instructions, and every action is necessarily out of context. Attention there is totally divorced from anyone browsing a news feed.


Did you see when they tested it on Twitter users?


Yes. The last study looks even worse.


Not a rating scheme; the value would be thrown away and never shown to anyone. The point isn't to collect information, only to force the sharer to think.


And the sharer is just going to trust you to throw away that information? I don't think so.


Most people aren't careful about what information is collected on them in the first place. Are people going to stop sharing things on Twitter because they're afraid of giving up one more tidbit of information to big data? I don't think so.

It's beside the point anyway. Any mechanism that primes or prompts the sharer to be mindful of accuracy before sharing could help reduce the sharing of misinformation, according to the results from the article.


Really surprised that no social media with nano-payment-funded, distributed, syndicated moderation exists. Centralization of moderation is taken as a given, but I'm skeptical that it has to be this way.


Who moderates the moderators in such a system? Would it be structured like the Wikipedia editor hierarchy?


The customers, ultimately. Moderators should produce moderation that results in something people think is worth paying for. I'm thinking of a multi-rooted aggregation system. Wikipedia is an example of a single-rooted system.


What qualities do you think that would incentivize in a moderation system? I don't feel that a tendency to share less misinformation is one of them, and I'd fear that richer people would have more power to control the narrative in such a system by effectively bribing moderators.

Apologies if there is something about "nano-payment-funded distributed and syndicated moderation" that I'm not getting.


Commodifying and creating a market for accuracy-driven moderation among other types of moderation shifts the conversation toward the question "Is misinformation a market failure?" If that's the case then we have a much more interesting discussion on our hands.


But much like ratings systems are actually used as approval/disapproval systems, such a system would instead come to represent ‘what is the most profitable moderation’. Such a market would optimise towards profit, not quality.


That's a critique of all markets. I'm down.


100%. The insight there, I guess, being a need to choose carefully which areas you allow market dynamics to be introduced into, or at least to make sure you know what the medium of exchange really is, because that's what the market players will optimize toward.


What are "Twitter groups"?


Seems to me that the results of the article suggest that such a slider would help. But I wonder if it wouldn't lose its effect after a while as people get exposed to it and begin filtering it out, a la cigarette warning labels and cookie consent banners? Regardless, it seems like there's a lot of potential here for social media sites to make a significant and positive change without turning to stricter moderation which can in itself be problematic.

(Does anyone know if that's actually the case?)


It would become a meme to share obviously fake stuff with the slider maxed out, and we're right back in Poe's law.


Why? No one would see the slider maxed out, so there's no point in playing the fool. The study makes an interesting point: putting a slider there would force the person to make a decision. Either they're sharing something they think is nonsense, so they think twice; or they have reason to believe the stuff is accurate, and the study says people are actually pretty good judges of that.

No one wants to say something is accurate and be thought a fool later. Making this choice forces you to be a scientist, in that you're now making a statement that's falsifiable. Even better when it's probabilistic.


Or they'd just ignore it. It's like upvotes/downvotes; I'd guess most people never use them. I rarely upvote or downvote anything here, compared to how many posts I read.


The existence of a slider won't force anyone to do anything. People will ignore your slider. Some people will only use it in one direction, and others will use it only in the other direction.

You aren't accounting for human nature. People aren't robots.


Seems to me that the mere existence of the slider would force people to share less misinformation, regardless of whether they seriously consider the rating they give or just ignore the slider altogether the second time they see it. In the language of the article, it would work by priming people to be more aware of accuracy.


You're making the assumption that people would care about a slider and that it would be part of a behavior loop.

People might very well laugh at the slider and not be intimidated into respecting its power of graduated informational judgement. As of now, after two hours, the only two replies to the GP are saying they would ignore the slider.

Why do you believe a slider would force people not to share misinformation? There must be some basis for that opinion.


> Why do you believe a slider would force people not to share misinformation? There must be some basis for that opinion.

"Force" is a strawman, but there is a clear basis for thinking that an accuracy slider could cause people to be more thoughtful about sharing: the results of this study, which found that "subtly inducing people to think about accuracy" resulted in "participants in the treatment group were significantly less likely to consider sharing false headlines compared to those in the control group, but equally likely to consider sharing true headlines". The effect found was quite significant.


> "Force" is a strawman

No. It was the literal word that was used not a strawman, which is an intentionally misrepresented proposition meant to be debunked.

"Seems to me that the mere existence of the slider would _force_ people to share less misinformation"


Oh, you are correct. It seems that mcBesse chose their wording extremely poorly there, as "force" is indeed indefensible. I apologize for misreading that.


Truth isn't a popularity contest.


What I find both amusing and frustrating is that the linked article uses the word "accuracy" 110 times but never explains what is meant by it. For example in the headline "Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests", what is meant by rating for accuracy? Is it the number 500 and whether it is low, exact, or high? Is it whether the people were migrants? Is it whether the vests were explosive? Is it whether the story is in any way truthful? I really have no idea. Is "accuracy" an alias for "truthful" or something else? The article doesn't explain.


From the article:

Participants were randomly assigned to then either judge the veracity of each headline (accuracy condition) or indicate whether they would consider sharing each headline online (sharing condition)

So they use "accuracy" as an alias for "veracity". But they don't really need to define it, because the study is about what the participants think is accurate, not the researchers.

EDIT: Now that I've made it down to the methods section, I see that the wording they actually used was "We are interested in whether you think these headlines describe an event that actually happened in an accurate and unbiased way." So, their measurements refer to whatever the participants interpreted that question to mean.


Truthiness, or lack thereof, is a second order effect.

Algorithms boosting viral content and inauthentic speech are first order effects.

--

No, bots, trolls, and sockpuppet accounts are not authentic speech. I didn't say censorship. I didn't say truthiness. I said inauthentic.

No, I didn't say ban inauthentic speech. I'm saying create infrastructure for authentic speech. Because currently we have very little. So that consumers have a choice.

Yes, keep your pseudonym account. For your all-important courageous deep undercover reporting which will definitely be recognized with a Pulitzer. To be accepted anonymously, natch.


>create infrastructure for authentic speech.

What would that look like?


Thanks for asking. It starts with verified identity, e.g. metafilter.com requires photo ID and $5. (Parler tried to do something similar. Very interesting. Alas, I didn't scout that before they shut down.)

There are many potential IRL "root authorities". Like issuers of business licenses could serve as root for online presences.


That is the same logical fallacy as "we can solve deepfakes with blockchain in cameras!". Authenticating the source is not synonymous with validating the truth of the content.

David Icke goes by his real name and he pushes insane bullshit. I don't know why people think real identity is a sign of truth when liars have been making careers of it for millennia.


I apologize.

Authenticity != truthiness. They are orthogonal.

How do I rephrase my points to make this crystal clear?


If we had this in place in 1950 we'd still be lynching blacks and turning homosexuals into vegetables.


Please elaborate.


'Close to half' could also be 'majority' or 'minority', depending on whose side you are on.

'Overwhelming majority' better be more than 75% but it's often less.

'Small percentage'...of 300 million people is still a substantial number.

Add to that that many of the arguments we use daily are based on sampling, which has varying degrees of accuracy.

I think all we can do is to agree to disagree...vote and respect voting results.


I can appreciate this study putting some numbers and metrics to a phenomenon that I'm sure a lot of people would agree exists.

Something that is probably controversial - I think it is just far too easy to share articles, blogs, etc through social media. If you were to put in a small hurdle, even just needing to copy-paste the actual link to the article instead of clicking a share button, I would imagine a significant volume of sharing (admittedly everything, including good / accurate information) would disappear. If it was just a little bit harder to share misinformation, I think it would overall be a benefit.


Presuming that people who don't block the third-party scripts that add social media widgets, and so click "share" without copying and pasting, are automatically posting "misinformation"?


> Merely reading false news posts [makes inaccurate beliefs] subsequently seem more true.

>the widespread sharing of misinformation on social media is also surprising, given the outlandishness of much of this content.

Emotional reactions plausibly drive the first observation, so the second observation shouldn't be surprising. If it really is surprising, then they don't see the emotional component.

And yet:

>Our results suggest that the current design of social media platforms—in which users scroll quickly through a mixture of serious news and emotionally engaging content, and receive instantaneous quantified social feedback on their sharing—may discourage people from reflecting on accuracy.

They conclude that distraction is the cause, and that nagging users will solve it ("reminding them about accuracy in a subtle way that should avoid reactance"). Yet if the study is flawed, and the distraction is tied up in emotional outrage, wounded identity, and spite, then those nags may only incense the users more.

Of course, considering already-distracted Mechanical Turk workers and people who link to "right-leaning sites that professional fact-checkers have rated as highly untrustworthy" shows that the authors of the study failed to consider much more than emotional actors.


It won't matter. People are immune to being corrected.

They have to correct themselves.

"An extensive literature addresses citizen ignorance, but very little research focuses on misperceptions. Can these false or unsubstantiated beliefs about politics be corrected? Previous studies have not tested the efficacy of corrections in a realistic format. We conducted four experiments in which subjects read mock news articles that included either a misleading claim from a politician, or a misleading claim and a correction. Results indicate that corrections frequently fail to reduce misperceptions among the targeted ideological group. We also document several instances of a “backfire effect” in which corrections actually increase misperceptions among the group in question."

https://link.springer.com/article/10.1007/s11109-010-9112-2?...


Of course it's going to backfire when partisan groups like Politifact and the Washington Post are running the entire operation, too.

Especially around politics, there's basically no such thing as fact. If politician X says that plan Y reduces problem Z by 13% but actually it was only reduced by 11.5%, technically he's wrong but the classification of his statement becomes subjective.

This is where bias seeps in. If a right wing politician is off by 1% in a statement it's "mostly false". If a left wing politician is off by 1% it's "mostly true".

I don't trust ANY of these organizations to "fact check" anything because they're utterly discredited.


Nature retracted a peer-reviewed paper last November after pressure from those with a particular political/ideological bent. You can read the details here and decide for yourself if their reasoning is valid for the retraction. Did not seem very valid to me personally, especially given they did not retract the authors' prior work using similar methods (but with a conclusion the aforementioned political actors would find most pleasing, rather than upsetting): https://retractionwatch.com/2020/12/21/nature-communications...


Why are you grinding your axe in this thread? Totally irrelevant to TFA


I think skepticism of the source's authoritativeness on the subject of information-filtering is perfectly relevant to TFA. But you are free to disagree and downvote if you want to punish me further.


"Therefore, shifting attention to the concept of accuracy can cause people to improve the quality of the news that they share."

It will not have a significant enough impact on the overall collapse of sense-making that we're seeing. Neil Postman wrote really well about it in Technopoly, but since the book isn't available online, I can only direct you here:

https://youtu.be/QqxgCoHv_aE?t=928


> the overall collapse of sense-making

I suspect this is caused more by the collapse of (false, but easily comprehensible) narratives than an actual reduction in coherent reasoning. Most "normies", so to speak, have always believed a huge amount of bullshit, and the forces that stabilized Schelling points in bullshit-space have been diminished.


Interesting, but long term probably irrelevant. I think we are almost at the end of the era of un-curated online information. As the technology to deploy bots that are indistinguishable from humans becomes more widespread, the proportion of online conversations that even involve humans is going to trend towards zero. Eventually, it's all going to be such hyperbolic noise that nobody will even pay attention.


In general, allowing people to misinform themselves and the people in their circle is a good thing. It applies competitive pressure so that superior information extractors can have an advantage over inferior information extractors. Long-term human survival depends on superior information extraction, so it is better that poor information extractors are currently culled or weakened.

For instance, I love that Bloomberg News is beloved by many Americans, notably (to this audience) HN readers. That's how I made money off the SuperMicro news. I, through pre-existing experience, knew that they were a low quality news source incapable of technology reporting of any calibre. Others, since they lacked this knowledge, sold SMCI. I bought at a discount as a result, and made money.

This is part of why so many tech luminaries were ahead of the curve in detecting the COVID-19 problem¹: they are secular information extractors for the most part, untainted by petty identity.

¹ I am neither a tech luminary nor one of these people because I had this false picture of the competence of the CDC - an error that cost me six figures.


Your conclusion doesn't at all follow from your premise.

It's at best a modern-malthusian position, that those who are strongest at information extraction should survive. (As opposed to Malthus's original proposition, that only the wealthy should survive).

But this raises the question: what makes "information extraction" the thing that we should optimize for at social scale? If we can make it so that everyone has access to correct information, there will no longer be a need to compete on information extraction.

This similarly tracks with Malthus's wrongness: we don't actually need to compete on food prices, as we've got enough to go around; there isn't a shortage that would leave food only for the wealthy. And this is true at world scale. Barring some kind of massive catastrophe, we aren't at risk of global starvation where a Malthusian approach would make sense.

> This is part of why so many tech luminaries were ahead of the curve in detecting the COVID-19 problem¹: they are secular information extractors for the most part, untainted by petty identity.

Taking this to its logical conclusion, you are saying that it is a better outcome that the CDC was wrong, as some tech luminaries made money, than if they had not made money and the CDC had provided superior initial guidance, saving thousands of lives.


Yes, of course. An arbitrary human life has a value of less than $1k, and I suspect it is equivalent to almost zero dollars.

Revealing HHS incompetence is worth far more than some few human lives. This virus was harmless but a true pandemic would have destroyed us.

Now America knows whom to listen to. If a second supervirus hits us, one that is actually deadly, then when the HHS lies to the populace to save whom it deems the chosen few, the populace won't listen, and society will correctly handle things.


This nice sounding theory is trivial to disprove.

Salem Witch Trials.

It’s often not advantageous to know the truth, especially when the truth is complicated or uncomfortable. It will be rejected or ignored.

You frequently have far more of a competitive advantage if you can make others believe something, whether it’s true or not.

Outside of some very specific domains (like investing, but even markets are irrational), I don’t think it’s accuracy that makes information powerful, it’s emotion.


This is true. I suppose you do need some sort of protection for exploitation of advantageous information.

Though one could argue that a useful information extractor necessarily knows to kill others as witches to preserve themselves. Places one on weak footing, though.


What does it matter if you have a manager, or your manager has a manager, who will approve or disapprove a new disinfo policy based on whether or not it affects specific political groups? A fish rots from the head.


I submit clarity would improve even more.

By clarity, I mean fact and opinion very clearly differentiated.

Facts are common.

For a given story:

There are facts.

Some of those may be disputed. Fine. Great discussion to have.

At some point, we arrive at opinion, and this generally is what the authors believe the facts mean.

Bias colors all of that. There is always bias.

Always.

The emphasis on clarity gets at the influence of bias by making it easy to understand what the bias actually is.

Currently, we largely sidestep bias with various, low clarity arguments:

Official

Fair and Balanced

Objective (this actually takes a number of us working together over a sustained time to do. You can be 1000 percent sure it does not ever happen in a cable news cycle. Ever.)

Size, with the three-or-four-letter big players attempting to be reputable because reasons.

You get the idea.

Clarity is powerful. It implies accuracy, helps people to understand bias and when bias is different from stated intent or claims of authority, or publication of record, note, stature.

Did I mention there is always bias?

Always?

A great example in the US might be coverage from the labor point of view versus big business point of view.

Larger, established organizations rarely publish or broadcast from the labor point of view. Low-clarity material makes that, as well as what is fact and what is opinion, very hard to discern.

Small orgs, indie media, often do more from the labor point of view, and low clarity brings the same difficulty.

Many more examples abound!

Low clarity facilitates endless meta too:

Who is objective? (Nobody in reality, but they all say "the other people" are "a problem" or "are untrustworthy" somehow.)

This is all expensive and useless.

I could go on.

Greater clarity. It will help. By nature, it is more accurate. It makes bias more easily understood.

And on that point, bias is OK! It takes too many of us too long to be objective. We can benefit from well understood bias and seek multiple points of view to help us be informed and form our own opinions.

Clarity requires disclosure.

It will also force truth in branding.

"News you can trust" means nothing when it is an unclear mess of fact, opinion, bias all mashed together to get a reaction, is it?

Nope.

I submit a push for clarity does the most good.

I say that because we can't solve the thinking-for-people problem. We can empower them to think better, we can give them clearer material to think with, and we can foster discussion that encourages common ground, but we can't actually think for them.

And just as an example, when I was a little kid in primary school we had a media class.

That class covered the basic types of propaganda, with advertising as a vehicle to demonstrate all of them. It had a topic on bias, identifying the point of view from which the piece was written and why that matters.

It even talked about clarity. In any given piece, are fact and opinion well differentiated?

Where they aren't, that piece isn't very reputable, or useful, or the minimum you need to do is seek more information if you're intrigued at all.

We read articles from the labor point of view, we read articles written from overseas, ones from the local newspaper, from big business, and we talked about them and we found the common facts and we found out how to think for ourselves.

That was seventh grade. In a small backward town no less!

I am regularly shocked at how poor our media is today, and how ill-equipped so many of us are to deal with it.

All of these rule based schemes avoid both the clarity and trust problems inherent in all media.

Secondly, they sidestep the intended discourse!

Not only are we supposed to be thinking for ourselves, but we are to be understanding one another well enough for policy discussions to make better overall sense!

All of this is why we used to teach critical thinking in primary education.

Today, we do not do that very well and look at the mess!

Software won't fix this. Humans doing the work to improve will.


[flagged]


We all heard the actual audio of that call, and he absolutely was pressuring the secretary of state to find votes. So while WaPo got inaccurate quotes, the content of the call was absolutely "insurrectionist".


Do you see what you are doing right here? You're leaving out the context to make a political point.

Yes, he was asking the SoS to "find votes". However, the claim was: "We believe that there may be over 100k votes which could fail signature verification, and we only need you to find several thousand of them, a small fraction of what we believe you will find."

That is very different than what both you and the WaPo implied.

The claim is related to signature verification. So argue with that if you don't agree with it. But that isn't what happened.


He didn't just want her to find votes that could fail signature verification, he wanted her to look for them in specific areas that vote heavily Democratic. Which means that he was not asking her to validate that people voted accurately, he was asking her to put her thumb on the scale and make the vote come out his way.

That is why he "needed" her to do so. Because he wanted the vote to come to the result that he wanted, and not the result that reflected the actual will of the voters.


> That is very different than what both you and the WaPo implied.

It is not. There was no evidence to support the belief that there were any votes that failed signature verification.

Put another way, Trump was asking the SoS to apply a higher than usual level of scrutiny only to Democrat-leaning ballots in order to win him a state that he lost.


That is absolutely within the bounds of the generic "find me the votes".

There is always some pretext for finding those votes.


I'm blown away there are people on HN peddling the Fox/Newsmax/OAN cinematic universe.


The irony of this comment on this thread is incredible. The Washington Post is now part of the "Fox/Newsmax/OAN" cinematic universe? Like... what?


I think you misunderstand. The person was responding to the person suggesting that Trump was doing something very legal and very cool. He wasn't talking about the Washington Post, or suggesting that it's part of the right wing misinformation machine.


The Washington Post are the ones that published the retraction.


Oh, you're the person he's responding to.

Ok. Weird.

Trump was pressuring Georgia to overturn and invalidate votes. He was interfering with the election.

To suggest that he wasn't doing that is what knowaveragejoe means by peddling the Fox/OANN/Newsmax line.

So while the Washington Post may not have gotten the exact words right, it's not like they were wrong about what was occurring. The worst that could be said is that they were paraphrasing.

It's like you're suggesting Trump should be let off the hook because of a technicality.


I guess I’m not following you here because it seems like you’re making a pretty wild claim. Are you saying that the original story was correct, but it was the retraction which was somehow wrong?

That just seems...odd.

Or are you just not seeing the story or something?

> Correction: Two months after publication of this story, the Georgia secretary of state released an audio recording of President Donald Trump’s December phone call with the state’s top elections investigator. The recording revealed that The Post misquoted Trump’s comments on the call, based on information provided by a source. Trump did not tell the investigator to “find the fraud” or say she would be “a national hero” if she did so. Instead, Trump urged the investigator to scrutinize ballots in Fulton County, Ga., asserting she would find “dishonesty” there. He also told her that she had “the most important job in the country right now.” A story about the recording can be found here. The headline and text of this story have been corrected to remove quotes misattributed to Trump.

The story was wrong. It wasn’t that they just misquoted him. The quote was the predicate for the entire story which they are now acknowledging was made up.

>pressuring to overturn and invalidate votes

Yes, but if those votes were cast illegally, shouldn't they be invalidated? That is the core debate here.

“If votes are cast illegally, they should not be counted” does not seem like it should be a controversial claim, and the absolutely absurd amount of spin and misinformation that people are putting out to try and claim that it should even be up for debate is incredibly damaging to having a functioning democracy.

And in fact I would say that some of the comments here are doing a good job of illustrating the damage that this sort of misinformation can have.


If I'm understanding this correctly, you're conflating two different phone calls.

> The Washington Post recently had to issue a high-profile correction to January reporting about one of two known calls in which Donald Trump urged Georgia officials to find evidence to overturn the state’s election results.

> The newspaper initially said Trump told an elections investigator she should “find the fraud” and that she would be a “national hero” if she did so. But a newly surfaced recording of the call shows he didn’t use those words. Instead, he told her to uncover the “dishonesty” and that “when the right answer comes out, you’ll be praised.”

> The Post did not retract its story, as some people on social media claimed. The correction did not involve its Jan. 3 reporting about what Trump told Georgia Secretary of State Brad Raffensperger during another phone call. The Post published the full audio and transcript of that call.

https://www.politifact.com/article/2021/mar/16/what-trump-to...

Anyways, this never rose to the level of the sort of Fake News being pushed by the right-wing grift-o-sphere, and to pretend it's in the same league is ignorant at best. We're literally talking about issuing a retraction, something none of these conservative grifting outlets are known to do when they are routinely shown to be pushing misinformation.


They weren't and there was no reason to think that they were. And he was only asking them to "find the dishonesty" in specific places.

While he may not have uttered the exact syllables reported, the core of the story, the central message, was correct. Trump tried to pressure Georgia into overturning an election.

Trying to paint it as a complete refutation of the story is misinformation. You are the one attempting to do damage here.


Same, I have also encountered QAnon believers on here.


Why are people downvoting this? Are you happy that the Washington Post gave you fake news that made you feel good?


What's Fake News about it? Fake News is intentional misinformation, such as "the Pope supports Trump" or the idea that Putin didn't do everything he could to install Trump twice.


The quotes in their headline were entirely fabricated, and several other news organizations that claimed to have "independently verified" the quotes also spread the lie for months. So yes, the exact organizations arguing for censorship are themselves lying routinely and with no accountability.


These takes are beyond parody at this point. We're talking about them literally issuing a retraction and holding themselves accountable. This is something essentially unheard of from the right-wing grift-o-sphere.


Articles that carry misinformation can be spotted easily by the rhetorical schemes they use. The problem is that such language is also used by governments, which is why people are not taught how to spot it. I have a couple of friends who seem susceptible to fake news; they come to me with stories about how something is bad, or with weird conspiracy theories that I find tedious to debunk. Even when I do debunk them, they still feel that because someone has authority, what that person says or writes is true for them, even when it is not. When I try to show them these schemes, they don't want to hear it, because they think "they got me". This is insane, and they probably need to find out how they are getting this wrong by themselves, just as I did.


Misinformation is a made-up problem. With the amount of information we have now, it's trivial to push any narrative just by amplifying the "right" true information. It's why every politician has a source saying they're a sex offender. Most of the accusations are real, but which one you believe depends on where you look. Or just look at all the articles that quote nobodies on Twitter.



