I don’t use either Facebook or X so I have no personal experience. But the New York Times cited this meta-analysis for the proposition that they’re not ineffective:

Fact-checker warning labels are effective even for those who distrust fact-checkers

https://www.nature.com/articles/s41562-024-01973-x

They also cited this paper for the proposition that Community Notes doesn’t work well because it takes too long for the notes to appear (though I don’t know whether centralized fact checks are any better on this front, and they might easily be worse):

Future Challenges for Online, Crowdsourced Content Moderation: Evidence from Twitter’s Community Notes

https://tsjournal.org/index.php/jots/article/view/139/57




Here's the Community Notes whitepaper [1], for how it all works. Previous discussion [2].

[1] Birdwatch: Crowd Wisdom and Bridging Algorithms can Inform Understanding and Reduce the Spread of Misinformation, https://arxiv.org/abs/2210.15723

[2] https://news.ycombinator.com/item?id=33478845
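The core mechanism described in that whitepaper is a "bridging" matrix factorization: each helpfulness rating is modeled as a global intercept, plus a rater intercept, plus a note intercept, plus a dot product of latent "viewpoint" factors, and a note is only surfaced when its own intercept stays high after viewpoint agreement has been factored out. Here's a toy numpy sketch of that idea; the hyperparameters, training loop, and any threshold are illustrative, not the production values.

  # Toy sketch of the bridging idea from the whitepaper above: rating ~
  # mu + user_intercept + note_intercept + user_factors . note_factors.
  # Notes with a high intercept are "helpful across viewpoints"; helpfulness
  # explained by the factor dot product (i.e. agreement within one side)
  # doesn't count. All constants here are illustrative.
  import numpy as np

  rng = np.random.default_rng(0)

  def fit_bridging(ratings, n_users, n_notes, k=1, lr=0.05,
                   reg_f=0.03, reg_b=0.15, epochs=500):
      """ratings: (user_id, note_id, value) triples, value 1=helpful, 0=not."""
      mu = 0.0
      user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
      user_f = 0.1 * rng.standard_normal((n_users, k))
      note_f = 0.1 * rng.standard_normal((n_notes, k))
      for _ in range(epochs):
          for u, n, r in ratings:
              err = r - (mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n])
              mu += lr * err
              user_b[u] += lr * (err - reg_b * user_b[u])
              note_b[n] += lr * (err - reg_b * note_b[n])
              uf, nf = user_f[u].copy(), note_f[n].copy()
              user_f[u] += lr * (err * nf - reg_f * uf)
              note_f[n] += lr * (err * uf - reg_f * nf)
      return note_b  # per-note intercepts; higher = broadly rated helpful

  # Note 0 is rated helpful by raters on both "sides"; note 1 only by one side.
  ratings = [(0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),
             (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0)]
  print(fit_bridging(ratings, n_users=4, n_notes=2))

The production system layers more machinery on top (minimum rating counts, score thresholds, status-stability checks), but the intercept-vs-viewpoint-factors split is the core bridging trick: a note only scores well if people who usually disagree both rate it helpful.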


Thanks for pushing for clarity here. So: I'm not saying that fact-checker warnings are ineffective because people just click through and ignore them. I doubt that they do; I assume the warnings "work". The problem is, only a tiny, tiny fraction of bogus Facebook posts get the warnings in the first place. To make matters worse, on Facebook, unlike on Twitter, a huge amount of communication happens inside (often very large) private groups, where fact-checker warnings have no hope of penetrating.

The end-user experience of Facebook's moderation is that amidst a sea of advertisements, AI slop, the rare update from a distant acquaintance, and other engagement-bait, you get sporadic warnings that Facebook is about to show you something that it thinks you shouldn't see. It's like they're going out of their way to make the user experience worse.

A lot of us here probably have the experience of reporting posts to Facebook for violating this or that clearly-stated rule. By contrast, I think very few of us have the experience of Facebook actually taking any of them down. But they'll still flash weird fact-checker posts. It's all very silly.


So why wasn't a mixed approach taken? That's the obvious question to ask. Paid fact checkers are a leap ahead in quality and depth of research, while Jonny Twoblokes has neither the willingness to research such a topic nor the means to provide nuanced context for the information. You're saying the impact was limited, but that wasn't because the checks were low quality. If you did both, with the first draft crowdsourced and a professional fact checker producing the final version, I don't think you'd have a good reason not to do it.


I've answered elsewhere on the thread why I think the warning-label approach Facebook took was doomed to failure, as a result of the social dynamics of Facebook.


Notably Zuckerberg did not cite any data for his assertions that community notes are effective.


A way to quantify this doesn't immediately come to mind. Maybe reasonable metrics would be:

1. What % of misleading/false posts get flagged

2. What % of those flagged are given meaningful context/corrections that are accurate.

There's a circularity here: metric 1 requires first determining what's true, and metric 2 presumably needs some kind of trust/quality poll. I suspect a good measurement would end up looking a lot like the actual Community Notes implementation, since both of those are exactly what the system is trying to do [1]. A rough sketch of computing both metrics is below.

[1] https://arxiv.org/abs/2210.15723
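For what it's worth, here's a minimal sketch of those two metrics over a hand-labeled audit sample. The data structure, field names, and sample are made up for illustration, and the ground-truth label in metric 1 is exactly where the circularity above comes in.

  # Hypothetical sketch of the two metrics proposed above, assuming you
  # already have a human-labeled audit sample. Fields and data are made up.
  from dataclasses import dataclass

  @dataclass
  class AuditedPost:
      is_misleading: bool   # ground-truth label from the audit (metric 1's circularity)
      note_shown: bool      # did a note / fact-check label actually appear?
      note_accurate: bool   # was the shown context itself judged accurate?

  def coverage_and_accuracy(posts):
      misleading = [p for p in posts if p.is_misleading]
      flagged = [p for p in misleading if p.note_shown]
      coverage = len(flagged) / len(misleading) if misleading else 0.0            # metric 1
      accuracy = sum(p.note_accurate for p in flagged) / len(flagged) if flagged else 0.0  # metric 2
      return coverage, accuracy

  sample = [AuditedPost(True, True, True), AuditedPost(True, True, False),
            AuditedPost(True, False, False), AuditedPost(False, False, False)]
  print(coverage_and_accuracy(sample))  # (0.666..., 0.5)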


> Fact-checker warning labels are effective even for those who distrust fact-checkers

Yes, but are they true?


Haha yeah indeed, I was also reading this thinking: "uhm, ok, how can they be 'effective' if they're false in the first place?"

Lol sometimes people just have no logic



