>For claims that don’t directly result in physical harm, like conspiracy theories about the origin of the virus, we continue to work with our network of over 55 fact-checking partners covering over 45 languages to debunk these claims... Once a post is rated false by a fact-checker, we reduce its distribution so fewer people see it, and we show strong warning labels and notifications to people who still come across it, try to share it or already have.
Does this pass the mask test? As we all remember, the WHO said not to wear masks early in the outbreak. Twitter users, in particular, were way ahead in urging people TO wear a mask. Would Facebook now block that or reduce its distribution, potentially risking lives?
I think it is naive to believe there will be any sort of consistently applied rule. It will be based on the whims of whoever manages this initiative.
They are only using fact checkers because it gives them someone to point to and say, "Look, we didn't make this call, they did." This is like a money-laundering scheme, but with brand-reputation risk instead of money.
Facebook will retain absolute control, and the people running this initiative will do whatever the brass tells them to; this program is just for deflecting complaints.
A principled commitment to fact-checking is more important than whether or not any particular fact-check is ultimately correct. Mistakes happen. If the effort results in more good calls than bad calls, it is a good initiative overall.
What would you have Facebook do, fact check using Twitter because the CDC made this one bad call? Do you think the modal CDC consensus is worse than that of Twitter?
I would have them not make a blanket commitment to fact-checking, because I don't agree with the perspective that it's a good initiative if it makes mostly good calls. A "fake news suppressor" that's allowed to make any significant number of bad calls will inevitably be abused by bad actors who have specific topics they'd prefer people not to discuss.
Facebook should just not use fact checkers at all. The sites were fine before that, and "fact checkers" shoot themselves in the foot so frequently that ever fewer people take them seriously. Most such self-proclaimed checkers are far more interested in what authority figures say than in what is true, which is why they're such a mess: the moment authority figures disagree, it sends them into a tailspin of confusion and random, arbitrary decisions.
Really, Facebook and others need to just give up on this. There is no principled commitment to anything here. "Misinformation" isn't even a meaningful concept. It's arrogant and abusive for randos at FB or other social media firms to believe they are so smart they can sift the entire world for what's true and what's not. Nobody can do that. Nobody.
What do you suggest Facebook should have done when WHO said to not wear masks? If they cannot rely on WHO for health expertise, whose expertise should they count on?
The mask test is an interesting one. Please don't fry me for this, but the WHO has a report on mask usage from December that notes some inconclusiveness in community settings. It can be very hard to prove something with a rather small effect amid real-world complexity. This is in contrast to vaccines, which have a rather strong signal.
Please read the whole section, because I cherry-picked just one paragraph without full context. I just wanted to demonstrate that nothing is actually all that obvious when it comes to the pandemic.
> Evidence on the protective effect of mask use in community settings
> At present there is only limited and inconsistent scientific evidence to support the effectiveness of masking of healthy people in the community to prevent infection with respiratory viruses, including SARS-CoV-2 (75). A large randomized community-based trial in which 4862 healthy participants were divided into a group wearing medical/surgical masks and a control group found no difference in infection with SARS-CoV-2 (76). A recent systematic review found nine trials (of which eight were cluster-randomized controlled trials in which clusters of people, versus individuals, were randomized) comparing medical/surgical masks versus no masks to prevent the spread of viral respiratory illness. Two trials were with healthcare workers and seven in the community. The review concluded that wearing a mask may make little or no difference to the prevention of influenza-like illness (ILI) (RR 0.99, 95%CI 0.82 to 1.18) or laboratory confirmed illness (LCI) (RR 0.91, 95%CI 0.66-1.26) (44); the certainty of the evidence was low for ILI, moderate for LCI.
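To unpack the statistics for anyone skimming: RR is relative risk, where 1.0 means no difference between the masked and unmasked groups, and a 95% confidence interval that straddles 1.0 means the studies could not distinguish the result from "no effect". A quick sanity check (my own throwaway helper, not anything from the review itself):

```python
def ci_excludes_no_effect(low: float, high: float) -> bool:
    """For a relative risk, 1.0 means 'no difference between groups'.
    A 95% CI that contains 1.0 is conventionally read as no
    statistically detectable effect."""
    return not (low <= 1.0 <= high)

# Intervals quoted in the review above:
print(ci_excludes_no_effect(0.82, 1.18))  # ILI -> False: inconclusive
print(ci_excludes_no_effect(0.66, 1.26))  # LCI -> False: inconclusive
```

Both intervals cross 1.0, which is exactly why "limited and inconsistent evidence" was a defensible reading at the time, whatever the consensus later became.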
The WHO made a mistake early in the pandemic and then changed course. As humans, we should be allowed to make mistakes, acknowledge them when new information is presented, and change course.
The real problem is with individuals and organizations that knowingly continue to promote misinformation, like the claim that vaccines are bad for you.
yah, "mask test" as used here is most certainly a dog whistle for people who believe in masks religiously and righteously fight "anti-maskers". in reality both of these opposing factions have fallen for mediopolitical misinformation (i.e., propaganda), including from the CDC.
but the passing criteria for a more rigorous mask test would be wearing one around non-household friends and family in tight quarters for prolonged periods of contact, and not wearing them in places where they do no good above and beyond distancing or environmental circumstances (like being outside or in low density/circulation areas indoors). an overwhelming majority of people fail this test, mostly because it defies our social instincts/conditioning promoting trust and closeness among the in-group and distrusting and distancing the out-group, but also because many have also fallen victim to pervasive and oft-repeated propaganda via social (and other) media.
Who decides the truth? I don't want to muddy the waters too much, but I am concerned with the current trend of appealing to science as an authority of truth to justify whatever public policy is in play. Scientific publishing can be a chaotic process where healthy skepticism should be maintained.
The truth is never easy to ascertain. In fact, one of the most highly cited papers from an epidemiologist is titled "Why Most Published Research Findings Are False". I found it a good meta-analysis whose critique could easily apply to my previous field, computer vision, where fantastical results were often obtained on one data set but could not be reproduced elsewhere.
Online discourse would be vastly improved if everyone meticulously cited peer-reviewed papers and used no other substantiation. It still wouldn't be perfect, but at least there'd be a commitment to rigor. The current status quo is telling people they're wrong, citing nothing, and telling them they "just haven't done enough research."
Stepping away from the specifics of Facebook's history, does anyone have solid ideas for how globe-spanning communities should function?
It seems next to impossible both to allow individuals to share things and to make sure that you only see "true" things in that community. And it seems that, given the numbers involved, there will always be some amount of "accidental" misinformation that has a real impact, plus deliberate misinformation from bad actors looking to exploit the system...
> does anyone have solid ideas for how globe-spanning communities should function?
Your mistake is thinking the problem is in the communities, as opposed to the technology platforms themselves.
Personally, as a starting point I'd want them to:
1. Kill algorithmic curation that is based on "engagement" (see the sketch after this list). These digital Skinner boxes are a cancer on society.
2. Introduce much stronger privacy laws that discourage or prevent data collection used to drive targeted advertising. Reduce or eliminate the financial incentives driving #1.
3. Provide complete and open transparency around how these platforms make their moderation decisions. Reporting leads one to suspect Zuck stepped in to help out right-wing agitators. Twitter is making its own arbitrary decisions. We all need to know if these decisions are happening and how they're being made.
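To make #1 concrete, here's a toy contrast between the two ranking styles. The `Post` fields and the engagement weights are invented for illustration; real ranking models are vastly more complex and opaque, which is part of the problem:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created: datetime
    likes: int
    shares: int
    comments: int

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Status quo: rank by a predicted-engagement proxy. Outrage and
    # shock reliably score well, so they float to the top.
    score = lambda p: p.likes + 2 * p.shares + 3 * p.comments
    return sorted(posts, key=score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The boring alternative: newest first, nothing optimized against you.
    return sorted(posts, key=lambda p: p.created, reverse=True)
```

Same posts, two completely different feeds; only the second one isn't tuned to maximize time-on-site.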
I don't think I'm falling for the Nirvana fallacy, or mistaking the platform and the communities.
Yes, FB has been pretty egregious in certain areas, but the question of how to prevent misinformation in online communities is a tricky one that goes beyond the details of a particular platform and into human nature. Government can (should?) help in certain ways, but even in the best scenarios that will have limitations as well.
To answer your question, you must begin with metaphysics, i.e., our worldview. Our civilization currently operates under the assumption that there exists one truth and everyone must adhere to it. That worked out fine when we lived in small towns, but now that literally billions of people are connected, we're realizing that not everyone can be in sync.
To cut a long story short, there are many people who believe that we need a fundamentally different approach to how we form communities. Decentralization is one of the trends in tech that could help us get there.
I don't know if we currently operate under the assumption of a single truth.
There's also a big difference between different perspectives and things that are blatantly wrong.
Decentralization is interesting, but 20 years ago there were many small "social networks", even if they didn't go by that name. FB and its portfolio are the first to create 1B+ member "communities". There are very strong economic incentives for smaller communities to grow into larger communities (FB illustrates the possibilities and perils of focusing on growth).
>I don't know if we currently operate under the assumption of a single truth.
This is perhaps too big a topic to discuss on HN comments, but the idea that there are absolute and unchanging truths goes all the way back to Plato. Western civilization is founded on top of a very specific worldview that is "centered" on these absolute truths. This is why a lot of traditionalists see postmodernism as the root of all evil.
>There's also a big difference between different perspectives and things that are blatantly wrong.
I don't disagree. What I'd like to focus on is not basic logic like 1 + 1 = 2, but rather the implications of such facts. That some people are born with XY or XX chromosomes is a fact. What we do with those facts is entirely context dependent.
>Decentralization is interesting... There are very strong economic incentives for smaller communities to grow into larger communities
Recent decentralization trends are designed to combat this exactly at the protocol level.
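As a toy illustration of what "at the protocol level" can mean (everything here is invented for the sketch; real protocols like ActivityPub add delivery, identity, and signatures): each server keeps its own users and its own block list, so there is no single operator who can rank or suppress the global feed.

```python
from typing import Optional

class Server:
    """One independently operated node in a federated network."""

    def __init__(self, domain: str, blocked_domains: Optional[set] = None):
        self.domain = domain
        self.blocked = blocked_domains or set()
        self.posts = []  # list of (handle, text) pairs

    def publish(self, user: str, text: str) -> None:
        self.posts.append((f"{user}@{self.domain}", text))

    def build_feed(self, peers: list["Server"]) -> list:
        # Each server decides for itself which peers to federate with;
        # moderation is local policy, not a global dictate.
        feed = list(self.posts)
        for peer in peers:
            if peer.domain not in self.blocked:
                feed.extend(peer.posts)
        return feed

alpha = Server("alpha.example")
beta = Server("beta.example", blocked_domains={"alpha.example"})
alpha.publish("alice", "hello from alpha")
beta.publish("bob", "hello from beta")
print(alpha.build_feed([beta]))  # alpha federates with beta, sees both
print(beta.build_feed([alpha]))  # beta blocks alpha, sees only its own
```

Growth incentives don't vanish, but no one node can capture the whole network the way a single centralized operator can.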
> This is perhaps too big a topic to discuss on HN comments,
Ha ha-- so true. But thanks for making a valiant effort. ;-)
> but the idea that there are absolute and unchanging truths goes all the way back to Plato. Western civilization is founded on top of a very specific worldview that is "centered" on these absolute truths. This is why a lot of traditionalists see postmodernism as the root of all evil.
You could argue that it predates Plato... But even before postmodernism, people seemed to have trouble agreeing on one truth.
>There's also a big difference between different perspectives and things that are blatantly wrong.
>> I don't disagree. What I'd like to focus on is not basic logic like 1 + 1 = 2, but rather the implications of such facts. That some people are born with XY or XX chromosomes is a fact. What we do with those facts is entirely context dependent.
This is a great example of an issue that I've seen on FB. Most people do have XY or XX, but a small percentage of people have something different. Beyond that, the way those genes get expressed can lead to different phenotype outcomes, which is a scientific "fact", but I've seen a lot of people, including folks who claim to have MDs, say that it's just XX or XY. Is this "misinformation"? Probably depends on who you ask...
>Decentralization is interesting... There are very strong economic incentives for smaller communities to grow into larger communities
>> Recent decentralization trends are designed to combat this exactly at the protocol level.
Please explain more. I don't dispute that there are notions of smaller communities; I'm just thinking that there are structural reasons why those smaller communities will get eclipsed by communities focused on growth.
You might as well ask to shut down the whole Internet. Even if Facebook disappeared tomorrow, there would be plenty of other social platforms to allow the spread of disinformation.
Maybe this should link to [0] instead. It talks about a change in their newsfeed algorithm resulting in a 5% decrease in usage time. Think about that: their algorithms are so good that they influence how billions of people spend a lot of their time, and Facebook knows how to increase that time at will. They have no incentive to decrease usage further, and even if they commit to something like that now, as soon as revenues decline Wall Street will force them to ramp usage back up.
The problem with misinformation, ultimately, is that it is given a platform. We've reached an inflection point where people can believe these mistruths so deeply that any fact-checking is received as a personal attack -- and they get even more defensive.
Facebook (or YouTube, or Twitter, or whoever) can try to come up with ever more infoboxes and disclaimers, but misinformation will continue to exist as long as it is given a platform.
The original sin of social media comes down to the idea that anyone can post, and that virality begets virality. Shocking content is presented on the same level as traditional media -- and in many cases, can exceed those traditional news sources' reach.
Our reporters follow an Editorial Policy that comes with consequences if they break the guidelines. Virtually every respectable news org has something like this -- we're just making ours public. https://www.forthapp.com/docs/policy.html
Until we hold the reporting produced by professional reporters -- reviewed by editors, fact-checked, and held in check by an editorial process -- in higher esteem than what Firstname Bunchanumbers says, misinformation will continue.
> Until we hold the reporting produced by professional reporters -- reviewed by editors, fact-checked, and held in check by an editorial process -- in higher esteem
Having worked in a news organization, I wish I had more faith in this statement. Most reporters are woefully out of touch or out of their depth when covering complex topics. As a finance major, I'm often surprised by how often the explanations for basic topics are wrong or presented without any nuance.
The ONLY way to fight misinformation, intentional or not, is to allow a free exchange of ideas. The stupid ones would, in the end, be shown to be stupid and sink. But in today's world, people seem to think that other people, "being the dumb ones", need to be protected from their stupidity, and that "us being the sage ones" need to hide the dumb ideas from the dumb ones. Such arrogance.
> The ONLY way to fight misinformation, intentional or not, is to allow a free exchange of ideas.
That implies all ideas are given equal visibility, but we already know that social networks prioritize content that's outrageous or controversial over content that isn't.
You want a free exchange of ideas? Start by fixing the algorithms.
I don't agree. It is trivial to spread bullshit and nontrivial to show that it's bullshit. Your philosophy allows misinformation to spread because of how much harder it is to refute than create.
What you call misinformation may be truth to others. Having Facebook and a cabal of biased fact checkers control what truth is for everyone simply allows no real diversity of thought. That echo chamber will produce false information.
The very first sentence on the page says how they are stopping it. If their ambition is to work toward something vaguely better, then sure, that seems reasonable.
I think focusing on the literal semantics of the first sentence rather than taking the blog post holistically is too harsh. Obviously it's still a problem; they are communicating that they're working on it.
I did look at the whole thing and assumed that they meant that doing those things added up to stopping it. However I can see your point of view as well.
No, but that is exactly the point. How can you trust a company that makes its money by advertising to users? They will always respond to the interests of the people with money. And the fact that they are attempting to "be better" has more to do with backlash and the fact that they will lose more money if they do not do something. The truth is, the penniless cannot pay for ads! I was just adding a bit of humor to the conversation with the Coke story. Coke is definitely better than Pepsi.