Four years ago, Meta blocked Donald Trump from posting on Facebook after his false claims of election fraud helped foment the January 6 assault on the US Capitol. Now Meta’s founder, Mark Zuckerberg, is “getting rid of fact-checkers” and switching to a system of X-style “Community Notes” in a bid to end “censorship”. “Fact-checkers have just been too politically biased and have destroyed more trust than they’ve created,” Zuckerberg declared.
This alleged bias was evident in the choices external fact-checkers made about which posts on Meta to fact-check and how, Zuckerberg said. “Our system then attached real consequences... A programme intended to inform too often became a tool to censor.” The Community Notes system instead relies on establishing a consensus between “both sides” in a debate to verify contested claims. This “could be a better way of achieving our original intention of providing people with information about what they’re seeing—and one that’s less prone to bias,” Zuckerberg said.
Accusations of partisan bias in political fact-checking are commonplace on the right in the United States; the main complaint is that fact-checkers focus more on claims from conservatives than from liberals. The accusation that this selection of claims reflects “bias” is unfair, and there is now a large body of academic work showing what fact-checkers actually do and the positive effect they have. But fact-checking is, of course, an imperfect craft.
I have been involved in fact-checking since 2012 and in 2016 was a co-author of the code of principles of the International Fact-Checking Network (IFCN), the standards-setting body for the sector. The principles, to which all IFCN fact-checkers must adhere, are non-partisanship and fairness, quality sourcing, rigorous methodology, transparency on staffing and funding, and operation of a fair corrections policy. Independent assessors—prominent academics or senior journalists—rigorously examine every would-be fact-checker against 31 different criteria before applications are voted on by a 12-member board of experienced fact-checkers from around the world. Only once they have passed can they work for organisations such as Meta.
The loudest complaints about the way this evidence-based and fully transparent assessment system works have come in recent years from the right in the United States. But it also produced howls from the left in 2019, when the IFCN approved an application from Check Your Fact, a fact-checking team set up by the Daily Caller, the outlet co-founded by right-wing US media personality Tucker Carlson. The evidence showed that its fact-checking processes met the required standards, but this did not satisfy left-wing critics.
When we designed the fact-checkers’ code, the initial notion that fact-checkers should check claims “equally from both sides” quickly ran into two rather obvious obstacles. First, in most countries, political debate is messy and does not divide neatly into “two sides”. The UK parliament hosts 13 political parties. Bosnia-Herzegovina has a tripartite presidency and around 10 major parties in government and opposition at the same time. Second, political parties the world over disobligingly refuse to communicate in neatly equal volumes, whether what they say is true or false. Contrast the relative omertà of the Biden presidency with the flurry of media appearances by President Trump since his inauguration on 20th January. The “balance” sought by the critics of fact-checking is a politically convenient false equivalence.
Fact-checkers around the world recognise the need to improve how they operate, and this has been a subject of my own research since 2019. In a trial last year with three fact-checking operations, we tested a model for identifying false information that has a substantive potential to cause real-world consequences, regardless of its political focus.
One goal of such work is to help fact-checkers set out their reasons for choosing which claims to verify, including the potential consequences if those claims are false. This could help counter politically contrived accusations of bias. At the end of last month, a fascinating study found that on X’s Community Notes, Republicans are flagged for spreading false claims more often than Democrats. But is that bias, or simply that there is more to find?
Clearly, Community Notes, in which users with different perspectives agree to add contextual information to a contested claim, could help audiences better understand what they see online. At the same time, the way the system works on X shows it is not a solution in itself.
After the deadly floods in Spain last year, more than 90 per cent of the online hoaxes on X that had been debunked by a fact-checker carried no Community Note, because there was not enough consensus among contributors to display one. Imagine how difficult it must be to find adequate consensus in more divided communities—from Myanmar to Syria.
The answer may be to combine the two approaches. The Spanish fact-checker Maldita has pointed out that fact-checkers need the assistance of the community in their work, and vice versa. In 2024, thousands of Community Notes on X cited the work of fact-checkers as their source of evidence. And according to one poll, a majority of US Republicans believe social media content should be fact-checked. The best approach for platforms will be to use both community notes and fact-checks.
But neither system would tackle the problems of misinformation and disinformation at scale. Information disorder—the disruption to our news and information system that affects how we understand the world—operates on three levels, and a range of solutions are needed to tackle it effectively.
Specific false claims, which may or may not have a substantive potential to cause real-world harm—from persuading someone to take a fake medicine to provoking unrest and violence—are relatively easily addressed by fact-checking, community notes or a combination of the two. In the first half of 2024 alone, for example, thousands of fact-checking articles were used to tag 31m posts on Meta’s platforms in the EU as in some way false or misleading.
Broad false narratives are harder to address, but fact-checkers can counter them by bringing them to audiences’ attention. This is made easier by AI tools that identify false claims at scale, such as those developed by the UK’s Full Fact.
A flood of low-quality information (or “shit”, as Steve Bannon put it), unleashed by a group or state not to persuade but to confuse audiences, is beyond the reach of fact-checks or community notes, and is perhaps best dealt with through the teaching of media literacy.
None of this works, however, if people are encouraged by figures like Zuckerberg to distrust fair and accurate information due to false accusations of bias.
If Meta is genuinely concerned with both protecting its community from real-world harms and enabling free speech, Zuckerberg needs to walk back his accusations of bias and promote an approach that supports both Community Notes and the work being done by fact-checkers striving for ever higher standards.