Move fast and break things. If there is one saying that captures the ethos of the new tribe of digital entrepreneurs—sometimes known as “information oligarchs”—it’s this one, credited to Mark Zuckerberg, CEO and founder of Meta. These guys don’t hang around.
And, boy, did they move fast once they knew that the new president of the United States would be one Donald Trump. First the courtesy visits to Mar-a-Lago, then the donations. Then the rearranging of boards and top executive teams. And—as we saw this week—a stream of announcements about revised policies and processes, designed to align more closely with the thinking of the incoming administration.
No-one is pretending that Meta’s decision to think again about its approach to content moderation and fact-checking was anything other than political. Trump is surrounded by people who think of themselves as free speech absolutists. They genuinely believe that many, if not most, attempts to temper words, pictures or videos on social media amount to politically motivated censorship. And so, barely two weeks before Trump takes office, Zuckerberg recorded a video that felt as if it was intended for an audience of one.
In the US, at least, there will be no more attempts to check facts: in future Meta will imitate Elon Musk’s use of “community notes” whereby other users can contest facts, or even create alternative ones. Almost comically, Californians can no longer be relied upon to run the trust and safety teams: that work will now be entrusted to Texans, who are famously without bias.
If this model of community notes had applied in Britain at the time of the Southport riots last year, there would have been no-one to arbitrate on the wild rumours about the identity or religion of the alleged assailant: the truth would have taken second place to the competing claims of whoever wished to pitch in.
Imagine how such community notes would work in raging conflicts where communities are torn apart, deeply polarised or under unprecedented existential pressure—such as the fragile, ongoing transition in Syria, where sectarian and vindictive violence could flare up, fuelled by incitement and disinformation on pervasive social media platforms.
In the recent elections in Georgia, it took real courage for on-the-ground fact-checkers to uncover fake content, disinformation and propaganda. No surprise, then, to read a warm welcome for this week’s announcement from Georgia’s ruling populist, pro-Russia party: “GD rejoices at news of Meta rolling back factchecking.”
But that’s to get ahead of ourselves, for there is no suggestion yet that these measures—designed to placate people living in the very polarised society where Meta is based—will be pushed out across the rest of the world. Insiders believe it could take Meta up to a year to design and engineer a community notes system for the United States. By which time the country will be gearing up for the 2026 mid-term elections, which could conceivably produce a very different political environment from the one in which Trump has effective control of the White House and both chambers of Congress, along with a politically sympathetic Supreme Court. Who knows if, by then, it will be time to move fast and break things again?
But there is a bigger problem. The hastily introduced changes to major policies governing content on Meta platforms risk rebranding Meta as an American company rather than a global one, even though the majority of its users (not to mention its revenue) are outside the US. That may please the incoming US administration, but Zuckerberg must now consider how to reconcile his evident wish to please Trump with the direction in which much of the rest of the world is heading.
In Europe and the UK, for instance, the mood is (rightly, in our view) pulling in the opposite direction, with a growing legal and regulatory consensus that the big tech giants must demonstrate they have systems and processes in place to mitigate the harms that can flow from unfettered speech on social media platforms.
It is worth reading Meta’s 95-page analysis of systemic risk assessment and mitigation, submitted only four months ago as part of its obligations under the EU’s Digital Services Act (DSA). It is a comprehensive document, showing a company that appears to be sincerely grappling with the burden of being responsible for the speech of 2.5bn people. It boasts of having assembled a cross-functional team of more than 1,000 people to meet the EU’s safety and integrity requirements.
Cut to January 2025 and Mark Zuckerberg is promising to work with Donald Trump to “push back on governments around the world [who are] going after American companies and pushing to censor more.”
As a global company with global rules, Meta cannot—at least as it presently operates—have radically different systems in different geographies. Technically, this would be a daunting challenge. Would professional fact-checking by a French company be accepted when it affects an American user posting from Tunisia about transgender issues in Texas?
There will be technical and jurisdictional challenges, many of which seem not to have been well considered before the hurried changes Zuckerberg announced. Content policy changes affecting billions of users are usually the outcome of a deliberative process involving stakeholders such as governments, experts and civil society organisations. It is very doubtful this was the case with the changes to Meta’s very complex hate speech policies introduced this week—hours after Zuckerberg posted his video.
As journalists, we of course cherish freedom of expression. But we also understand the absolute need for information integrity and credible sourcing, as well as for weighing the impact of speech. Misinformation, disinformation and hate speech—including dehumanisation—can kill, as we have seen in Rwanda, Myanmar (where Meta was implicated) and now in Gaza; can lead to widespread civil disturbance, as we have seen in the US and the UK; or can exacerbate conflicts, as has been evident in Ethiopia and Sudan.
If left to proliferate unchecked, misinformation and disinformation also undermine public trust and allow conspiratorial views of all kinds to dominate. We know how the vilification of whole groups of people based on their national, ethnic, religious or class background can lead to real-world harm. Meta knows that from years of experience.
It’s too early to predict how these changes will play out. In line with its company mantra, Meta has moved fast to redraft its policies and community standards to be more permissive of certain kinds of speech and to reclassify content deemed to violate its rules. This is a technical challenge as much as an ethical one—and the devil may well be in the engineering detail.
Where does this leave Meta’s Oversight Board, of which we are members? You can make the case that this is where its role gets really interesting. Many of us became involved because it was patently unsatisfactory for one man—Mark Zuckerberg—to have unfettered control over the speech of billions of people. At the same time, few people living in democratic societies, let alone under authoritarian regimes, would wish to hand that responsibility over to politicians.
Meta hasn’t gifted control of its platform to Donald Trump, but its behaviour this week suggests a company extremely keen not to displease him. All the more important, then, that there is an independent, empowered body that can publicly hold Meta to account, through binding decisions and advisory opinions on many contentious issues. We welcome the challenge.
No-one would pretend that the current policies and standards—never mind the fact-checking and moderation processes—are perfect. They’re clearly not. But the danger of moving fast is that what gets broken is not “things” but people: their safety, the credibility of the information they rely on, societal trust and other valuable public goods. Such things are not easily fixed once broken, and the price could be exorbitant.
Khaled Mansour is a writer and novelist. Alan Rusbridger is editor of Prospect and a former editor-in-chief of the Guardian. They both serve on Meta’s independent Oversight Board but this article represents their own views.