In the 1990s, the promise of access to knowledge and vast digital networks of like-minded people seemed limitless. That promise has since soured as technology is increasingly harnessed to sow division and peddle conspiracy theories. Parallel tribes are incentivised to adopt extreme behaviours by the very design of the digital platforms they use, and are becoming increasingly distrustful, not only of each other but also of the idea of objective “truth”.
According to analysis by BuzzFeed News, the top 20 fake US election stories published on Facebook in 2016 generated more engagement than the top stories from mainstream outlets. The problem is similarly widespread in the UK: a survey conducted by Loughborough University found that almost half (46 per cent) of respondents had come across news on social media in the past month that they thought was not fully accurate, while 43 per cent of those who shared news admitted to having passed on content they knew was fake.
Worse, the task of differentiating between authentic content and fabricated material is likely to get harder as the technological means of creating convincing deepfakes with AI go mainstream. But technology can also be used in the fightback: journalists and fact-checkers, for example, are testing the potential for AI to be deployed as a means of flagging unreliable information.
The knock-on effects of disinformation for democracy, social cohesion and public health (as evidenced by Covid vaccine scepticism) have been well documented. The integrity of democracy depends on voters being able to make informed choices and trusting the outcome of elections. Protecting a journalistic ecosystem that strives to report the truth is therefore not just a policy priority, but a national security one too.
Our contributors explicitly acknowledge the national security risk posed by disinformation. Elisabeth Braw suggests that, just as those on the home front in the Second World War were warned against irresponsible information-sharing, we need a modern “pre-bunking” service to protect citizens. Ethan Zuckerman imagines a future in which policymakers get one step ahead of conspiracy theorists by using AI to simulate the more paranoid corners of the internet. David Halpern sets out an ambitious agenda for proper democratic governance of social media platforms, with rights, processes and user representation properly codified.
This article first appeared in Minister for the Future, a special report produced in association with Nesta.