If you’re finding the hype and doom about artificial intelligence bewildering, you’re not alone. Hundreds of billions are pouring into technologies which, if you believe Bill Gates, could solve climate change or, if you listen to AI “godfather” Geoffrey Hinton, could destroy humanity.
Amid all this uncertainty, one thing is clear: AI is upending journalism—changing everything from how news is gathered, created and distributed, to what the word “news” itself even means. AI is imperfect but it can read and write, listen and speak, consume, produce, synthesise and contextualise information with startling speed and efficacy, outpacing human journalists, editors and analysts.
It is now the primary topic of discussion in the boardrooms of newspapers and broadcasters. Some are proceeding with great caution, while others are piling in. “Ask FT” provides premium subscribers with answers to questions about decades of Financial Times journalism; NBC cloned the voice of legendary Super Bowl broadcaster Al Michaels to deliver its Paris Olympics coverage. “It was not only close, it was almost 2% off perfect,” Michaels told Vanity Fair.
That said, most efforts to integrate AI have so far been focused on relatively mundane tasks—copy-editing, summarising, generating headlines—that would be familiar to editors of the 18th century. Very few address the fact that AI might fundamentally transform how societies and citizens understand the world. And this will matter for all of us, not just for the news industry.
Take “generative search”, rolled out last year by Google and others. It poses grave threats to news media, because it all but eliminates the need to visit many news sites by offering internet users short, AI-generated answers to questions such as “who started the First World War?” Handled badly, these new capacities could help untruths and distortions spread at a scale never previously possible. Recall the tech titans and Donald Trump’s inauguration and then imagine these tools applied to queries such as: “Who was responsible for the January 6th attack on the Capitol?”
Rather than producing news articles or videos broadcast to large audiences, AI-driven systems can create news items that are personalised to every individual. A good analogy is if every piece of music on the streaming service Spotify was synthetically created for each listener, tailored to their personal preferences, rather than produced by an artist for many listeners. Ultra-personalised news might tackle growing news avoidance, but it would also be ephemeral, leaving behind no record. Unchecked, this would quickly vanquish what’s left of shared facts. The public square would be gone, leaving our world dramatically more fragmented and polarised.
So, what can be done—particularly at a moment when so much that underpins democratic societies is at stake? The news industry has an unimpressive track record on adapting to major disruptions. In the late 1990s and early 2000s, when tech companies were eagerly grasping the obvious long-term opportunities of the fledgling internet, newspapers, magazine publishers and broadcasters largely remained unimpressed by bloggers, online communities, search engines and audio- and video-on-demand. Social media was treated as a minor marketing channel best staffed by junior journalists and interns.
By the time the immense relevance of these new forms of media was established, it was too late for most news publishers to participate in anything other than a subservient capacity to the tech platforms. Large swathes of the industry were decimated, and “news deserts”—where people have no access to local information—formed, along with the many attendant downsides for democracy. More than half the counties in the United States have little or no local news coverage; in the UK, more than 320 local titles closed between 2009 and 2019, as their advertising revenue fell 70 per cent across that decade.
Generative AI, propelled by a multi-billion-dollar investment surge, will likely lead to even more dramatic changes. News organisations face a clear and unavoidable decision: will they use AI to automate the work that they already do, continuing to cling to familiar processes and products, or will they learn from the mistakes of the last major technology-driven disruption, and reimagine journalism altogether?
Last April we were part of a team from the Open Society Foundations that convened more than 60 experts from across media, technology, business, policy and law to identify how AI might reshape the media in the next 10 to 15 years—and to help news leaders and investors act with ambition.
One scenario is that AI might enable new kinds of news and journalism delivered entirely without the involvement of journalists. Many examples are already emerging, the most comprehensive and well-funded from a technology company we already know well: X.
Much has been written, including in this magazine, about the transformation of Twitter into X by Elon Musk. Most of the Sturm und Drang has centred on the decline and politicisation of content moderation, but Musk has ambitions that extend further.
X.AI is a sister company of X that produces the “Grok” family of large language models, trained on the platform’s archive. Unlike ChatGPT, Grok is continuously improved using “community notes”—a kind of collective fact-checking by users of X. Mark Zuckerberg recently announced that Meta will replace its fact-checkers with a similar feature. The purpose of Grok seems to be to facilitate a new AI-assisted form of “citizen journalism” as an alternative to legacy media. It is earnestly described by Musk as “a maximum truth-seeking AI”.
These capacities have been publicly operated since April 2024 as “X Stories”, a news service for paying subscribers. Using Grok, Stories continuously reads every new post on X (almost 500m each day) to identify emerging stories that are likely to personally interest each individual user. It then creates an up-to-the-minute written narrative and headline for each story.
When it works well (which is not often), using Stories feels like having a small newsroom to constantly read the entire platform on your behalf, and present freshly written stories to you in an ever-updating briefing. It is like the kind of news service a billionaire or a president might use. It can also feel quite alien, as if the “journalists” producing it had never read a newspaper. Recent stories offered to us included “A Day of Hope and Positivity on Social Media” and “Global Headlines Reveal Diverse Concerns”.
Stories remains patchy, but an improved version—or something like it—could soon produce a fundamentally new form of news, entirely without human reporters and with none of the editorial norms and practices of traditional newsrooms. Stories performs no real verification. It produces none of the permanent “artefacts” of traditional news: nothing to link to beyond the original source posts, nothing to correct, nothing to place in an archive, and no shared experience between many people and over many days.
In the right hands, such products might help societies become better informed—they could cover every parliamentary session and council meeting, every court case and speech, something that traditional local journalism has been unable to afford in recent years. Petabytes of government data could be transformed into accessible and personalised stories about your child’s school or your GP, much as if you had your own personal reporter.
At best, these systems could put enormous information-gathering capacity into the hands of ordinary people who live far from the centres of political power. But they also present substantial threats—not only to journalists who fear for their jobs, or to beliefs about human exceptionalism, or even of widespread disinformation from AI models that follow the agendas of owners such as Musk (or even of their own). Perhaps the most disturbing possibility is that these systems might directly threaten the centuries-old model of news as a self-correcting “first draft of history”.
The practice of public-interest journalism involves, among other things, establishing the veracity of information before publishing it, and correcting the record if it turns out to be incorrect. Through this process, day after day, journalism accumulates an imperfect but permanent record, archived as articles, audio and video, of what has happened. This record has been verified; it can be referenced, contested, corrected, explained and shared by many people. Its origin is known and accountable. AI-mediated news systems like X Stories provide none of those things.
This is not to say that AI systems cannot be deployed to serve the public. In Colombia, the news outlet Cuestión Pública has prototyped a fine-tuned language model based on its high-quality investigative journalism and structured data, which delivers fact-checked breaking news in a fraction of the time it takes a human to do so. Instead of just replicating traditional news formats, this newsroom is attracting younger audiences using games: riffing on Hollywood and Netflix blockbusters (“Game of Votes” and “I Know What You Did Last Legislative Term”) to expose corruption in politics. In southern Africa, the news outlet Scrolla has deployed an AI tool that helps community-based reporters to report news in articulate and accessible formats.
There are countless prototypes and experiments like these under way around the world. But much of today’s media lacks the imagination and risk tolerance needed to reimagine journalism’s ability to self-correct and seek accuracy at scale for the AI era, and to build alternatives that are not owned and driven by erratic billionaires such as Musk.
What would AI-assisted news look like if it were designed to serve the public interest? It would allow for cultural differences between nations, regions and towns, while providing shared, relevant narratives and stories assembled from the highest-quality information. It would be impartial to political parties and ideologies, and free from the influence of interest groups. It would likely need to be publicly funded, with a mandate to serve everyone with relevant and accurate information. As it happens, the UK already possesses a flawed institution that largely fits these criteria, and which is about to undergo a major review into the next 10 years of its future.
It may feel surprising today, but the BBC’s early years were defined by breathtaking innovation: the organisation invented the technology and editorial grammar of broadcast news. In the 1940s, it pioneered the adaptation of broadcasting to the new medium of television. By the 1990s it dominated British culture and journalism, embracing the early internet with an enthusiasm uncommon among its journalistic peers. It launched its news website in 1997, setting up the world’s first news production pipeline designed solely for digital text in the process. And it repeated that ambition a decade later by launching one of the first news apps for the iPhone and pioneering streaming video with its very own iPlayer. For most of its existence, the BBC led the world in the pragmatic transformation of media at scale, continually learning how to inform, educate and entertain its audiences using the technology of the day.
Yet the BBC has fumbled AI, after embracing it early as an opportunity. Half a decade before ChatGPT, the BBC had set up a group with millions of pounds in funding and dozens of technical and editorial staff, established a strategic relationship with Microsoft and begun building one system for creating and managing multi-format content produced partly using AI, and another for serving highly personalised webpages enabled by AI. It had more AI experts in its R&D and product departments than most other news organisations, and it was building dozens of AI-based prototypes each year.
By the time ChatGPT launched in November 2022, all of this was gone. Tim Davie’s appointment as the BBC’s director general in 2020 brought a change in direction. The BBC’s AI brain trust dispersed to senior roles at Netflix, Spotify, Google DeepMind, the Government Digital Service and other competitors. The R&D department shrank and AI experimentation slowed to a crawl.
The Corporation turned to a more familiar strategy. News content and the ways we consume it were optimised for page views and engagement; a news app for the US market was prioritised; advertising and content deals were sought. Vision was out and spreadsheets were in. By the end of 2022, Davie’s BBC focused on optimising its legacy content and products, and statements about its long-term future were increasingly vague.
At the very moment when society most needs the essential functions of journalism, one of the organisations best placed to invest in them has turned away. The upcoming BBC review is unlikely to change its course. So who might build the kind of news that uses AI but serves the public interest?
Open, democratic societies have always needed public service journalism to survive and flourish. In future, they will need something akin to public service intelligence. The organisations providing this intelligence may need to be independent and publicly funded, tasked with maintaining the foundations of a self-correcting journalistic record—and, most likely, providing a public service alternative to other AI media driven by commercial or political motivations.
Based on current AI capabilities, we might safely predict that such services could, for instance, use AI to dramatically extend access to information. What could the BBC report on with the AI equivalent of 100,000 reporters? How many people could the BBC serve with resources comparable to 100,000 additional writers, videographers, podcasters and other media producers? Imagine how relevant BBC journalism would be for our lives if it could not only produce work for the UK, the nations and the regions, but also for the smallest towns and villages—or even for individuals? And how accessible might that information be if it were adapted by AI to the preferred medium, style, language, reading level, length and disability needs of every individual?
If important governance and accountability questions were thoughtfully worked through, the results could be transformative. A public service intelligence organisation could focus less on gathering, producing and presenting information—all tasks that AI will do well—and more on understanding the information needs of its audiences, becoming a “listener” rather than a “broadcaster”. It could, working with the next generation of human journalists in every community, become the intermediary between those audiences and the AI media ecosystem.
Critically, a public service intelligence organisation could, with the help of AI, gather, verify and maintain a permanent, trusted record of events. It could manage this centralised record on behalf of society, enabling others to contribute to it (much as Wikipedia does today), and deploy it to create new user experiences—be they games, or stories rendered in new languages and dialects, or entirely new formats yet to be invented. In a world that may well otherwise be dominated by news avoidance and propaganda, the value of relevant, accurate information, packaged into an experience that people actually seek out, engage with and help shape could be inestimable.
Could the BBC fulfil this role? Perhaps. Other public broadcasters are certainly taking a more proactive approach: ARD in Germany and SVT in Sweden, for instance.
But to seize the opportunities of the AI era would require fundamental changes in the BBC’s mandate, governance, vision, strategy and management. If it is to lead the transformation of media, as it did during its early years under John Reith, the first director general, its assumptions and culture would have to move away from those of personality and establishment and towards those of technology, entrepreneurship and a participatory, democratised vision of public service news. This may be too much to ask. But the alternative—creating a new institution from scratch—is likely an even bigger task.
Will the media of the future look more like X’s Stories or an AI-native BBC? These are not the only two choices, of course: both may be possible, and much more besides. The type of public service intelligence organisations we imagine here could only thrive in places with broadly independent institutions, unbeholden to state or other interests. Nowhere, including the UK, has a perfect record on this, and such environments can change very fast—but these are the kinds of questions governments and societies face about our future.
Just as the free-market liberalism that shaped the early internet helped establish the crushing dominance of tech billionaires today, so actions (or inaction) in this pivotal moment will shape generations to come. You don’t have to be an AI evangelist to appreciate its potential, or a doomsayer to realise that much depends on our ability to verify, reference, debate and continually improve our “first drafts of history”. If we lose that, we might well lose everything else.