Sophie Dannreuther works to mitigate extreme risks: pandemics, artificial intelligence and the like. This opens her up to another risk: being a buzzkill at parties. “It can go either way,” she says. “You can get a room that is fascinated and is firing questions. Or you can get a slightly long pause.”
Dannreuther is the director and co-founder of the Centre for Long-Term Resilience (CLTR), a thinktank that helps governments prepare themselves—and us—for shocks and crises that would make the Covid-19 pandemic look like a chickenpox party.
We are chatting on a Friday morning. Dannreuther, 31, is working from her home in southeast London, and her two-year-old Jack Russell—another source of peril—is asleep on the sofa. He slept through Dannreuther’s previous meeting, in which CLTR’s team members updated each other on their work. One of them is off to Geneva to attend the UN’s Biological Weapons Convention; two of them, including Dannreuther, are working out what CLTR’s next contribution might be to the government’s National Resilience Strategy, which will be released in the coming months.
The UK has not always been so proactive about extreme risk. It was in November 2019, just before Covid reached British shores, that Dannreuther and Angus Mercer, CLTR’s co-founder, left senior jobs in Whitehall to set up their thinktank (Mercer is now chief executive). Back then, says Dannreuther, “it was often the case that people didn’t know all that much about these topics. And I think the pandemic, sadly, has meant that many, many more people have personally experienced the impact of an extreme risk.”
Covid-19 has made it easier to imagine worse pandemics, including those created in laboratories and malevolently released. But the world holds other terrors. Some have troubled us for decades (nuclear war), while others belong to the future (AI).
AI presents three kinds of risk, Dannreuther explains. The first is the risk of accidents—on a small scale (a driverless car running into a pedestrian) or a large one (a national transport network going down, or worse). Scientists worry about the problem of programming messy and contradictory human values into AIs that will follow instructions to the letter.
The second risk involves deliberate misuse. “If AI has been built into things like critical national infrastructure, then malicious actors could cause huge and very sudden harm to many people.”
The third kind of risk, Dannreuther explains, is structural: AI could bake in bad societal values. “If you are able to use AI to set up a system of the world that works for you… then it could mean that a certain set of values, or a certain way of being, could be pretty irreversible.”
In June 2021, CLTR published a landmark report, “Future Proof”, that explained how the UK could, via relatively small changes, drastically enhance its resilience to AI, engineered pandemics and the rest of the rogues’ gallery. The report was co-authored by Toby Ord, an Oxford academic who is one of the most influential voices in the study of existential risk. Ord puts humanity’s chances of being wiped out within 100 years at one in six. Dannreuther hesitates to make a similar calculation, but counts herself as an optimist. Burnout remains a risk for her and her team, though; Dannreuther unwinds with low-stakes TV (think Queer Eye rather than Mad Max) and weekend trips in her yellow camper van.
What does Dannreuther’s family make of her work? “You can quote my mum—‘I’ve no idea what Sophie does.’” That might be a cheering sign that extreme risk remains at bay. At least for now.