The public inquiry into the handling of the Covid-19 pandemic in the UK will have a lot of ground to cover. Certainly there were specific errors of political and scientific judgment that left the UK, by the end of June, with the second highest per capita mortality rate in the world, the highest number of excess deaths in all of Europe, and on course to take the worst economic hit in the developed world. The crisis was mismanaged throughout: from the obsessive focus on “getting Brexit done” even while the warning signs were emerging from China in early January, to the woeful public messaging, the scandalous abandonment of testing and neglect of care homes, and the collapse of trust in government after Dominic Cummings showed he could flout lockdown rules with impunity.
There were also failures over the longer term: the lack of preparedness for a long-expected lethal pandemic; indifference to the alarming outcome of the 2016 “Exercise Cygnus” flu simulation; no capacity-building for widespread testing despite that being recommended 20 years ago in the Phillips report on the BSE outbreak; the slow strangulation of the NHS during the austerity years.
All of these issues and more deserve scrutiny. But if we want to avoid another catastrophic outcome like this—for no one doubts there will be other pandemics—the inquiry will need to look closely at something else: the mechanisms for feeding scientific advice into government policy-making. Are these fit for purpose?
This is the question examined in my BBC radio documentary “Led by the science,” for which I spoke to many key figures involved in science policy advice during the pandemic and before, and to academic experts on the subject. As expected, the answer is not a simple one—but it is nonetheless vital that we heed it.
At face value we should have done so much better on “the science.” “World-beating” is a tarnished phrase, but the UK undeniably has some of the top experts in infectious disease and epidemiology on the planet. The key advisory infrastructure has been in place for some time—the post of government chief scientific adviser (GCSA) was formalised by Harold Wilson in 1964, the process of rapidly convening experts to deal with an epidemic was honed to good effect during the foot-and-mouth outbreak in 2001 under Tony Blair, and the Scientific Advisory Group on Emergencies (Sage) was introduced by the GCSA John Beddington a decade ago. What’s more, the current GCSA Patrick Vallance has years of experience in public health as former director of R&D at GlaxoSmithKline, while the Chief Medical Officer (CMO) Chris Whitty is an epidemiologist with experience in fighting the spread of the Ebola virus in Sierra Leone.
All the same, the coronavirus stress-tested this apparatus as never before. “The Sage committee is normally a backroom entity,” says James Wilsdon, professor of research policy at Sheffield University. “It was never set up to deal with this degree of intense scrutiny from the public and media.”
Julia Pearce, a social psychologist at King’s College London who advises the Cabinet Office on behavioural aspects of public-health emergencies, says that in principle the government’s mantra—repeated with drilled precision by ministers—of “following the science” made sense for developing the trust needed to guide public behaviour. “The primary route to cooperation is trust in the people who are communicating the message,” she says. “It’s helpful that you have medical personnel on hand who have the credibility to support those messages.”
But problems with that slogan loomed from the outset. “From the communications perspective there’s a clarity and simplicity to the idea that ‘the science’ is a singular thing,” says Wilsdon, “[but] it was problematic to allow that language to emerge early on.” It traduces the real nature of science, especially in a situation fraught with so much uncertainty: about the nature of the virus and the respiratory disease it caused, about how it spread and who was most at risk, and about the extent of infection already in the country. As Paul Nurse, a biology Nobel laureate and head of the Crick Institute for biomedical research in London, says, “We are taught science at school like it’s been chiselled in granite. But science that we do today in the case of a pandemic isn’t that—it is tentative and will change as knowledge accumulates and ideas are tested.”
The daily press briefings, clearly designed to reinforce the appearance of being “science-led,” were performative governance—precisely the style that Boris Johnson had developed to “get Brexit done.” “From the very first moment I saw the chief scientists flanking the prime minister,” says Wilsdon, “it rang all sorts of alarm bells, with the blurring of the distinction between advice and decision-making. You remove any accountability of the politicians.”
It was the stance that has become all too familiar: look confident and unified, admit no regrets, stay on message, stick to snappy slogans. In this political mode of presentation, mistakes—and there would inevitably be mistakes in a situation like this—will look like weakness. We should not be surprised that a government and prime minister willing to fake or deny facts (in the face of documented evidence) would distort the figures on testing, advertise useless initiatives with inflated nationalistic rhetoric, concoct absurd stories about missed emails or eyesight tests, and—perhaps most shamelessly of all—attempt to rewrite the history of a crucial stage in the pandemic response with Matt Hancock’s assertion, contradicted by his own words, that lockdown began a full week before 23rd March.
It was, in retrospect, naïve of the chief scientists to imagine that a government that had already demonstrated this modus operandi would accommodate the contingency, uncertainty and fallibility that good science must acknowledge. Indeed, the scientists might have foreseen that they too would be drawn into the same habits. “The relationship between the scientific and medical advisers and the politicians as we were managing the early phase of the epidemic was strangely collusive,” says Richard Horton, editor of The Lancet. “The scientists were acting together with the politicians to support the political response”—for example, by repeating the highly questionable claims that the UK was well prepared, that there was adequate protective equipment for the NHS and no problem with shaking hands. “Why did we have our scientific and medical advisers telling untruths to the public in support of politicians?” Horton asks. Whether or not one accepts these allegations, it’s alarming that someone of Horton’s standing feels they must be made.
The tragic irony is that we had been here before, and again failed to learn the lesson. During the BSE epidemic in the early 1990s, John Major’s minister of agriculture John Gummer notoriously asserted to the press that science had established British beef as entirely safe to eat, trying to encourage his (non-compliant) daughter to demonstrate as much. In fact science was not sure about that at all, and it turned out that this neurodegenerative condition was fatally transmissible to humans. It is a measure of how far we have now fallen that the 177 potentially avoidable deaths from eating infected beef were seen then as a great scandal.
The Phillips Inquiry in the wake of that episode stressed the importance of transparency and independence in the scientific advice given to government. The GCSA at the time, Robert May, made the point with his characteristic and refreshing bluntness: “You can see the temptation… to hold the facts close… My view is strongly that that temptation must be resisted, and that the full messy process whereby scientific understanding is arrived at, with all its problems, has to be spilled out into the open.”
Transparency was notably absent from the early workings of the Sage group as the Covid-19 pandemic took hold. The (somewhat fluid) membership of the committee, and its minutes, were only made public after it was revealed that Cummings was attending and contributing to this supposedly independent process. It was in response to those concerns that David King, GCSA during the foot-and-mouth outbreak in 2001, took the unprecedented step of creating the Independent Sage group in early May, largely to provide a forum of expert advice that would be open and accessible to the public. He was motivated too by a wish to escape the crushingly simplistic notion of “the science” that speaks with one voice.
Perhaps the most alarming failure of transparency in the scientific advice concerns two issues that seem to have been central to the delay in the government’s decision to enter full lockdown, which happened (regardless of what Matt Hancock now says) on 23rd March. As late as 13th March, Vallance was still talking about the need to develop immunity in the population at large—evidently the same thinking behind Johnson’s comment eight days earlier that one alternative to locking down was to “take it on the chin.” The idea of letting the virus spread to around 60 per cent of the population to achieve “herd immunity” was greeted with astonishment by experts outside the UK, but was only disowned (with a denial that it had ever been considered) when epidemiological models showed what already seemed obvious for a disease with this mortality rate: the deaths would be catastrophic. Infecting 60 per cent of a population of some 66 million, with a fatality rate of the order of one per cent, implies deaths running into the hundreds of thousands. Yet now no one knows, or will say, how this discussion of immunity ever got into the conversation—indeed, before it was even clear whether infected individuals developed immunity to Covid-19 at all.
The second scientific mis-step was Whitty’s assertion in early March that if you “went too early” into lockdown, people would get fatigued and stop complying with the rules. Again, the genesis of the suggestion is unclear, but both the Sage behavioural scientists and the government’s “Nudge Unit,” which also advised on public behaviour, categorically deny it came from them. Stephen Reicher, a member of Sage’s behavioural subgroup, says his experience suggests the opposite expectation: that on the whole people tend to be measured and compliant in a crisis.
So not only were these two critical pieces of “advice”—presumably catnip to a libertarian government eager to keep the pubs open—wrong, but their provenance is a mystery. In the event, Sage modellers now concur that a lockdown a week earlier might have halved the current death toll.
The compromising of the chief scientists to stay on message was most disturbingly exposed in the press briefing of 28th May after the news of Cummings’s Durham trip broke. First Johnson intervened to prevent Whitty and Vallance from answering questions on the matter; then, after persistent probing from the press, they showed that their silence was not imposed but voluntary. Their claim that it was a purely “political” issue was untrue. As Reicher explains, public compliance with lockdown measures or track-and-trace schemes depends vitally on trust—and studies now confirm that trust in the government plummeted after the Cummings affair.
Errors of judgment must not be unduly castigated with the benefit of hindsight. The challenges of this crisis for those charged with navigating it were and are enormous; given the uncertainties, mistakes are unavoidable. What can’t be tolerated, though, is silence, untruths, cover-up, dissembling, and a refusal both to confront the hard facts of the outcome and to take some responsibility for them.
There is surely room for improvement in the structures governing scientific advice for policy-making, but it’s not clear that the system was so flawed that this result was inevitable. Rather, the government failed in its much-repeated promise “to take the right decisions at the right time”: it delegated too much of the decision-making to science, waiting to be told what to do rather than taking swift action. The divergence of science and policy we now see with the relaxation of distancing rules and plans to open schools, while concerning in some ways, at least means science is no longer made a shield or an alibi for political decisions that should incur political responsibility.
At root the problem is that the fundamental principles of how science, at its best, is done are incompatible with what politics seems to have become. Science must be pluralist and internationalist, especially in dealing with new problems. It must be allowed to err and to admit to error. It cannot make premature claims to certainty, or make its case with cherry-picked data. Neither should it pretend to be free of human bias or hesitate to identify where that occurs. For this reason, science cannot accept the master-servant role implicit in Winston Churchill’s much-quoted notion of scientists being “on tap but not on top.” However painful it might be to politicians today, science will serve them best when it is socially distant and not beholden to political constraints and obligations.