Sending and receiving communications securely through encryption used to be the near-exclusive preserve of national security agencies of the state. Now everyone—the bad as well as the good—can send and receive protected messages and much else besides with some degree of confidence about security and privacy. And governments are scared.
States worry that vital law enforcement and intelligence capabilities to thwart terrorism, online child abuse and other digital harms are disappearing. As a result, in a long-running and toxic stand-off, Big Tech is vilified for putting profit before safety, and privacy advocates for being indifferent to horrific online harm. The response is just as heated: opponents accuse governments of using online crime—and particularly the highly emotive horrors of child abuse—as cover for a desire to restore the so-called “mass surveillance” capabilities they have begun to lose, and of talking nonsense about technology they don’t understand.
The British government has adopted one of the most strident tones in this tussle over online security (though it is far from isolated in its concerns). Yet the actual substance of UK policy is a subtler search for compromise. But can it work? And if not, what happens? The answer matters hugely for the future of internet security.
Half a century of tensions between governments and the tech sector over cryptography have come to a head over the now commonly used service of end-to-end encryption, or E2E. This is where a message can be read only by the sender and receiver: the communications provider carries it but holds no key to unlock it. Since the beginning of regulated communications, there has always been some sort of central store of information to which governments could require, or covertly gain, lawful access. Think of the traditional sorting office for letters. With E2E, the problem is that there is, in effect, no “sorting office” to inspect. So intelligence and law enforcement agencies worry that they have no way of combing data to find the bad stuff.
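To see why the provider is locked out by design rather than by choice, consider a minimal sketch in Python using the PyNaCl library. This is a toy illustration, not the far more elaborate protocol any real messenger runs (Signal and WhatsApp, for instance, use the double ratchet): each device generates its own keypair, messages are encrypted to the recipient’s public key, and everything the provider relays or stores is ciphertext.

```python
# Toy illustration of end-to-end encryption (not any real messenger's protocol).
# Requires PyNaCl: pip install pynacl
from nacl.public import PrivateKey, Box

# Each user's device generates its own keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Meet at noon")

# This ciphertext is all the provider ever handles in transit.
# Without Bob's private key, it cannot recover the plaintext.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"Meet at noon"
```

The sorting office, in other words, still exists as a relay, but there is nothing legible in it for anyone to inspect or hand over.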
This is a genuine and serious problem. However, companies and consumers have voted with their clicks. E2E is hugely popular. Users want it. Apple has long boasted of it on its services. Signal is a hugely successful not-for-profit service designed solely around E2E. The laggard in adopting E2E is Facebook (excluding WhatsApp, which was proudly marketing itself as E2E-enabled before Mark Zuckerberg bought it). The company has pledged to catch up with the rest of the industry over the course of 2022. But Facebook’s tardiness, and its unpopularity, make it an easy target for political leaders.
A UK government legal power acquired in 2016, under the Investigatory Powers Act, hangs over the company. If used, it could, in effect, make it unlawful for Facebook to provide E2E on its Messenger and other platforms in the UK. For now, the government has used the bully pulpit. It frequently cites figures showing that over 90 per cent of referrals to the US’s child protection centre come via Facebook’s platforms (the share is lower in the UK, though still more than half).
But these figures don’t tell you what the government wants you to think they tell you. All they indicate is that virtually all of Facebook’s competitors are already end-to-end encrypted. Nonetheless, the government’s onslaught against Facebook continues. In September, a well-briefed Daily Telegraph reported that home secretary “Priti Patel has launched a worldwide hunt for tech wizards to crack Facebook’s encryption so Britons are protected from child abusers and terrorists.”
As well as being couched in language designed to inflame even the most moderate opponent, this framing was technically ridiculous. There was no proposal to “break” this near-unbreakable encryption, which Facebook has not even implemented yet. In fact, the government was putting fresh detail on a more imaginative approach. The “hunt” in question was the launch of a competition offering a research prize of £85,000 to develop “innovative technologies which demonstrate how tech companies could continue to detect images or videos showing sexual abuse of children while ensuring end-to-end encryption is not compromised” (italics mine).
In other words, the policy is technological “cakeism”—the government is trying to eat its lawful access cake while having end-to-end encrypted protection for citizens more generally. It is a laudable policy objective, building on a set of principles published by two GCHQ technical experts in 2018 which showed a generosity of spirit all too lacking in most exchanges about encryption, privacy, security and online harms.
But like all “cakeist” policies, the test is whether it actually works—or is just an attempt to do the impossible to avoid hard choices. Most experts are highly doubtful, and believe the government is searching for the digital equivalent of alchemy. The objections are many and technical, but at their heart is a simple concept: you cannot build an entry point only for the law enforcement and intelligence arms of democratic governments. If they can get in, so too can hostile states, cyber criminals and anyone else with the know-how.
Earlier this year, Apple shelved a proposal to scan devices for child abuse material while preserving E2E, after a deluge of criticism. This included a detailed technical rebuttal from a stellar international cast of cryptographers in a paper called “Bugs in our Pockets?” Susan Landau, one of the authors, put the argument simply elsewhere: “If we build it, they will break in.” And by “they,” she means anybody with the technical nous, not just our police and spies.
There is also justified scepticism that E2E really is the “cliff-edge” in national security and law enforcement capabilities that the government claims. After Edward Snowden’s 2013 leaks prompted Big Tech and communications service providers to tighten their encryption practices, intelligence officials warned grimly of their digital access “going dark.” The reality was more that they were “going spotty.” They still had abundant data, but had lost access to some key sources. But that is the story of interception through the ages. Technology changes, the bad guys use the new technology, and the state catches up. And despite all the changes to online security over the past ten years, intelligence and law enforcement agencies are not short of data, or of successful operations against difficult targets.
That’s because it’s often, though not always, possible to find other ways of obtaining the crucial information needed to protect the public. In a different case five years ago, one not involving E2E, Apple resisted a demand from the FBI to write a program to unlock the phone of a dead terrorist who had murdered civilians in San Bernardino, arguing that if it were to create a new way to open one phone, every Apple device would be less safe. The FBI eventually dropped its legal case, not because it came to agree with Apple, but because it found a different way in. It’s hard to argue now that weakening the security of every Apple device on the planet would have been preferable to forcing the FBI to find another method of entry. The same principle holds for end-to-end encryption.
That said, too many opponents of the UK government’s approach are quick to dismiss the legitimate concerns of law enforcement and child protection agencies, reflecting the astounding lack of trust and goodwill in this public policy tussle. Some willingness to listen, and to appreciate the fears of the other side, would be a welcome change in this discussion.
So the government deserves a chance to put its case that a “cakeist” approach can work. But a major flaw in its position is that it hasn’t tried to do so. When cryptographic experts say that ministers are arguing not with Silicon Valley but with maths, the government simply retorts that the experts are wrong, and tells them to go and find a way. If the government really believes, in the face of such well-evidenced objections, that safe, lawful access is possible without compromising the everyday security and privacy of millions, surely the onus is on it to publish a detailed technical proposal to convince enough doubters?
If the government’s plans work, they should be cheered to the rafters. And whoever designs the solution is likely to be worth significantly more than the £85,000 on offer from Her Majesty’s Treasury. But if they can’t be made to work, online privacy and security must win the day. Banning Facebook from doing what the rest of the industry has already done makes no sense. And retrospective restrictions, or an outright ban, on iMessage’s protection seems inconceivable, never mind blocking Signal (which, it must be said, is the favoured app of countless senior policymakers and national security officials across the world) from the UK market.
The government must accept that the interests of free, open and digitally advanced societies are best served by the strongest possible and most widely available online security. So if it can’t demonstrate that its contentious plans are technically feasible, the government must instead focus on the many other ways of investigating and preventing harm, rather than weakening both our ability to protect ourselves online and our trust in our digital communications.
This article is a summary of a lecture given at the London offices of Jones Day, organised by the Bingham Centre for the Rule of Law. The full lecture can be found here.