Last year, hackers linked to the Chinese government infiltrated at least nine major US telecommunications companies. The breach was so big that it was given its own name—“Salt Typhoon”. Soon after, the US government finally got behind end-to-end encryption, and in December the FBI and the Cybersecurity and Infrastructure Security Agency urged Americans to use encrypted messaging apps such as WhatsApp and Signal.
For digital privacy advocates who have long highlighted the virtues of encryption—a security measure to ensure no one other than the sender and recipient can read a message—it was a “watershed moment”. The US government is finally listening. If only the UK government and Ofcom would listen too.
Ofcom, the communications regulator, has been tasked with implementing the Online Safety Act, a gargantuan piece of legislation with child safety at its core. When the act was passed in October 2023, Conservative ministers gave assurances that encryption would not be broken. This was considered a win for organisations such as Index on Censorship (of which I am CEO), which had campaigned for just that. Since then, such assurances have been hard to find. On the contrary, Ofcom has drafted regulations, due to go before parliament early this year, in which encryption is characterised as a “risk” factor. The implication is that tech companies should weaken encryption on their messaging services in order to mitigate the risk of illegal harms.
I reached out to Ofcom to query these lines. An Ofcom spokesperson said in response that the act “doesn’t ban platforms from using encryption”, just that “end-to-end encrypted services are in scope of the Act, which means they’ll need to assess risks to their users and take steps to protect them from those risks.” In essence, Ofcom won’t force companies to break encryption, but by highlighting it as a risk the suggestion is that it might be worth breaking if companies want to fulfil their obligations under the act.
The Online Safety Act sets out many priority offences, from terrorism and financial crime to child sexual exploitation and intimate image abuse. We all want to mitigate these risks and to create a safer internet, especially for children. The problem is that breaking encryption cannot rid the internet of criminal activity and instead risks opening the door to even more bad guys.
Index has published censored artists and writers globally since 1972. Today, we use encrypted messaging apps to communicate with internal staff and to keep in touch with correspondents around the world. After the fall of Kabul in 2021, for example, we set up a group on Signal to aid Afghan journalists. For almost four years now people have discussed their situations in this group. They’ve shared stories, suggested resources for help and have provided each other with solidarity in their darkest hour, confident in the knowledge that their messages cannot be easily accessed. That group—a lifeline for many—would be gone in an instant if encryption were broken.
I could sing a paean to the merits of encryption from a human rights perspective. I could add to the Afghanistan example the fact that privacy is a human right we should all be granted, and that just as we don’t want our phone lines hacked, we shouldn’t tolerate our online messages being read. I could highlight how much personal data we have stored on encrypted sites—bank details, medical notes, voting habits, sexual preferences—and how that can be wielded against us by those with malicious intent.
It’s not just about ethics. There are financial implications too. The tech company Signal has said that if encryption were undermined it would abandon UK shores. Britain’s tech sector has been hailed as a success story; according to recent statistics it employs over 1.7m people and adds more than £150bn to the UK economy every year. Do we want to risk alienating that sector?
Data breaches are also expensive for all the businesses they hit. Between March 2023 and February 2024 the average cost of a data breach rose to £3.58m, a 5 per cent increase on the previous year.
For further proof that an insecure internet is not good for business, look to Hong Kong. There, locals joke that the city is the newest Unesco world heritage site, a relic of what it once was. Security is one of the many factors that have contributed to Hong Kong losing its powerhouse status. Businesses are spooked. Stories of hacks and data leaks add to the picture of a city that cares ever less about the security of its own people.
Index recently consulted with the leading human rights lawyer Phillippa Kaufmann KC about Ofcom’s plans. She urged Ofcom to review a landmark European Court of Human Rights judgment, Podchasov v Russia, which established in 2024 that a “statutory requirement to decrypt communications” violated the fundamental right to privacy enshrined in Article 8 of the European Convention on Human Rights, and hence was not lawful. She also noted that breaking encryption would breach GDPR. Kaufmann has this advice for Ofcom: it must “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk”. In other words, Ofcom’s guidance mistakenly identifies encryption as a risk when in fact the major risks could arise from breaking encryption.
Whether we are NGOs and journalists or companies handling financial data, it’s clear we need better measures to protect ourselves. Encryption cannot stop breaches altogether, but it can frustrate bad actors enough to keep breaches limited in reach and impact. Breaking encryption, on the other hand, whatever the admirable intentions of those who seek it, would not stamp out illegal harms on the internet. It passes neither a human rights test nor a business test, and it won’t make our internet safer for children either.