A year on from the passage of the Online Safety Act 2023, it isn’t clear whether the legislation can fulfil its promises. Billed as a “landmark law”, the act seeks to protect children and other vulnerable groups as they navigate social media platforms. It makes tech companies responsible for preventing and removing illegal content, while empowering the communications regulator, Ofcom, to hold them to account.
With the codes of practice due to come into force next year, the UK should in theory become “the safest place to be online”. However, the act has come under attack from several angles. Some campaigners have argued it could threaten privacy or free expression, while others have stated it doesn’t go far enough.
These topics were in focus at a recent roundtable event, co-chaired by Prospect editor Alan Rusbridger and Alex Towers, director of policy and public affairs at BT Group. The event brought together policymakers, campaigners and experts to discuss the complex relationship between regulation and technology.
Alex Towers began by noting that our new digital era comes with dangers that require a sharper focus, not least on protecting children and older people. He argued that the Online Safety Act is a major step forward but that one piece of legislation can’t be expected to change everything overnight. Rather, solving the issues at stake will need continued collective action between industry, politicians and regulators—all of whom have a responsibility to do more.
A campaigner remarked that, at a time when democracy itself is being challenged by “the excesses of the digital world”, vulnerable children don’t stand much chance. He cited the galling statistic that, “every week in the UK, at least one young person ends their life with online content playing a part” and suggested that the whole system—industry, academia, government—had played a role.
In common with other panellists, he felt the main problem was not so much individual posts as “systematic amplification by algorithms or AI”. He argued that platforms should be compelled to inform the public whenever they discover unsafe uses of their products, and to provide greater transparency to researchers and regulators. An academic argued further that Ofcom’s evidence base should be open to public scrutiny.
A policymaker discussed the proliferation of content that sits below the threshold for criminality, but which nonetheless plays a role in inciting violence. She argued that Ofcom has been too cautious in applying its powers and could go further in addressing online harms. However, an Ofcom spokesperson disputed that idea. He pointed out that, while the codes will be refined over time, this needs to be done without creating too much legal uncertainty for companies.
If companies just wait to be told what to do, they’re going to fall behind
An MP argued that “we need a societal agreement on the age of access to social media”, and expressed concerns that the Online Safety Act isn’t going to realise its vision. She added that because companies self-assess whether content is illegal, it’s hard to say whether that is being done effectively. The Ofcom spokesperson reassured her that the regulator will have a supervision team in place by the end of the year. But he noted that companies need to think through these problems systematically: if they just wait to be told what to do, they are going to fall behind.
Next, the conversation moved on to the recent violence fuelled by online misinformation. The panellists agreed that industry’s responses have been mixed. While most services had taken immediate steps to strengthen their moderation, others hadn’t, and the “broader spreading of misinformation” remained a legal grey area.
One speaker commented that there were gaps in the act, especially when it came to harmful content going viral. Another speaker countered that trying to put everything in the first set of codes would risk delays—it is better to have some protections in place now and then “have that debate around where they need to go further”.
A third speaker remarked that trying to regulate this space “won’t work”. He argued instead for a culture change across the tech firms, for instance by creating online safety league tables that would allow them to harness their competitive instincts. However, an industry expert opined that there is a “lot of value in the Online Safety Act”, adding that “it absolutely is one of the instruments that drives cultural change”.
The panellists moved on to child protection and digital wellbeing. A child protection expert argued that the Online Safety Act will make it harder to share child sexual abuse material online. Labour MP Josh MacAlister’s private member’s bill to tackle addictive phone use in children was mentioned in similarly approving tones.
An industry expert said that innocuous-seeming content, such as diet advice, can become harmful if watched “non-stop”. She stated that industry is working to change the algorithms, so that vulnerable users don’t get this content recommended to them. But she emphasised the role wider society had to play, for instance through media literacy programmes in schools.
The panellists agreed that tech firms should be clearer about their parental controls, which differ from one platform to the next. One speaker argued that a “consistent, understood process… would lift an enormous amount of problems very quickly”. Another speaker said platforms need to be more transparent about their existing age assurance measures, with a view to avoiding “an almighty row from those who are concerned about privacy”.
Summing up the discussion, Rusbridger pointed to the “surprising gaps” in legislation and noted that the jury is still out on whether Ofcom is being too cautious. While highlighting the range of opinions around the table, he said there had been “common agreement… that regulation will always struggle to keep up with technology”. He closed by highlighting a campaigner’s point that, to date, we have all failed. “We should probably all go away bearing this challenge in mind,” he said.
This conversation took place under the Chatham House Rule