Legislators in the United States are apparently competing to see who can be the fiercest opponent of social media. In mid-March, the House of Representatives overwhelmingly passed a bill that would force the Chinese company ByteDance to sell the video-sharing platform TikTok to an American company, or to another company not based in an “adversary nation”. Should ByteDance fail to sell TikTok—and it’s hard to see who would be able to absorb such a big business without facing antitrust concerns—a platform with 170m US users might suddenly become inaccessible.
This does not yet constitute a TikTok ban. The Senate would need to pass the legislation, and a lengthy appeals process would follow any ban. But the proposed prohibition has been surprisingly popular on both sides of the political aisle. Joe Biden has said that he would sign the legislation, though former president Donald Trump—who initially proposed banning TikTok—now opposes the move, amid speculation that this is because Biden has begun taking fire on the platform from progressive youth opposed to his inaction on Gaza.
It’s not hard to spot the contradictions when a country that celebrates free markets and freedom of expression is considering the forced sale or wholesale ban of a popular video-sharing platform. Indeed, a blanket ban on TikTok would put the US in the company of increasingly authoritarian India, which has banned the platform on grounds of national sovereignty, and Afghanistan, which has banned the platform for being anti-Islamic. The argument that TikTok represents a national security risk—the justification for blocking it from government devices in the US, UK and several other European countries—is more theoretical than practical. Yes, TikTok’s powerful algorithms have the potential to influence public opinion, but there is a distinct lack of evidence for any concerted attempt at manipulation.
Government officials worry that China could use information collected on users to spy on their activities or movements, but experts point out that TikTok collects more or less the same data as, say, Facebook. Countering this behaviour through an application-specific ban makes a lot less sense than passing comprehensive privacy legislation. It is possible to take on some of the serious concerns about TikTok—its addictiveness, its possible adverse effects on young people—in far less radical ways.
Yet the US seems to be on a social media banning spree. States including Florida, Louisiana and Utah have passed legislation to block users under 16 from using social media without parental permission. Civil rights groups have pointed out that these bans are harmful to LGBT+ youth who may find support in digital communities (particularly if they face parental hostility) and that such bans affect young people’s ability to organise and protest online. Yet it seems likely that some of these restrictions may become national in scope. The most serious of these efforts is “KOSA”—the Kids Online Safety Act—which has bipartisan support, and would demand that platforms actively work to prevent young people from encountering content that could be exploitative or detrimental to mental health.
There are at least three forces at work behind these social media restrictions. One is the fact that so many different groups see big tech as a villain, meaning all sides can agree to gang up on it. The right believe that social media companies have conspired to silence their voices through algorithmic discrimination and blocking content. Left-leaning politicians baulk at the reckless wielding of concentrated corporate power. Even long-term internet boosters like myself have trouble defending companies who have paid lip service towards platform safety while building lucrative and invasive businesses around the collection of user data.
The second force pushing social media bans is a movement for “parents’ rights” that has gained traction with the Christian right in the US, where parents argue that they have a right to constrain which books children can encounter in school and which ideas they can learn in the classroom. If we accept the idea that parents should be able to veto curricular decisions, which affect school systems as a whole, it’s easy to see how the argument that parents should be able to control children’s social media behaviour follows.
Finally, there’s a fierce argument raging in academic circles about what effects social media is having on young people as a group. Social psychologist Jonathan Haidt has become a proponent of the argument that social media is “rewiring” young people to be more anxious and depressed. Most academic studies of social media do not find negative effects for most young people, though there are indications that young women, in particular, may be experiencing negative effects around body image. While there is an increase in young people reporting anxiety and depression, it’s difficult to argue that this is caused primarily by social media, given a widespread change in social norms destigmatising mental illness and making it more common for young people to seek help.
Even with the convergence of these forces, blocking young people from social media is fraught with difficulties. All such proposals rely on systems to authenticate the age of a given user. Previous attempts to require age verification using documents such as a government-issued ID have been complicated by challenges in US federal courts: showing government ID to access the internet is uncomfortable for people in a country where a strong mistrust of authority is entrenched as a founding principle. Furthermore, speech protections in the US make blanket bans on content very difficult to implement, and most first amendment protections of speech apply to minors as well as adults.
It’s instructive to contrast the flurry of US legislation with legislative developments in the European Union, where two major pieces of legislation, the Digital Services Act and the Digital Markets Act, have recently come into effect. These are broad pieces of legislation designed to limit the power of six “gatekeepers”—five American, one Chinese—who control large swathes of the internet market. The full implications of these laws will not be clear until we see implementation, enforcement and challenges to the legislation in question. But the initial expectation is that Google and Apple may need to offer alternatives to their proprietary app stores and to give users the choice of which default web browsers they use on their mobile phones. Similar prohibitions may force Amazon to stop favouring its own brands in its search results. For researchers like me, the most interesting element may be a promising, if vague, assurance that researchers can access data needed to apply external scrutiny—a set of accommodations that might reverse the current trend towards opacity within major social media platforms.
It’s worth asking what vision of the internet is embodied in these different packages of legislation. What do these laws tell us about how we see the internet, 30 years after it escaped the physics lab and took over popular culture?
US legislation suggests that we see the internet as dangerous and out of control: not fit for children, despite the fact that they often understand and use these tools more fluently than their parents. The US accepts the idea that social media is a powerful political force which must be kept out of the hands of foreign adversaries—an idea that’s in tension with the reality that we don’t trust American companies either, at least when it comes to the welfare of children. It’s a vision that suggests panic and regret: after a decade of near-total inaction on constraining the influence of these new media companies, there’s a desperate rush to protect Americans from the extremes of social media.
The EU stance is less alarmist, though more explicitly protectionist. Powerful companies from outside the EU are operating in ways that violate European values, the thinking runs. The new standards are built so that no EU company is identified as a gatekeeper, and therefore none are explicitly targeted for increased transparency and regulation—which is quite the coincidence. But one hope implicit in the EU’s vision is the idea that a more transparent and competitive internet could make space for new platforms to arise from within Europe itself.
Both visions of the internet are negative, inasmuch as they focus on saying “no” to contemporary excesses. American leaders demand an internet that’s safer for children, yet we’re not suddenly being inundated with government funding for research on what kinds of digital communities are healthy for young people, or funding to experiment with new networks that could help them cope with anxiety and social pressure.
The EU’s vision imagines an internet where US and Chinese powers are no longer the central actors, but the vision isn’t accompanied by the investment we might hope for in experiments to build platforms centred around the European values of privacy and user autonomy. Small-scale experiments in community-led campaigns and platforms, such as PublicSpaces and PubHubs in the Netherlands—and the recent interest in Mastodon, built by a German developer—suggest a future for more open and inclusive platforms with European roots. Unfortunately, there’s little indication yet of a sustained challenge to the centrality of existing multibillion-dollar platforms.
At such a moment, it may be worth looking towards the man who got us into this mess in the first place, Tim Berners-Lee. In an open letter to the web on the 35th anniversary of his invention, Berners-Lee pleads with us to address the “extent of power concentration, which contradicts the decentralised spirit I originally envisioned.” He offers a simple diagnosis for how we’ve gotten to this point: “Leadership, hindered by a lack of diversity, has steered away from a tool for public good [towards] one that is instead subject to capitalist forces resulting in monopolisation.”
Berners-Lee offers his new piece of digital architecture, Solid, as a possible framework for a new internet based around the idea that users should own and control their data, rather than having it locked in the servers of big businesses. Whether Solid is a solution to the many problems—real and imagined—of the current internet is a longer discussion. What’s encouraging is that, 35 years after the technological changes that brought the web into homes and workplaces around the world, its creator is still able to advance positive visions for what it could be. As we look to address the excesses of the modern web, we should challenge ourselves to imagine alternatives as broadly and optimistically as Berners-Lee does.