Jessica Burgess sent her teenage daughter Celeste a message on Facebook in April 2022: “Hey we can get the show on the road the stuff came in… The 1 pill stops the hormones and then you gotta wait 24 hour 2 take the other.” Her daughter, around 23 weeks pregnant, took the pills and aborted her foetus.
After a tipoff, local police investigated, and both Burgess and her daughter now face felony charges in the US state of Nebraska. The evidence against them: the messages mother and daughter sent on Facebook, obtained by serving its parent company, Meta, with a warrant.
The Burgess case is receiving attention in the US owing to the Supreme Court decision that eliminated the constitutional right to an abortion, allowing individual states to limit or prohibit the termination of a pregnancy. While Burgess is being prosecuted under an earlier Nebraska law, which banned most abortions after 20 weeks of pregnancy, her case raises the spectre that digital data may be used to prosecute women seeking abortions in jurisdictions that have since introduced near-blanket bans. US social media has been filled with warnings urging women to delete period and fertility tracking apps for fear that the data could be used against them. Experts warn that other digital traces—texts to a friend about an unexpected pregnancy, searches for information about pharmaceutical abortion—are likely to be incriminating.
Facebook, the whipping boy of the global techlash over the past few years, is facing widespread criticism for its co-operation with Nebraskan law enforcement. It is not clear whether Facebook’s behaviour is especially egregious. Facebook/Meta’s internal transparency reports indicate that the company receives law enforcement requests from US authorities for customer information more than 100,000 times a year, and releases at least some information in around 90 per cent of cases.
Google/Alphabet, in a similar transparency report, said it released data in response to 82 per cent of US law enforcement requests in the first half of 2021. Both companies say they fight overly broad requests for information, and reports indicate that requests from outside the US are often handled differently—Facebook complied with less than 60 per cent of the 100,000 requests for information from Indian authorities in 2021.
Given that platforms like Facebook and Google would face serious consequences were they to systematically refuse to honour legal requests for information in the countries where they operate, how should privacy-conscious individuals respond? The obvious answer is that they should begin using end-to-end encryption (or E2EE).
This is a form of communication popularised by American cryptographer Moxie Marlinspike, who created the messaging service Signal. When two people communicate on Signal, their calls or text messages are encrypted on their phones or laptops, and can only be decrypted by the recipient’s device, which contains a secret decryption key. Telephone companies and internet service providers involved in transmitting the message cannot decipher its contents, nor can Signal itself. Documents revealed by whistleblower Edward Snowden indicate that even leading intelligence agencies find it difficult or impossible to decrypt messages sent using Signal and similar messaging services—Signal’s website features an endorsement from Snowden, saying: “I use Signal every day.”
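To make the principle concrete, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library (a Python binding to libsodium). It is an illustration of the idea only, not the Signal Protocol itself, which layers key agreement and message "ratcheting" on top of primitives like these; the key names and message are invented for the example.

```python
# Illustrative sketch of end-to-end encryption with PyNaCl (libsodium bindings).
# Real messengers such as Signal add key agreement (X3DH) and the Double
# Ratchet on top of public-key primitives like these.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only the public halves are shared, for example via the service's key directory.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_public)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only opaque bytes; it holds no key that
# can decrypt them, so a warrant served on the operator yields nothing readable.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_public)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"meet at noon"
```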
End-to-end encryption found a massive new audience in 2016, when Marlinspike worked with the founders of WhatsApp to make chats and phone calls end-to-end encrypted by default. The messages Burgess and her daughter sent to each other were transmitted using Facebook Messenger, which supports end-to-end encryption but does not turn it on by default. Had Burgess and her daughter co-ordinated over WhatsApp, Meta would likely have had no data to turn over to police—the conversation would be unreadable by the company or law enforcement.
So why aren’t all online conversations end-to-end encrypted by default? It’s possible that many more soon will be—public pressure from the Nebraska case seems likely to accelerate Facebook’s use of E2EE in its Messenger tool. But widespread adoption of end-to-end encryption might have an additional—and very complicated—side effect. It might let companies like Facebook off the hook for their role in enabling the spread of mis- and disinformation.
WhatsApp, which now has over two billion users worldwide, is popular both for conversations between individuals and for group chats. The group conversations are often important community spaces, allowing extended families to remain in touch across national borders, or providing a space for people living in the same apartment building or neighbourhood to exchange help and advice. Unfortunately, these spaces—which bridge the gap between private messaging and traditional social media—are powerful vectors for the spread of mis- and disinformation. Misinformation linking 5G towers to Covid-19 infections spread widely on WhatsApp, leading to a rash of arson attacks in the UK. In India, WhatsApp groups have been linked to religious and ethnic violence, leading WhatsApp/Meta to limit users’ ability to forward messages in a bid to slow the spread of misinformation.
Many of the techniques Meta uses to limit the spread of mis- and disinformation on Facebook cannot work on WhatsApp. Facebook uses algorithms to detect whether a post to an individual’s timeline or community group contains false claims about coronavirus, and either places a warning message on it or blocks it from being posted. On WhatsApp, Meta will only know the contents of a post if someone who receives it forwards it to WhatsApp’s help centre.
Scholars of mis- and disinformation have been wrestling with the problem of how to study encrypted online spaces. Unlike studying Twitter or YouTube, where the vast majority of material posted is viewable by the general public, researchers studying misinformation on WhatsApp must either join the groups where misinformation spreads (an ethically complex proposition) or rely on third-party “tip lines,” where users who believe they’ve received misinformation can forward messages or images for debunking. One particularly successful tip line was Verificado, a platform run by technology company Meedan in partnership with a coalition of Mexican journalism organisations. Verificado investigated WhatsApp messages spread during Mexico’s 2018 election and published debunkings of misinformation through their news outlets. Meedan has gone on to run similar tip lines at other moments when disinformation was likely to spread, such as India’s 2019 general election.
Should social media platforms work to protect user privacy and make end-to-end encryption the default? Should they limit encryption to interpersonal (rather than group) messaging in order to better fight misinformation? A cynic might predict that Facebook and others would embrace end-to-end encryption as a way of avoiding responsibility for problematic content on their platforms. (Indeed, cynics said precisely that when Mark Zuckerberg announced in a 2019 blog post that Facebook was pivoting to “A Privacy-Focused Vision for Social Networking.”)
Perhaps the most helpful advice in resolving the tensions between privacy and the proliferation of false information comes from philosopher Helen Nissenbaum. Nissenbaum argues that people experience online privacy in terms of “contextual integrity”: we expect certain levels of privacy in certain types of interaction, and when those expectations aren’t met, we feel our privacy has been violated. It’s easy to understand why handing over the messages of a mother and daughter discussing an abortion feels like a violation of privacy. It’s less clear whether a platform factchecking a problematic meme in a community WhatsApp group is a public good or public surveillance. Privacy advocates and legislators will need to wrestle with these questions to navigate a landscape in which privacy violations can see people sent to prison, and misinformation can threaten the health of democracies.