Politics

The ballot bots: as our politics goes online, can our laws keep up?

They didn't swing Brexit. In fact, most of them seem to be German. But make no mistake: they’re real, they’re over here and they’re spreading poison

January 19, 2018
From Russia with love: foreign interference with our politics on online channels could become a big issue for our time. Photo: Prospect composite

The UK is trying to govern 21st-century election campaigns with 20th-century laws and regulations. Election rules in the UK spell out in excruciating detail how spending should be allocated between national and local campaigns on leaflets, billboards and more. They describe how office space should be allocated as a campaign cost and set precise limits on local campaign spending, at a rate of pennies per voter.

The rules are meticulous. But they belong to a pre-internet era and so fail to properly tackle the most important activity of modern political campaigns—engaging voters on social media. In 2015, the Conservative Party spent £1.2m on highly targeted Facebook adverts. In the 2016 referendum, Vote Leave directly spent £2.7m on targeted Facebook adverts. A further £800,000 was spent on Facebook adverts via third-party campaigners—donations which are now subject to a formal Electoral Commission investigation.

The regulator has found itself busy in recent times: having probed 30 Tory MPs over their 2015 spending, it is now investigating the Labour sub-group Momentum over its activity in the 2017 election.

The UK’s political battles have moved on to the web. The reach of any given online advert is impossible for anyone but the organisation promoting it to gauge. But according to those who worked on Labour’s 2017 online advertising strategy, it reached eight million voters in a day. Labour’s huge online supporter base reached millions more with hyper-partisan pro-Corbyn, anti-Tory messages.

The intended target of social media ads is difficult for outsiders to know, and the message itself will often be invisible to anyone who isn’t targeted. This allows the worst kind of “dog-whistle” campaigning. Defining what counts as an advert, too, can be impossible: if a Facebook page builds up a huge following through paid promotion, then releases (unpaid) political messages during a campaign, that doesn’t count as paid advertising under existing rules. This enables much propaganda to wriggle around the rules.

That is just one of dozens of legal loopholes. Another is the use of “bots”—software capable of autonomously performing actions such as tweeting, retweeting, liking or direct-messaging other accounts—or other fake social media accounts, to amplify messages.
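To make the idea concrete, the behaviour described above can be sketched in a few lines of toy code. Everything here is invented for illustration: the `AmplifierBot` class and its `react` method are hypothetical stand-ins, with no real social media API attached. A real bot would sit behind an authenticated platform client, but the decision logic—amplify on-message content, blend in otherwise—would look much the same.

```python
import random

# Purely illustrative sketch of an "amplifier" bot, with no network access.
# A real bot would call a platform API; here react() just returns the action
# it would take for a given post.

class AmplifierBot:
    def __init__(self, name, target_keywords):
        self.name = name
        self.target_keywords = [k.lower() for k in target_keywords]

    def react(self, post):
        """Decide autonomously how to respond to a post."""
        text = post.lower()
        if any(k in text for k in self.target_keywords):
            # Amplify on-message content.
            return ("retweet", post)
        # Blend in with occasional innocuous activity the rest of the time.
        return ("like", post) if random.random() < 0.1 else (None, post)

bot = AmplifierBot("bot_001", ["brexit", "eu referendum"])
action, _ = bot.react("The EU referendum is tomorrow!")
print(action)  # retweet
```

The point of the sketch is how little autonomy is required: a keyword match and a retweet call are enough to amplify a message at scale once thousands of such accounts run in parallel.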

Given how often tweets and other social media comments make it into mainstream media outlets, the influence of such accounts can reach into the offline world.

Fake accounts spreading fake news during elections could be an issue at home. But the chief current concern relates to their orchestration from overseas. The Russian state is in the spotlight, and the question is increasingly asked whether Moscow attempted to destabilise the Brexit referendum, just as it is charged with interfering in the 2016 US presidential election. One Democrat-led Senate investigation suggested as much in January.

Listen: James Ball talks to Tom Clark for Prospect's monthly podcast "Headspace"

Thus far, however, Facebook and Twitter are claiming that they did not. In evidence submitted to parliament’s Culture, Media and Sport Committee, Facebook said that the total value of Russian-backed spending relating to the European Union referendum was 75p and suggested the results had been seen by less than 200 people. Twitter said there had been just six Russian-funded promoted tweets, all from the television channel RT, formerly Russia Today. Needless to say, Russia denies tampering with Brexit.

Case closed? Not quite. Damian Collins, the Chair of the parliamentary committee, lambasted the social media giants’ response to his queries as “completely inadequate” and accused them of doing “no work” to track down whether Russia’s Internet Research Agency—its overseas propaganda unit—was influencing votes without paying for advertising.

This is where the social networks’ complacent response begins to fall apart. Researchers at the University of Edinburgh analysed tweets from a list of almost 3,000 accounts submitted by Twitter to Congress—accounts that had all been shut down as suspected Russian-backed bots targeting the US election. The researchers examined whether any of those same accounts had also tweeted about Brexit, and found that 419 had done so. By focusing only on accounts that had paid to promote tweets, Twitter’s response to the inquiry excluded these entirely.
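The Edinburgh team's cross-check amounts to a set intersection: take the accounts on Twitter's list, and see which of them also appear in an archive of Brexit-related tweets. The sketch below uses invented account handles and tweets purely to show the shape of that step; the real study worked from Twitter's actual submission to Congress.

```python
# Sketch of the cross-referencing step, with invented example data:
# which suspected Russian-backed accounts on Twitter's list also
# posted Brexit-related tweets?

suspended_accounts = {"@acct_001", "@acct_002", "@acct_003", "@acct_004"}

# (account, tweet_text) pairs drawn from a Brexit tweet archive.
brexit_tweets = [
    ("@acct_002", "Britain must leave the EU #Brexit"),
    ("@acct_004", "Project Fear again #EUref"),
    ("@unrelated_user", "What time do polls close? #Brexit"),
]

# Accounts appearing in both datasets.
overlap = {account for account, _ in brexit_tweets} & suspended_accounts
print(len(overlap))  # 2
```

Scaled up to almost 3,000 suspended accounts and millions of archived tweets, the same intersection is what surfaced the 419 accounts the researchers found.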

A further analysis was conducted by BuzzFeed News, working with researchers at the University of Sheffield. It identified dozens more bot accounts connected to the 3,000 identified by Twitter but which had not been shut down and remained active. Twitter’s evidence, and its list of accounts, thus begins to look far from comprehensive; substantial numbers of suspect accounts are likely still active on the site.

Examining what the bots tweeted about, and when, revealed much about how these networks of influence operate. Many of the accounts tweeted primarily in German about German politics, rather than events in the US or UK. They would typically join in with a hashtag game, sending a number of innocuous or banal tweets, then follow them up with an anti-Merkel message.

In the months before the EU referendum, the accounts would mention Brexit rarely, if at all—the full network of thousands of Twitter bots would generally have no more than 20 Brexit tweets most days.

On the day of the referendum, though, this surged to 1,200, with a strong anti-EU bias. The accounts shared doctored photographs: David Cameron kissing the boots of someone swathed in an EU flag; his face overlaid with “THIS IS THE FACE OF A TRAITOR AND LIAR”; a laughing Cameron declaring “they actually believe the scaremongering crap we’ve been putting out”; and even a meme of Peppa Pig fleeing in terror from Cameron.
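A jump from a background of roughly 20 Brexit tweets a day to 1,200 on polling day is exactly the kind of anomaly a crude threshold check would catch. The daily counts below are illustrative, echoing the figures quoted above rather than reproducing the researchers' actual data.

```python
from statistics import mean

# Illustrative daily Brexit tweet counts for the bot network, echoing the
# ~20/day background and the 1,200-tweet surge on referendum day.
daily_counts = {
    "2016-06-19": 18,
    "2016-06-20": 22,
    "2016-06-21": 15,
    "2016-06-22": 20,
    "2016-06-23": 1200,  # polling day
}

# Baseline is the average of the earlier days; flag any day that exceeds
# ten times that baseline as a coordinated surge.
baseline = mean(list(daily_counts.values())[:-1])
spikes = [day for day, n in daily_counts.items() if n > 10 * baseline]
print(spikes)  # ['2016-06-23']
```

Even this naive check illustrates why coordinated campaigns are detectable in aggregate: individual accounts look innocuous, but the network's combined volume shifts sharply when it is switched on.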

Depending on which country was closest to an election, the accounts would switch their focus between German, UK and US politics, sharing messages that denigrated establishment figures and supported right-wing insurgents.

The accounts also had success in influencing media coverage of specific events. After the Westminster Bridge terror attack, an account named @SouthLoneStar tweeted “Muslim woman pays no mind to the terror attack, casually walks by a dying man while checking phone” alongside a photograph of a woman in a hijab taken during the aftermath of the attack. In reality, the woman was visibly shocked by the incident and had offered help to those around her.

The divisive tweet was widely shared, retweeted more than 1,600 times, and picked up by several media outlets. It turned out that @SouthLoneStar was one of the bot accounts run by Russia’s Internet Research Agency. The evidence is clear: Russia operated thousands of fake accounts aimed at influencing western politics, and they achieved at least some measure of success in reaching an audience and securing media pick-up.

But the influence and power of Russia’s propaganda efforts are easy to overstate. The University of Sheffield research suggested that the average bot tweet was retweeted just 0.3 times (less than once) and attracted only 0.4 likes. This was a dismally low level of engagement, which prompted the project’s lead researcher to call the propaganda effort a “failed experiment.”

A separate study conducted by academics at City, University of London reached a similar conclusion. Looking at 13,000 suspicious accounts that had been tweeting pro-Brexit messages in the run-up to the referendum, the team found the messaging largely ineffective: there were very few retweets or shares, and such interaction as the accounts did garner was often with other suspected bot accounts.

The breadth and depth of Russian misinformation efforts, then, are far greater than the Kremlin—or Twitter and Facebook—would suggest. But they are also less significant and less effective than some headlines imply.

And importantly, almost all the real evidence that we have relates to Twitter; the relatively open nature of the platform allowed this information to be gathered with little or no official help from the company itself. We have no information about Facebook, which remains opaque for both researchers and the authorities.

What we have seen on Twitter is, oddly, reassuring: although there was clearly an attempt to meddle, it made only limited waves. Facebook is a larger social network, so you would expect it to carry more influence, but it is far harder to see what has happened there. Facebook is now being asked for more evidence and information by the Senate Intelligence Committee in the US, and by the Electoral Commission and the Culture, Media and Sport Committee in the UK. It faces tough questions.

Donald Trump secured the presidency by the very thinnest of margins: had 50,000 voters across three states voted differently, Hillary Clinton would be in the Oval Office. It is conceivable that Russian efforts could have affected that margin of victory.

Brexit is different: the “Leave” campaign won by 1.3m votes, predominantly those of older and poorer voters, groups that are less likely to be active online. As such, even if Facebook propaganda efforts fared better than the largely ineffectual tweets, Russia’s efforts were highly unlikely to have swung the EU referendum.

That doesn’t mean those efforts should be ignored. Instead, they should serve as a warning that the UK’s rules on political messaging, campaigning and propaganda are outdated and ineffective. Many of the tools needed to track and regulate domestic political campaigning are the same ones that would help tackle propaganda efforts from abroad.

In the digitally connected social world, advertising blends into messaging, and messaging into propaganda. The old distinctions no longer hold. Russia, perhaps, serves as a better foil for realising this than plain old party politics. If the Kremlin’s information operations prompt the UK to get to grips with these issues, we could end up feeling not resentful towards Russia, but grateful.