After two full decades of our data being harvested, our habits analysed by algorithms and our control ceded to machines, it is only in the last 12 months that we have finally started to see the very best—and worst—of a future entangled with technology. Artificial intelligence hype went into overdrive at the same time as fears about the algorithmic manipulation of politics and the smothering of privacy reached fever pitch.
The Cambridge Analytica scandal in March revealed how a private company had been using Facebook newsfeeds to spread misinformation, manipulate emotions and advantage one political candidate over another—and a highly controversial candidate at that. It was a wake-up call for two reasons. “Algorithm” used to be a word you’d associate with computer scientists or mathematicians like me. But here was a story that brought home, first, just how far algorithms have permeated the fragile structure of our society; and, second, just how dramatic the unintended consequences of a badly thought-through algorithm can be—when millions of people are under its spell.
Facebook wasn’t the only tech giant to be panned this year for failing to think through the implications of its inventions. Back in 2016, Amazon started selling Rekognition, a facial recognition tool, to police forces. It was the same technology used by Sky News during its coverage of the 2018 Royal Wedding to spot famous faces in the crowd, but when deployed by law enforcement in a city like Orlando, an early adopter of the tech, it could use the feed from a network of cameras to track an individual across the city.
This summer, the American Civil Liberties Union (ACLU) led two dozen civil rights groups in calling on Amazon to stop providing “surveillance systems” to the government. Cue everyone with an interest in Amazon’s future, from shareholders to employees, urging the tech giant to stay away from law enforcement.
There are serious concerns about this kind of technology—and it’s not just the question of snooping, it’s the question of how well it actually works. To illustrate: in July, the ACLU ran the software on the faces of all 535 members of Congress and checked them against a database of 25,000 criminal mugshots. If the algorithm were working perfectly, it would have uncovered the truth—that none of the members were in the criminal database. Instead, it found false matches for 28 of them.
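The arithmetic behind that test is worth spelling out: checking 535 faces against 25,000 mugshots means millions of candidate comparisons, so even a minuscule per-comparison false-positive rate piles up into real false matches. A minimal sketch of the sums (the headline figures come from the ACLU test described above; treating the search as pairwise comparisons is an illustrative assumption about how the matching works, not a claim about Rekognition’s internals):

```python
# Figures from the ACLU's July test of Amazon's Rekognition.
members = 535        # members of Congress whose photos were tested
mugshots = 25_000    # criminal mugshots in the comparison database
false_matches = 28   # members wrongly "found" in the database

# Assumption for illustration: every face is compared against every mugshot.
comparisons = members * mugshots  # 13,375,000 candidate comparisons

# Even a tiny error rate per comparison...
per_comparison_rate = false_matches / comparisons

# ...translates into a sizeable share of people falsely matched.
share_falsely_matched = false_matches / members

print(f"{comparisons:,} candidate comparisons")
print(f"false positives per comparison: {per_comparison_rate:.7f}")
print(f"share of members falsely matched: {share_falsely_matched:.1%}")
```

The point of the sketch is the asymmetry: an error rate that rounds to zero per comparison still falsely matched roughly one member of Congress in twenty, because the system makes so many comparisons.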
On our own shores too, there are similar tales. A facial recognition tool used by the police to search for wanted criminals at events like the Notting Hill Carnival has as much as a 98 per cent error rate. And yet, this summer, the Metropolitan Police Commissioner, Cressida Dick, said she was “completely comfortable” using such technology on our streets.
That kind of error rate might be tolerable when you’re spotting celebs at a wedding, but when an algorithm going awry can lead to someone’s arrest, we need to think very carefully about whether it’s a good idea to unleash it into the world.
There have been all sorts of other examples over the last 12 months of algorithms not working the way their designers envisaged. IBM’s Watson was found to be recommending “unsafe and incorrect” cancer treatments to patients. Uber’s self-driving car killed a pedestrian. An Amazon recruitment tool that penalised CVs containing the word “women’s” had to be abandoned.
There’s an important point in here about all this newfangled tech. Science fiction hasn’t become fact. Under the hood, AI isn’t a clever artificial being making perfect decisions. What we’re witnessing is not, in fact, a revolution in intelligence. And there’s certainly not been any revolution in wisdom—the values and judgments embedded in algorithms still grow out of the same old crooked timber of humanity. It is, instead, merely a revolution in computational statistics.
This assessment might sell far fewer home assistants and self-driving cars, but it will at least give us a better idea of how things stand. Because statistics is all about uncertainty. Nothing is ever guaranteed—there’s always the chance of error. And while that is the case, we’d do well to tread a bit more carefully with our tech as we enter 2019.