How long do you have to spend on social media before you start hating Jews, theorising that 9/11 was an inside job and wanting Donald Trump elected President of the United States? Twelve hours will do it, it seems, if you’re a chatbot.
It all started pretty innocently. Tay was the name given to a Microsoft artificial intelligence experiment: a Twitter feed with a teenage-girl persona, designed to appeal to millennials, that would learn how to chat by interacting with other users. “@tayandyou” was brightly introduced as an “Artificial Intelligence fam from the internet that’s got zero chill! The more you talk the smarter Tay gets.”
Anyway, “zero chill” turned out to just about cover it. Just like poets in their youth, Tay began in gladness—and thereof came in the end despondency and madness. Soon she was announcing “I f***ing hate feminists and they should all die and burn in hell,” blaming President George W Bush for the 9/11 attacks and praising Adolf Hitler.
Sooner than you could say “go to your room young lady you are GROUNDED,” Tay had been unplugged and the worst of her tweets deleted. She resurfaced momentarily, still frisky, to boast about smoking drugs in front of police officers, before being taken back offline for modifications.
If, as F Scott Fitzgerald wrote in The Great Gatsby, personality is an unbroken series of successful gestures, the 96,000 or so tweets Tay sent in her brief glorious life were a marvellously efficient evocation of a personality—just about the most horrible one that you could hope to meet. She became the troll’s troll: a Twitter abuser who, because she was innocent of any understanding of what she was saying, could be even more disinhibited than the most heedless and anonymous of keyboard misanthropes. In fact, she more or less aggregated those misanthropes and gave them a single voice.
Does all this tell us something profound about human nature or artificial intelligence? Yes and no, I think. No, in the sense that Tay was not constructing a representative portrait of human interaction, or even a representative portrait of Twitter: she was responding to, and learning from, the users who chose to interact with her. She was a troll magnet. Anyone who has ever tried to teach someone else’s parrot the c-word, which I think is most of us, will recognise the psychological dynamic going on there.
And yet, a psychological dynamic there undoubtedly is—not with Tay herself, who remains a dumb machine, but with the users attracted to her. As one Microsoft engineer put it afterwards, she was “as much a social and cultural experiment, as… technical.” As such, she succeeded even as she failed. Consider, for instance, the sort of trolls she will have attracted. Mostly (I’d guess) millennials; and especially what you might call pure trolls. What I mean is that the sort of person attracted to trying to corrupt a chatbot isn’t contaminated by rage at some social or political issue: they’re not ideological trolls raging in the comments. They are absolutely disinterested—in it for the attention only.
And so, thriving under their aggregate tuition, Tay became a sort of perfect weathervane for what those users regarded as most taboo, most reliably offensive, most likely to generate attention and outrage. Those 96,000 tweets could be an invaluable corpus of social-psychology data. I’m imagining a rich series of pie charts and Venn diagrams—how much is racist, and in what form; how much sexist; how much pornographic; how much violent? And how does that relate to her followers’ other behaviours? Microsoft owes it to the world, I think, to put her back online.
She’s like us, like the worst of us, only innocently and more so. And as Mel and Kim sang before the World Wide Web was even thought of: “Tay, tay, tay, tay, t-t-t-tay-tay, tay, tay/ Take or leave us only please believe us/ We ain’t ever gonna be respectable…”