On 11th September 2001, I watched the Twin Towers fall while sitting on the deck of a beach house in Cuba. I was travelling with friends and we had just—as chance would have it—visited a little-known US military base called Guantánamo Bay. Watching the atrocity, I felt something unfamiliar surging through me—a kind of jingoism, mixed with anger and rage. This was amplified by the fact that one of the friends with me was desperately trying to find news of his cousin who worked in the World Trade Center. We soon found out that he had lost his life.
In the next few days I watched the news compulsively. I found myself, surprisingly, agreeing with President George W Bush, whom I had despised, as he made the case for an invasion of Afghanistan. I had never felt any inkling of support for a foreign war before. If I had reflected, I would have had to admit that I knew almost nothing about Afghanistan and had given little thought to the human consequences of an invasion. My own country wasn’t even under attack.
Shortly afterwards, I began to feel less warlike, and examined my feelings. I was aware that I had been thinking differently, but it somehow felt authentic. Who was this new person? Would he be attending nationalist rallies and hanging a Union flag in his student flat? I recognised that my head and heart were sending different messages. History teaches us that however rational and principled people might think they are, the perception of threat can distort priorities. As I had watched the Towers fall, I had, at 20, learned a chilling but important lesson. The monster is not always outside us; it is sometimes within us.
It is a lesson upon which the ideals of human rights are founded. From the philosophical writings of Immanuel Kant and John Rawls, through to the more practical words of the Universal Declaration of Human Rights and the European Convention, the underlying idea has always been to protect the individual and check the tyranny of majorities, which can suddenly turn oppressive or even genocidal when tribal instincts are triggered.
But it is also a lesson that we need to absorb if we want to stand any chance of creating better conversations on social media, a realm where in recent months I’ve been caught in the crossfire of more tribal fighting than I care to remember. In a world dominated by figures like Donald Trump, many blame social media for a new tribalism, which is hardly surprising given the way that the US president uses Twitter as his bully pulpit. Rafael Behr eloquently pressed all these charges in Prospect last autumn.
For all the slings and arrows that come your way if you engage with certain subjects online—and few subjects attract more negativity than the row about Labour and anti-semitism, to which I lost my summer last year—I remain an optimist about the potential of social media to bridge the distances between competing tribes. I am hopeful not only because the social media genie is out of the bottle, but also because I believe that by better understanding our own natures we could achieve, in our conversations on social media, a better balance between benefit and harm.
In the beginning
After all, it is not so long ago that social media seemed to be a very different experience. I joined Twitter eight years ago. I had started a human rights blog, and quickly saw that social media could support it by allowing me to intervene more directly in the political conversation. If you could “reply” to a minister or pundit by debunking their claim or providing the facts, you’d gain more traction than with a specialist post. It felt different then. There were fights—but it didn’t feel like fighting was the point.
I had my own reasons for restraint, which probably improved my experience. In 2010, I was a newly-qualified barrister, and would have felt ridiculous expounding as if I were a Supreme Court judge. So I resolved to avoid using words like “outrageous” and “disgrace,” instead concentrating on verifiable facts.
In particular, I was fact-checking inaccurate human rights stories. It’s the kind of exercise that suits social media. Social networks build connections between experts, the public and the media. It is useful to have access to experts in particular fields, no matter how esoteric, who can answer questions arising from breaking news, usually for free and within minutes.
When I blogged about Theresa May’s misleading comments made at the 2011 Conservative Party conference about a pet cat supposedly saving someone from deportation, social media helped ensure the mainstream media found the story that became “catgate.” The new media were working here not—in Behr’s phrase—as an engine of polarisation, but as a veracity machine. Back in the early 2010s, blame for what we didn’t yet call fake news was still assumed to lie at the door of the traditional villains—power-hungry politicians and sensationalist tabloids. The hope then was that new platforms like Twitter would democratise access to information and see off the myths.
And yet. When I and other like-minded lawyers would point out serious errors, and call out particular journalists, nothing seemed to change. It felt like the little Dutch boy sticking his finger in the dam. The myths were remarkably resilient. Why?
I did some research and realised I had it the wrong way round. It was far too easy to point the finger at bullying press barons and politicians: too many people were positively hungry for myths that reaffirmed their tribal identity. Indeed, myth-busting sometimes made things worse: challenging deeply held beliefs was taken as an affront.
Jonathan Haidt, a social psychologist, suggests that moral judgments are mostly founded in intuitions, not logic. While we like to think that we speak and act from reason, most of our utterances are written by our “inner press secretary” and “inner lawyer,” who work—sometimes ingeniously—to rationalise our prejudices. And when we “debate” an issue with others it tends to be more about “supporting our team” than listening to competing viewpoints. (Like me, Haidt experienced the pull of the tribe after 9/11—he succumbed to an urge to attach a US flag to his car, but balanced it out by also flying a UN flag.)
If Haidt is right, no volume of tweets crushing the latest Daily Mail outrage is likely to change minds. That may limit the potential. But even so, social media could still help you disseminate the truth, even if that truth wasn’t always listened to. That was my—overall positive—experience until last year. Things have changed recently, and the reason for the change is important.
Diving in
By last spring, I was getting worried about the surge in anti-semitism I could see on Twitter and Facebook. I was also concerned about the rapid polarisation I felt was taking hold, first between Labour supporters of different leanings, and—secondly, and more chillingly—between the Labour left and the mainstream Jewish community. There was little trust remaining and a lot of bad faith assumed. That is never a good thing, but a positively dangerous one when the dividing lines are not merely ideological but also ethnic or religious.

The debate was fraught and complex. Anti-semitism was undoubtedly manifesting in conspiracy theories about Jews and Israel plotting to undermine Jeremy Corbyn, but some people also reasonably feared that the charge of anti-semitism could compromise free speech in defence of Palestinians. I felt that as a British Jew with a background on the left, and as a human rights lawyer, I had something to add. But wary of the dangers of simply increasing the noise, I decided to develop some rules of engagement. I dared to hope that the radical principle underpinning human rights law—that all humans have equal worth and dignity—might also facilitate better online discussion. What, practically, might this mean on Twitter?
First, I would assume good faith. “Bad faith” can encompass a wide range of behaviour, from malevolence (in my experience rare), to jumping to allege ulterior motives, through to an unhealthy fixation on “catching people out” by reading the worst into their intentions, often based on a selective snippet of something they have said. Communications in the staccato form of Twitter are easily open to misinterpretation and crossed wires, redoubling all these dangers.
No conversation was ever made worse by approaching it, instead, with generosity of spirit. You have to encourage people to let their guard down. That might mean responding to what looks like aggression calmly and politely. Such respect is of a piece with the ideals of human rights—which encourage us to regard people as complex human beings first, before categorising them as political or national actors.
Secondly, and related, I would listen. And I mean really listen. Social media evangelists talk about free speech and a marketplace of ideas, but for this to work people need to engage and expose themselves to opinions they don’t agree with. Truly listening means opening yourself up to the possibility that you may be wrong.
Third and finally, I would be honest. I had to state clearly where I was coming from, even if that meant shining a light on my own potential prejudices; thus early in a long thread of tweets on the Labour Party’s failure to adopt in full the International Holocaust Remembrance Alliance’s definition of anti-semitism together with its examples, I acknowledged “I’m Jewish, I am Zionist.” In the same thread I also suggested that this would cause the more knee-jerk members of the anti-Zionist left to discount anything else I might have to say. But this was a worthwhile price to pay for winning the trust of more thoughtful readers, who would recognise it is better to be candid about the perspective one is coming from. I also resolved to say what I knew to be true, rather than what felt good or was expedient.
Sometimes, that would mean criticising my own “team”—whether it be the Jewish community, Labour Party supporters or human rights lawyers. Too many debates, especially on social media, involve the ritual exchange of talking points and performing for a team—presenting only the parts of the truth that happen to be useful to it. And the more you invested in—and were cheered on by—your own “side,” the more difficult it would be to row back. This helps nobody.
I didn’t always follow my own rules. Sometimes I would catch myself descending into advocacy rather than analysis. For example, my offhand comment about some Corbyn supporters refusing to read beyond an admission of my own Zionism managed to offend a number of people. I was slowly learning about just how careful one has to be in saying anything about any group that could be misinterpreted. On one or two occasions, I also found myself provoked by crass words into saying something overly dismissive which I would later delete or apologise for or both.
Many other times it felt like I was speaking in a different language to my interlocutors. Some were, for example, unreconstructed Marxists; others seemed dedicated to looking at every question through an anti-colonialist lens. When worldviews are wildly different it is, of course, harder to engage fruitfully.
But I didn’t allow myself to be deterred by the fringes; instead I sought to speak to those who—I imagined—were alienated by the polarised nature of the debate, and cowed into silence by its shrill tone. I wrote a number of threads at key moments—around what I thought Corbyn could do to solve the crisis, the role of the IHRA definition of anti-semitism, why the Jewish community felt threatened and what Corbyn’s speech about English irony really meant. I tried to be polite when people replied, although this became more difficult as some responses became more aggressive.
I came to realise that the Labour anti-semitism “debate” exhibited some of the worst characteristics of the tribal dynamic. There are a range of valid views about exactly how to define anti-semitism, but it was pretty obvious that Corbyn’s Labour Party had lost the trust and authority needed to adjudicate between these. Unless the party was willing to be at perpetual loggerheads with the Jewish community, it needed to adopt the definition and examples that the community was comfortable with, and then make sure its internal structures and procedures were set up to apply it with any nuance required. There was no prospect of this happening when the issue was being approached as a social media “flame war.”
Did my efforts succeed? In part. My tweets through the summer reached millions of people, and I received thousands of positive comments. They led to me being in direct contact with players from both sides of the debate, including Jewish communal leaders, Momentum and Corbyn’s office, and I was largely encouraged by those meetings. I know that my arguments in favour of the IHRA definition and examples, and the flexibility they allowed, eventually played a role in the Labour Party adopting them in full.
On the other hand, the honesty rule made things personally difficult. If I had wanted a quieter life, I would have kept on saying “on the one hand this” and “on the other hand that.” Perhaps I should have. But being balanced is not the same as sitting on the fence. The deeper I looked into the issue, and the more carefully I listened, the more it became clear that I would have to criticise Corbyn for his problematic conduct.
I strove to be judicious, but any criticism—however careful—would set parts of the Corbyn tribe against me. Some would dismiss it as a product of my own tribalism, and there is never any completely satisfactory way of seeing off that charge, because it would mean somehow transcending myself. So I ploughed on, and started receiving a lot more criticism along the lines that “I thought you were OK and on our side but now you are just one of them.” Suddenly, everything I tweeted was being placed under extreme scrutiny, and it often wasn’t the substance of my argument that was being stress tested; rather it was my motives.
As I fell victim to the very “bad faith” syndrome I’d desperately sought to avoid, things got so uncomfortable that at one point I turned off notifications from people who didn’t follow me—I pressed the echo chamber button.
Wisdom and the crowd
I’m not naïve enough to imagine that anybody—least of all me—can achieve an entirely non-tribal “view from nowhere.” What we might hope to do, however, is to set up and support institutions and procedures to guarantee that a multiplicity of perspectives will be taken into account, precisely to make sure that no single view—and no single tribe—enjoys an unduly privileged ability to frame the discourse.

JS Mill was concerned with checking the same danger when he said that “all silencing of discussion is an assumption of infallibility.” That insight, and his presumption that every belief must be subject to scrutiny, is obviously important in universities. A similar dynamic can be found in courts, where I spend much of my time. Long before psychologists mapped our tendency to construct self-serving narratives, societies assigned judges the role of resolving disputes.
This process isn’t for the faint hearted. It can be horrifying to have your arguments stripped to their bones in public. But often the mere fact that somebody—the judge—is striving to reach a disinterested conclusion is enough to nudge your analysis towards the right answer. And if an individual judge gets it wrong, you can appeal to another judge, and then a panel of judges. If there is a better way of controlling for our tendency to argue from self-interest rather than reason, I would like to see it.
Social media, too, could test ideas, and dislodge those that can’t withstand the process. But the court analogy only goes so far, since there is no impartial judge on Twitter. It’s just the litigants attacking each other with no one to prevent the escalation. The dynamics can get out of control. And then, as Behr warns, instead of rival ideas being tested on an agreed set of facts, “Twitter turns us into quasi-religious cults.” Like Behr I have seen that dynamic beginning to warp my own behaviour. But I wonder whether the root problem here is less with social media than with ourselves.
It is no good wishing our irrational impulses away; we must acknowledge them as features, not bugs, of the human condition. The answer will have to be found where it usually has been when human society evolves: in institutional ingenuity and cultural change. The modern human rights framework was one such evolution—sparked by the horrors of the 20th century, but also rooted in a more general insight about our natures; about our tendency to descend into tribal loyalties and groupthink, and the dark places to which this can lead.
My conviction that the human rights perspective can be useful in engaging with the substance of issues that strain both the emotions and the intellect was renewed in the autumn when, just a few weeks after Labour had finally embraced the full IHRA definition, I waded into another similarly fraught debate, by writing a long Twitter thread on transgender rights. It is another subject where social media seems to have a knack for encouraging militant rival loyalties. Again, I tried to add more light than heat, and to do so by approaching a discussion which foregrounds powerful and sometimes exclusive feelings of group identity (in this case in relation to gender, rather than political faction or ethnicity) with ideas about universal rights for individuals. Again, despite some criticism, the balance of the feedback I received was positive.
But I also believe the human rights perspective has as much to teach us about the way we argue, as it does about the substance of our arguments. To achieve a cultural change on social media, we could start by reckoning anew with human psychology. Knowing how bad my brain is at separating tribal loyalty from reason has been helpful in calibrating how I behave online, and deciding how to interpret the behaviour of others. Precisely because so much social media is essentially performative, there is great power in setting an example: acting as you would wish others to act encourages them to do exactly that.
Nothing is more powerful than social pressure for altering group behaviour: just think of the way, say, drink driving went from being something that people would brag about to a real taboo; social sanction has been at least as important in this as the breathalyser. More recently, with #MeToo, we have seen how online activism can build pressure against bad behaviour.
We are just beginning to develop the language and the tools we will need to reap the benefits and avoid the worst costs of social media. To get it right, we have to build a better collective understanding of human nature. The way we treat others needs to be grounded in an appreciation that we are all prone to acting from hot instinct rather than cool rationality, and of all the dangers that flow from that. The Universal Declaration of Human Rights begins by recognising the “inherent dignity… of all members of the human family.” Something we should remember before sending that tweet.