
Driverless cars: when a crash is unavoidable, who should they save?

The answers people give to this question can be both fascinating and disturbing
November 13, 2018

The other day, someone confidently told me that we were heading for a golden age of philosophy. With the growth of AI, he explained, there’s going to be a hell of a lot of work figuring out the ethical rules to govern machine behaviour.

Since it is something of a mantra for me that there is no algorithm for ethics, this filled me with despair. Whether I like it or not, though, we are going to have to devise ethical algorithms to serve as imperfect proxies for the messy moral judgments that until now it has fallen to humans to make.
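To see why any such proxy will be crude, it helps to sketch what one might look like. The toy Python below is entirely my own illustration, with hypothetical labels and scores that no manufacturer uses; it reduces a moral judgment to a weighted sum, which is precisely the flattening that the no-algorithm-for-ethics mantra warns against.

```python
# A deliberately crude sketch of an "ethical algorithm" proxy.
# Every label and score here is hypothetical, invented for illustration.

# Hypothetical priority scores: higher means "spare first".
PRIORITY = {
    "child": 3.0,
    "adult": 2.0,
    "dog": 1.0,
}

def choose_group_to_spare(group_a, group_b):
    """Given two groups of characters (lists of labels), return the
    group the vehicle spares. Ties go to the larger group."""
    score_a = sum(PRIORITY.get(c, 0.0) for c in group_a)
    score_b = sum(PRIORITY.get(c, 0.0) for c in group_b)
    if score_a != score_b:
        return group_a if score_a > score_b else group_b
    return group_a if len(group_a) >= len(group_b) else group_b

# Example dilemma: two children versus three adults.
print(choose_group_to_spare(["child", "child"], ["adult", "adult", "adult"]))
```

Everything contestable in the moral question has been buried in the numbers of the priority table, which is exactly where the difficulty lies.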

With this in mind, an international team of researchers has been gathering data from 233 countries and territories on what people believe the life-saving priorities of autonomous vehicles should be, in situations where some death is unavoidable. The decisions of over two million people were collected and analysed; the results make for fascinating and disturbing reading.
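How might millions of individual choices be boiled down to a single ranking? One naive method, assumed here purely for illustration (the researchers' actual analysis was statistically far more careful), is to score each character type by how often respondents chose to spare it:

```python
# A minimal sketch of aggregating binary dilemma choices into a ranking.
# The method (simple "spare rate" per character type) is my assumption,
# not the study's actual statistical model.

from collections import defaultdict

def rank_characters(responses):
    """responses: iterable of (spared_type, sacrificed_type) pairs,
    one per dilemma decision. Returns character types sorted by the
    fraction of appearances in which they were spared."""
    spared = defaultdict(int)
    appeared = defaultdict(int)
    for winner, loser in responses:
        spared[winner] += 1
        appeared[winner] += 1
        appeared[loser] += 1
    return sorted(appeared, key=lambda c: spared[c] / appeared[c], reverse=True)

# Toy data: each tuple records whom one respondent chose to spare.
toy = [("child", "adult"), ("child", "dog"), ("adult", "dog"), ("dog", "criminal")]
print(rank_characters(toy))  # ['child', 'adult', 'dog', 'criminal']
```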

There are few surprises at the top of the list of people whose lives should be prioritised: babies, children and pregnant women. More worrying is that the lives of athletic men and women are valued more highly than those of their overweight counterparts. The homeless also count for less than executives but more than old people. The least-valued humans are criminals, who come after dogs in the list of the world’s priorities, with only cats more dispensable.

What to make of this? The pessimistic response is that it just confirms that moral judgments are based not on thought-through principles but on knee-jerk reactions. Worse, these reactions betray morally indefensible prejudice. Nor can we hope that philosophy can help us do better because, as the work of Joshua Greene on the psychology of moral choice shows, even philosophers are often just rationalising their gut reactions.

Perhaps, however, this verdict is itself too hasty. Yes, our moral judgments are fast, hot and emotional. But the emotions are not raw: they are conditioned by the values of the societies in which the respondents live. Out of the 117 countries that can be compared, France demonstrates its commitment to égalité by ranking fourth for choices that maximise the number of lives saved, irrespective of whose lives they are. By contrast, Japan comes last on this measure. Although that would appear to confirm its reputation as a hierarchical nation, the way the Japanese differentiate here has little to do with social standing: Japan ranks only 86th in preference for saving the lives of the higher status, much lower than the UK, which sits around halfway on that table. The UK shows a strong desire to see the survival of the physically fittest, ranking 10th compared to Japan’s 93rd and Nigeria’s 117th. However, these last two nations differ markedly on preferential treatment for the lawful, with Japan ranked fourth and Nigeria 102nd.

It is culture, not biology, that makes the average French person value all lives more equally and leads the Japanese to make that value depend on how people behave. And these values can in turn be shaped by the philosophical and theological ideas that hold sway. The experiment is therefore not bad news for the idea of morality per se, only for a particular idea of morality that we have too long mistaken for an unshifting common sense.

It was always naïve to believe that moral decisions are made rationally, at the point of decision. As both Confucius and Aristotle realised, we are creatures of habit, and the way to make better moral choices is to cultivate certain attitudes and practices so that our “knee-jerk” reactions are correct.

A second, related mistake is perhaps more important. We think of ourselves as autonomous decision-makers, with ethical reasoning that is essentially personal. But morality is all about how we co-operate, and our values are largely determined by culture. Rational deliberations about which values we should hold in highest esteem should therefore be social, not private.

Far from burying the idea of morality, then, this research helpfully reframes it as a social phenomenon. We need to think collectively about which values our society is going to prioritise, not only in how we program AI but in how we shape public institutions. And if we make the right choices in our ground rules, fashioning society in the right way, our “gut” reactions will more often than not be attuned to what is right.