Imagine you could experience the life of every person who has ever—and will ever—live. You are born 300,000 years ago in Africa and see out a (probably short) existence as the first creature who counts as human. When you die, you are reincarnated as the second-ever human, then the third-ever human, and so on, until you reach the present day. You would, by 2022, have lived for around four trillion years. Yet even if humanity only lasts for as long as the average mammalian species, 99.5 per cent of your life could still be ahead of you. From the present day you steam onwards through the cycle of rebirth, until the totality of your life comprises every human experience, past, present and future, combined.
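The arithmetic behind those two figures is easy to reproduce. Here is a rough back-of-envelope sketch in Python; the inputs (around 100 billion people ever born, a 40-year historical average lifespan, a million-year species lifetime, 100 million births a year) are illustrative assumptions of mine rather than numbers taken from the book.

```python
# Back-of-envelope check on the review's figures. All inputs are
# rough illustrative assumptions, not numbers from the book itself.

PAST_HUMANS = 100e9       # ~100bn people ever born (a common demographic estimate)
AVG_PAST_LIFESPAN = 40    # years; the historical average is dragged down by infant mortality

past_life_years = PAST_HUMANS * AVG_PAST_LIFESPAN
print(f"Serial life lived so far: {past_life_years:.1e} years")  # ~4e12, i.e. ~4 trillion

# If humanity lasts as long as a typical mammalian species (~1m years in total),
# roughly 700,000 years remain. Assume ~100m births a year and 70-year lives.
YEARS_REMAINING = 700_000
BIRTHS_PER_YEAR = 100e6
AVG_FUTURE_LIFESPAN = 70

future_life_years = YEARS_REMAINING * BIRTHS_PER_YEAR * AVG_FUTURE_LIFESPAN
share_ahead = future_life_years / (future_life_years + past_life_years)
print(f"Share of that life still ahead: {share_ahead:.1%}")  # ~99.9%, comfortably above 99.5%
```

On these inputs the share still to come is around 99.9 per cent; even much more pessimistic assumptions about future birth rates leave it above the 99.5 per cent quoted above.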
The future is enormous. If you were to experience all of it, what choices would you want humanity to make? Would you be satisfied with our current trajectory, or would you hope for a fundamental rethink of the way society is structured?
This is the riveting thought experiment that begins the new book from William MacAskill, a 35-year-old philosopher based at Oxford’s Global Priorities Institute (and one of Prospect’s World’s Top Thinkers of 2022). MacAskill is best known as a proponent of “effective altruism”, which asks how we can maximise the amount of good we do in the world and makes the case for better-targeted charitable giving. For him, morality is about empathy—putting ourselves in another’s, or in this case everyone’s, shoes. The thought experiment is designed to introduce us to the idea—MacAskill hopes it will become a movement—of “longtermism”, which says that we have neglected to pay enough attention to the wellbeing of our descendants and should recalibrate our moral and societal choices to prioritise their welfare.
It is an idea that chimes with the zeitgeist: MacAskill warns us of the very real danger that climate meltdown caused by fossil fuel use will render the world uninhabitable for future generations. But in building a philosophical scaffolding for the longtermist position—and in following the logic of its arguments, no matter how counterintuitive their conclusions—there is nothing familiar or predictable about this book. It challenges readers to reorganise their whole conceptual schema. It is part technical philosophy, part science fiction and part rallying cry. It also frames arguments at the largest possible scale: MacAskill is thinking less about our children and grandchildren than millions, or hundreds of millions, of years into the future. Do we have an obligation to individuals who might live then, and if so, what does that look like?
The first step in the philosophical argument is to establish that the wellbeing of future generations is a good thing. To do this MacAskill draws on a familiar line of thinking in effective altruism, whose proponents—most notably the Australian philosopher Peter Singer—have long argued that distance in space should be no excuse for tolerating the suffering of other people: just because someone lives on the other side of the world does not make their pain any less real, and we should feel as motivated to alleviate it as we would with suffering closer to home. The same principle applies, says MacAskill, with time. Just because someone lives in the far future is no reason to ignore their interests: their pain and joy will one day be as real as ours now. And there could be many, many more of them: if humanity spreads across the universe, eventually there could be trillions more humans, all of whose welfare could be impacted by the choices that we make now. How should we act in light of that knowledge?
For starters, we should agree that apocalyptic events are best avoided. A nuclear war between Russia and the US today could write future generations out of the script before they even get a look-in—a moral evil under any rational view. This may sound flippant, but it is a good illustration of the capacity that we have to exert immense influence on future generations: they may not be able to impact us, but we can sure as hell impact them—an inherent asymmetry.
A crucial plank of MacAskill’s argument is that, right now, we are in a uniquely strong position to alter humanity’s future course. To capture the idea, he uses the metaphor of glassblowing. In the present, the glass is still molten—comparatively speaking, we are in the early days of civilisation and we can still decide the shape it will eventually take. Competing moral worldviews are currently fighting it out for supremacy. It is incumbent on us to act before the glass hardens—before what MacAskill calls “value lock-in”. At some point, he says, the principles now in flux could settle, with certain values cementing themselves and shaping the future indefinitely. The moral contours of human existence would then be set.
MacAskill’s next move is a novel one—to me at least. Those of us alive today also have a special obligation, he explains, because we are living through a brief moment when we can influence the rest of present-day civilisation. That wasn’t the case in the past. A Roman had no easy way to convey information to the Han dynasty of China; the two civilisations lived in virtual independence from one another. Today, communications technology means that dialogue is possible between distant societies: an idea from one continent can rapidly gain traction in another. But we may yet lose that ability. If we spread out across the cosmos, we will face new physical barriers to communication—and if we spread far enough it will become impossible for one human settlement to contact another in a faraway galaxy.
MacAskill is also deeply anxious about AI, which some think could one day surpass us in intelligence: the initial ground rules we lay down for how AI works could be vital if, once it hits a certain level of sophistication, it runs away from further human control. Whether you think MacAskill’s line of thinking here is bonkers or inspired is a matter of taste—I’m inclined to think it’s the latter: we must act now to set a proper moral course.
But a big philosophical challenge in all this is that there is no saying whether our own, present-day values are the right ones. What if we fight to lock in principles that turn out to be profoundly flawed? We are inherently inclined to believe that our moral views are right, but history—the obvious example being the slave trade—shows that we are often mistaken. This is therefore also a moment of high jeopardy. It isn’t enough to wake up to the risk of value lock-in; we somehow have to future-proof our worldview in a way that no civilisation has thus far managed.
I think MacAskill is on the right lines in his response to this challenge. The key is to look beyond our attitudes towards any specific issue and focus instead on meta-values: broader principles and dispositions that can help us create a society which is as capable of moral advancement as possible. Rather than betting the house on the rightness or wrongness of a particular political, sexual or religious morality, this would be a society that put a high premium on a sense of moral exploration, enabling the contest between different opinions to play out. We should seek to encourage reason, reflection and empathy, such that over time the best views—or at least non-abhorrent ones—are more likely to prevail. By the time value lock-in occurs, the damage it would do to future civilisation might not be so great.
This is what I take to be the rough framework running through MacAskill’s book, but there is so much else of interest here. In terms of influences, the book hovers somewhere between the work of John Rawls (whose A Theory of Justice used a thought experiment—the veil of ignorance—with even more intuitive force than MacAskill’s) and that of Singer and the late Derek Parfit.
The last of these three is brought in for a wonderful section on population ethics, the field he pioneered and one full of head-spinning ideas. Example: because the timing of conception is so exact, and the slightest delay means that a different sperm will fertilise the egg, the minor choices we make can potentially impact not just what a person’s life is like but which person is born at all. Indeed, by knocking a handful of people’s schedules out of whack, a seemingly innocuous action done right now can, over the course of history, wipe out billions of possible people and create billions of new ones. This kind of problem is bamboozling in the abstract, but it only serves to bolster MacAskill’s argument that what we do now could have an inordinate impact on the future—the butterfly effect writ large.
So what are the central objections to longtermism? One important counterargument—familiar from criticism of effective altruism—is that proximity to someone, whether in space or time, is in fact a reason to focus our efforts on them. We are better placed to help a friend or family member: a close relationship makes it easier both to judge what a person needs and to provide it. There is also something to be said for loyalty as a moral good in itself, along with the practical advantages it brings to a society if we feel that we can depend on those around us. Clearly this position chimes better with the views of the typical person.
But few people think we should discount the wellbeing of present-day strangers entirely. Most of us accept that there is some role for international aid to alleviate suffering, for example in the wake of a natural disaster. If we accept a measure of responsibility for strangers now, why not in the future? And MacAskill is at pains to avoid arguing that longtermism is the only moral imperative—he simply thinks it is one that we have undervalued.
There is another counterargument that I would have liked to see MacAskill spend more time rebutting. It can be summed up as: future people will be smarter than us. On this view, it is irrational for us to sacrifice too much in combatting climate change or antibiotic resistance, because by the time such challenges really spiral out of control, humanity will have the technology to solve the problem far more efficiently than we can now. Our descendants will have such good carbon capture and storage, the thinking runs, that it makes no sense to compromise economic growth today in a bid to rein in climate change—let smarter humans in the future deal with it. Similarly, rather than ration our use of antibiotics, why not put our faith in an ingenious scientific workaround in the decades to come?
But while it is true that technology is advancing at breakneck speed, that doesn’t mean we can predict with any accuracy what new tools we’ll actually have access to in decades’, let alone centuries’, time. Sci-fi films from the 20th century predicted mass hovercraft use—something that never materialised—but they did not anticipate TikTok. Given that the stakes are so high, it would be foolish to gamble everything on theoretical breakthroughs by future humans.
And that is the point one keeps coming back to. Why risk it? We are not entitled to put our own convenience before our distant descendants if there is even the remotest chance that doing so will destroy them. This sounds obvious, but MacAskill wants his reader to reckon with the practical elements of a subject—the distant future—that usually feels abstract to us.
We don’t know how humans will live in hundreds of millions of years, but it’s time to weigh our obligation to these remote beings. They can’t vote, they can’t run for public office, but they’ll inherit the consequences of the choices we make now. They deserve better—whoever they are.