
The New Moral Mathematics

By Kieran Setiya at the Boston Review


“Space is big,” wrote Douglas Adams in The Hitchhiker’s Guide to the Galaxy (1979). “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”

Time is big, too—even if we just think on the timescale of a species. We’ve been around for approximately 300,000 years. There are now about 8 billion of us, roughly 7 percent of all humans who have ever lived. You may think that’s a lot, but it’s just peanuts to the future. If we survive for another million years—the longevity of a typical mammalian species—at even a tenth of our current population, there will be 8 trillion more of us. We’ll be outnumbered by future people on the scale of a thousand to one.
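For readers who like to see the arithmetic spelled out, here is a rough check of that figure. The million-year horizon and the one-tenth population come from the passage above; the hundred-year turnover per cohort of living people is an assumption added for illustration, not a figure from the essay.

```python
# Rough check of the "8 trillion" figure quoted above.
# The million-year horizon and one-tenth population are from the text;
# the ~100-year turnover per cohort of living people is an assumption.

current_population = 8_000_000_000
future_population = current_population / 10   # a tenth of today's numbers
years = 1_000_000
turnover_years = 100                          # assumed length of one cohort of living people

future_people = future_population * (years / turnover_years)

print(f"{future_people:.1e}")                 # 8.0e+12, i.e. 8 trillion
print(future_people / current_population)     # 1000.0 -> outnumbered a thousand to one
```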

What we do now affects those future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.

This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.

Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.
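The comparison turns on a simple expected-value calculation. Here is a sketch using only the figures quoted above; treating the choice as a bare expected-value sum is an illustrative gloss on the utilitarian reasoning, not MacAskill’s own model.

```python
# The expected-value comparison behind the "moral mathematics."
# Figures are the ones quoted in the essay; framing the choice as a bare
# expected-value sum is an illustrative gloss, not MacAskill's own model.

lives_saved_today = 1_000_000                 # option A: save a million lives now
future_lives_at_stake = 8_000_000_000_000     # "at least 8 trillion" future people
risk_reduction = 0.0001 / 100                 # 0.0001 percent = one in a million

expected_future_lives = risk_reduction * future_lives_at_stake

print(expected_future_lives)                      # ~8 million expected lives
print(expected_future_lives > lives_saved_today)  # True: the calculus favors option B
```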

Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.

But even if you think we are allowed to prioritize ourselves and those we love, and not allowed to violate the rights of some in order to help others, shouldn’t you still care about the fate of strangers, even those who do not yet exist? The moral mathematics of aggregate well-being may not be the whole of ethics, but isn’t it a vital part? It belongs to the domain of morality we call “altruism” or “charity.” When we ask what we should do to benefit others, we can’t ignore the disquieting fact that the others who occupy the future may vastly outnumber those who occupy the present, and that their very existence depends on us.

From this point of view, it’s an urgent question how what we do today will affect the further future—urgent especially when it comes to what Nick Bostrom, the philosopher who directs the Future of Humanity Institute, calls the “existential risk” of human extinction. This is the question MacAskill takes up in his new book, What We Owe the Future, a densely researched but surprisingly light read that ranges from omnicidal pandemics to our new AI overlords without ever becoming bleak.

Like Bostrom, MacAskill has a big audience—unusually big for an academic philosopher. Bill Gates has called him “a data nerd after my own heart.” In 2009 he and Ord helped found Giving What We Can, an organization that encourages people to pledge at least 10 percent of their income to charitable causes. With our tithe, MacAskill holds, we should be utilitarian, aggregating benefits, subtracting harms, and weighing odds: our 10 percent should be directed to the most effective charities, gauged by ruthless empirical measures. Thus the movement known as Effective Altruism (EA), in which MacAskill is a leading figure. (Peter Singer is another.) By one estimate, about $46 billion is now committed to EA. The movement counts among its acolytes such prominent billionaires as Peter Thiel, who gave a keynote address at the 2013 EA Summit, and cryptocurrency exchange pioneer Sam Bankman-Fried, who became a convert as an undergraduate at MIT.

Effective Altruists need not be utilitarians about morality (though some are). Theirs is a bounded altruism, one that respects the rights of others. But they are inveterate quantifiers, and when they do the altruistic math, they are led to longtermism and to the quietly radical arguments of MacAskill’s book. “Future people count,” MacAskill writes:

There could be a lot of them. We can make their lives go better. This is the case for longtermism in a nutshell. The premises are simple, and I don’t think they’re particularly controversial. Yet taking them seriously amounts to a moral revolution.

The premises are indeed simple. Most people concerned with the effects of climate change would accept them. Yet MacAskill pursues these premises to unexpected ends. If the premises are true, he argues, we should do what we can to ensure that “future civilization will be big.”

MacAskill spends a lot of time and effort asking how to benefit future people. What I’ll come back to is the moral question whether they matter in the way he thinks they do, and why. As it turns out, MacAskill’s moral revolution rests on contentious, counterintuitive claims in “population ethics.”

Read the rest at the Boston Review
