Dylan Matthews: The case for caring about the year 3000

The essay below, written by Dylan Matthews, appeared in the latest edition of the Future Perfect newsletter. To my knowledge, it hasn’t been published online, so I thought I should post it here.

The end of the year is a good time to think about the future, so let’s take a bit of time this Tuesday to think about our duties to people millions of years from now.

For a few years now, an intellectual trend — I’d call it an ideology, but I doubt its advocates would appreciate the label — called “longtermism” has been spreading.

The idea is that if part of what makes actions good or bad is their consequences, then the goodness or badness of an action is going to be determined primarily by the consequences it has in the far, far, far future — hundreds if not thousands of years from now.

This thesis matters a lot for ethical theories like utilitarianism, where all that matters is maximizing good consequences (happiness, the satisfaction of individuals’ preferences, etc.), but you don’t have to be Jeremy Bentham to think that the consequences of actions carry some moral weight, even if they’re not the only thing that matters.

And once you accept that the consequences of actions are morally important, then the further conclusion that the consequences that matter most are the ones in the far, far, far future is not that hard to establish. Philosopher Nick Beckstead was one of the first to make this argument comprehensively, back in 2013, and now Oxford professors Hilary Greaves and Will MacAskill have a paper developing it further and making it, if anything, a stronger claim than Beckstead’s.

Greaves and MacAskill defend what they call “strong longtermism.” It is, indeed, an incredibly strong claim. “For the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects,” they write. “Short-run effects act as little more than tie-breakers.”

The argument goes like this. If humankind lasts until the end of the Earth as a habitable planet — or even if there’s just a one percent chance our species lasts that long — then, in expectation, at least one quadrillion people will live in the future. That’s 100,000 times the current population of the Earth. If humans figure out how to colonize other solar systems, we could last even longer.
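For readers who want to check that multiplier, the back-of-the-envelope division is below (a rough sketch; the figure of roughly 8 billion people alive today is my round number, not the essay’s):

$$\frac{10^{15}\ \text{future people}}{8\times 10^{9}\ \text{people alive today}}\approx 1.25\times 10^{5}\approx 100{,}000$$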

That’s premise one. Premise two is that all humans should count equally: those in the future, those in the past, those living right now.

Put those together and you get the conclusion that the consequences that will matter most will affect people in the distant future, for the simple reason that that’s where the vast majority of people who will ever exist are.

This is one of those arguments that is so tidy that you can’t help but feel that it’s missing something obvious. The immediate objection that comes to mind is what Greaves and MacAskill call the “intractability” claim: that even if influencing the far future is the most important thing, we have no idea how to do that reliably.

Two decades ago, the philosopher James Lenman famously argued that we’re “clueless” about the millennia-on effects of our actions. Perhaps a 5th-century AD bandit in the German Black Forest did a bad thing by not burning down a pregnant woman’s hut — if that woman’s distant descendant turned out to be Adolf Hitler. Many of our actions have those kinds of profound long-term consequences that we could never reliably predict.

Greaves has written about this specific problem elsewhere, but for this paper, Greaves and MacAskill answer the intractability objection by listing some examples of ways they think people today could, in fact, act to improve welfare centuries or millennia in the future. They could help advance economic growth and/or technological progress, or mitigate risks of premature extinction from nuclear war or pandemic, or reduce the risk of non-extinction calamities like climate change.

In short: You could do the sorts of things many effective altruists who identify as longtermists do today.

Greaves and MacAskill’s argument is not ironclad; there are serious objections from “non-consequentialist” moral theories, which reject the idea that the sum of consequences millennia from now matters more for morality than other factors, like duties to our families or political communities (Greaves and MacAskill do dig into these in their article).

But it’s a challenging paper for people like me who are mostly “short-termist.” I give to global poverty charities, not longtermist charities focused on preventing extinction or promoting growth, because I feel much more confident that we know how to, say, prevent malaria deaths than how to keep humanity going for another 1,000 years. Greaves and MacAskill acknowledge that uncertainty. But they make a strong argument that the latter problem is important enough to warrant more of my time and money.