The emerging school of patient longtermism

Written by Ben Todd and crossposted from the 80,000 Hours blog.

One of the parts of effective altruism I’ve found most intellectually interesting recently is ‘patient longtermism’.

This is a school of thinking that takes longtermism seriously, but combines that with the idea that we’re not facing an unusually urgent threat to the future, or another urgent opportunity to have a long-term impact. (We may still be facing threats to the future, but the idea is that they’re not more pressing today than the threats we’ll face down the line.)

Broadly, patient longtermists argue that instead of focusing on reducing specific existential risks or working on AI alignment and so on today, we should expect that the crucial moment for longtermists to act lies in the future, and our main task today should be to prepare for that time.

It’s not a new idea — Benjamin Franklin was arguably a patient longtermist, and Robin Hanson was writing about it by 2011 — but there has been some interesting recent research.

Three of the most prominent arguments relevant to patient longtermism so far have been made by three researchers in Oxford, who have now all been featured on our podcast (though these guests don’t all necessarily endorse patient longtermism overall):

  1. The argument that we’re not living at the most influential time ever (i.e. a rejection of the ‘hinge of history’ hypothesis) by Will MacAskill, written here and discussed on our podcast.

  2. The argument that we should focus on saving and growing our resources to spend in the future rather than acting now. Phil Trammell has written this up in a much more developed and quantitative way than previous efforts, and his analysis comes down more on the side of patience. You can see the paper or hear our podcast with him.

  3. Arguments pushing back against the Bostrom-Yudkowsky view of AI by Ben Garfinkel. You can see a collection of Ben’s writings here and our interview with him. The Bostrom-Yudkowsky view is the most prominent argument that AI is not only a top priority, but that it is urgent to address in the next few decades. That makes it, in practice, a common ‘urgent longtermist’ argument. (Though Ben still thinks we should expand the field of AI safety.)

Taking a patient longtermist view would imply that the most pressing career and donation opportunities involve the following:

  • Global priorities research—identifying future issues and improving our effectiveness at dealing with them.

  • Building a long-lasting and steadily growing movement that will tackle these issues in the future. This could be the effective altruism movement, but people might also look to build movements around other key issues (e.g. a movement for the political representation of future generations).

  • Saving money that future longtermists can use, as Phil Trammell discusses (see the illustrative sketch after this list). There is now an attempt to set up a fund to make this easier.

  • Investing in career capital that will allow you to achieve more of any of the above over the course of your career.
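
To make the ‘save and grow’ idea concrete, here is a minimal sketch of the compounding arithmetic. The 5% real return and 100-year horizon are illustrative assumptions for this example, not figures taken from Trammell’s paper:

```python
# A minimal sketch of the compounding intuition behind patient philanthropy.
# The 5% real return and 100-year horizon are illustrative assumptions,
# not parameters from Trammell's paper.
principal = 10_000      # dollars put into a long-term fund today
real_return = 0.05      # assumed annual real rate of return
years = 100             # assumed waiting period before spending

future_value = principal * (1 + real_return) ** years
print(f"${future_value:,.0f}")  # ≈ $1,315,013
```

On this simplified framing, spending now beats waiting only if today’s best opportunities are at least that growth factor (roughly 130x here) more cost-effective than the opportunities that will be available later, which is close to the empirical question the patient and urgent camps disagree about.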

The three researchers I list above are still unsure how seriously to take patient longtermism overall, and even those who take it seriously still think we should spend some of our resources today on whichever object-level issues seem most pressing for longtermists. These choices usually converge on AI safety and other efforts to reduce existential risks or risk factors. The difference is that patient longtermists think we should spend much less on them today than urgent longtermists do.

Indeed, most people are not purely patient or purely urgent longtermists; rather, they put some credence in both schools of thinking, and where they land is a matter of balance. Everyone agrees that the ideal longtermist portfolio would include work from both perspectives.

All this said, I’m excited to see more research into the arguments for patient longtermism and what they might imply in practical terms.

If you’d like to see the alternative take — that the present day is an especially important time — you could read The Precipice: Existential Risk and the Future of Humanity by Toby Ord, who works at the University of Oxford alongside the three researchers mentioned above.

Further reading