The emerging school of patient longtermism

Written by Ben Todd and crossposted from the 80,000 Hours blog.

One of the parts of effective altruism I’ve found most intellectually interesting recently is ‘patient longtermism’.

This is a school of thinking that takes longtermism seriously, but combines that with the idea that we’re not facing an unusually urgent threat to the future, or another urgent opportunity to have a long-term impact. (We may still be facing threats to the future, but the idea is that they’re not more pressing today than the threats we’ll face down the line.)

Broadly, patient longtermists argue that instead of focusing on reducing specific existential risks or working on AI alignment and so on today, we should expect that the crucial moment for longtermists to act lies in the future, and our main task today should be to prepare for that time.

It’s not a new idea: Benjamin Franklin was arguably a patient longtermist, and Robin Hanson was writing about it by 2011. But there has been some interesting recent research.

Three of the most prominent arguments relevant to patient longtermism so far have been made by three researchers in Oxford, who have now all been featured on our podcast (though these guests don’t all necessarily endorse patient longtermism overall):

  1. The argument that we’re not living at the most influential time ever (aka the rejection of the ‘hinge of history hypothesis’) by Will MacAskill, written here and discussed on our podcast.

  2. The argument that we should focus on saving and growing our resources to spend in the future rather than acting now, which Phil Trammell has written up in a much more developed and quantitative way than previous efforts; his analysis comes down more on the side of patience. You can see the paper or hear our podcast with him.

  3. Arguments pushing back against the Bostrom-Yudkowsky view of AI by Ben Garfinkel. You can see a collection of Ben’s writings here and our interview with him. The Bostrom-Yudkowsky view is the most prominent argument that AI is not only a top priority, but that it is urgent to address in the next few decades. That makes it, in practice, a common ‘urgent longtermist’ argument. (Though Ben still thinks we should expand the field of AI safety.)

Taking a patient longtermist view would imply that the most pressing career and donation opportunities involve the following:

  • Global priorities research: identifying future issues and improving our effectiveness at dealing with them.

  • Building a long-lasting and steadily growing movement that will tackle these issues in the future. This could be the effective altruism movement, but people might also look to build movements around other key issues (e.g. a movement for the political representation of future generations).

  • Saving money that future longtermists can use, as Phil Trammell discusses; a toy version of this comparison is sketched after this list. There is now an attempt to set up a fund to make this easier.

  • Investing in any career capital that will allow you to achieve more of any of the above priorities over the course of your career.
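To make the saving option concrete, here is a minimal sketch of the ‘give now vs. invest and give later’ comparison. It is not Trammell’s model: it simply assumes impact is proportional to real dollars spent, that invested funds compound at a fixed real rate, and that impact per dollar declines at a fixed rate as the best opportunities get used up. All of the numbers are illustrative assumptions.

```python
# Toy "give now vs. give later" comparison. All parameters are
# illustrative assumptions, not figures from Trammell's paper.

def future_value(amount: float, real_return: float, years: int) -> float:
    """Value of `amount` compounding at `real_return` per year for `years` years."""
    return amount * (1 + real_return) ** years

donation = 10_000    # dollars available today (hypothetical)
real_return = 0.05   # assumed 5% annual real return on investments
decay = 0.02         # assumed 2% annual decline in impact per dollar
years = 50           # assumed waiting period

give_now = donation  # impact units if spent immediately
# If we wait, the funds compound, but each future dollar is assumed
# to buy less impact because the best opportunities get taken.
give_later = future_value(donation, real_return, years) * (1 - decay) ** years

print(f"Give now:        {give_now:>10,.0f} impact units")
print(f"Give in {years} years: {give_later:>10,.0f} impact units")
# Under these assumptions, waiting wins roughly whenever the real
# return exceeds the rate at which impact per dollar declines.
```

On these numbers waiting comes out well ahead, but swapping the rates (say, a 2% return against a 5% decline in impact per dollar) reverses the conclusion, which is why the patient vs. urgent question turns so heavily on empirical estimates of those two rates.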

The three researchers I list above are still unsure how seriously to take patient longtermism overall, and everyone who takes patient longtermism seriously still thinks we should spend some of our resources today on whichever object-level issues seem most pressing for longtermists. They usually converge on AI safety and other efforts to reduce existential risks or risk factors. The difference is that patient longtermists think we should spend much less today than urgent longtermists do.

Indeed, most people are not purely patient or purely urgent longtermists; rather, they put some credence in both schools of thinking, and where they lie is a matter of balance. Everyone agrees that the ideal longtermist portfolio would have some of each perspective.

All this said, I’m excited to see more research into the arguments for patient longtermism and what they might imply in practical terms.

If you’d like to see the alternative take (that the present day is an especially important time), you could read The Precipice: Existential Risk and the Future of Humanity by Toby Ord, who works at the University of Oxford alongside the three researchers mentioned above.

Further reading