The ‘far future’ is not just the far future

It’s a widely held belief in the existential risk reduction community that we are likely to see a great technological transformation in the next 50 years [1]. A technological transformation would cause flourishing, existential catastrophe, or some other form of large change for humanity. The next 50 years will matter directly for most currently living people. Existential risk reduction and handling the technological transformation are therefore not just questions of the ‘far future’ or the ‘long term’; they are also ‘near-term’ concerns.

The far future, the long term, and astronomical waste

Often in EA, the importance of the ‘far future’ is used to motivate existential risk reduction and other long-term oriented work such as AI safety. ‘Long term’ by itself is used even more commonly, and while it is more ambiguous, it often carries the same meaning as ‘far future’. Here are some examples: Influencing the Far Future, The Importance of the Far Future, Assumptions About the Far Future and Cause Priority, The Long Term Future, Longtermism.

‘The importance of the far future’ argument builds on the postulate that there are many possible good lives in the future, many more than currently exist. This long-term future can stretch hundreds, thousands, or billions of years ahead, or even further. Nick Bostrom’s Astronomical Waste makes a compelling presentation of the argument:

Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
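As a quick sanity check of the unit conversion in this quote (my arithmetic; the figures are Bostrom’s): a century contains roughly 3.16 × 10^9 seconds, so

$$\frac{10^{38}\ \text{lives per century}}{100 \times 365.25 \times 24 \times 3600\ \text{s per century}} \approx \frac{10^{38}}{3.16 \times 10^{9}} \approx 3 \times 10^{28} \sim 10^{29}\ \text{lives per second.}$$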

The existential risk reduction position is not predicated on astronomical waste

However, while astronomical waste is a very important argument, strong statements of its type are not necessary to take the existential risk position. The vast majority of work in existential risk reduction is based on the plausibility that a technological transformation, technologically driven events of immense impact on humanity, will occur within the next 50 years. Currently living people, and our children and grandchildren, would be drastically affected by such a technological transformation [2].

Indeed, Bostrom argues in the astronomical waste essay itself that even on a ‘person-affecting utilitarian’ view, reducing existential risk is a priority:

Now, if these assumptions are made, what follows about how a person-affecting utilitarian should act? Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also – and given the assumptions this is an even weightier consideration – because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization.

The ‘far future’ is the ‘future’

The arguments about the value of future lives and the possible astronomical value of the future of humanity are very important. But our work in existential risk reduction is meant to help future, near-future, and currently existing people. Distinguishing between these mostly doesn’t seem to be decision-relevant if a technological transformation is likely to happen within the next 50 years. And speaking of existential risk reduction and flourishing in terms of the ‘far future’ too often risks making people focus too much on the general difficulty of imagining how they could affect that far future.

I propose we avoid calling what we are doing ‘far future’ work (or other similar terms), except in cases where we think it will almost exclusively affect events beyond the next 50 years. So what should we say instead? The fates of currently living people, near-term future people, and long-term future people are all questions of the ‘future’. Perhaps we should just call it ‘future’-oriented work.


  1. When does the existential risk reduction community think we may see a technological transformation? In the Walsh 2017 survey, the median estimate of AI experts was a 50% chance of human-level AI by 2061. My assessment is that people in the existential risk reduction community hold views similar to the AI experts’; I’m not aware of any direct surveys of the community. People in AI safety appear to generally have shorter timelines than the AI experts polled. Paul Christiano: “human labor being obsolete… within 20 years is something within the ballpark of 35% … I think compared to the community of people who think about this a lot, I’m more somewhere in, I’m still on the middle of the distribution”. ↩︎

  2. Who will be alive in 20 or 50 years? Likely you, and likely your children and grandchildren. The median age in the world is currently 29.6 years. World life expectancy at birth is 72.2 years, US life expectancy is 78 years, and Canada’s is 82 years. Even without further lifespan improvements, the average currently living person will still be alive 40 years from now (a rough version of this arithmetic is sketched below). Improving medicine globally will push countries closer to the level of Canada in the next few decades; standard medicine doesn’t seem likely to lead to a great difference beyond that. However, direct aging prevention or reversal interventions such as senolytics could cause a phase change in life expectancy by adding decades, and interventions of this form may hit the market in the next few decades. ↩︎
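A rough sketch of the ‘40 years’ claim in footnote 2 (my back-of-the-envelope arithmetic, and conservative, since life expectancy conditional on having already reached the median age exceeds life expectancy at birth):

$$\underbrace{72.2\ \text{years}}_{\text{world life expectancy at birth}} - \underbrace{29.6\ \text{years}}_{\text{world median age}} \approx 42.6\ \text{years remaining} > 40\ \text{years.}$$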