The expected value of extinction risk reduction is positive

By Jan Brauner and Friederike Grosse-Holz

Work on this article has been funded by the Centre for Effective Altruism, but the article represents the personal views of the authors.

Assume it matters morally what happens in the millions of years to come. What should we do, then? Will efforts to reduce the risk of human extinction lead to a better or worse future?

Because the EA Forum does not yet support footnotes, the full article is posted at https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/

Abstract

If most expected value or disvalue lies in the billions of years to come, altruists should plausibly focus their efforts on improving the long-term future. It is not clear whether reducing the risk of human extinction would, in expectation, improve the long-term future, because a future with humanity may be better or worse than one without it.

From a consequentialist, welfarist view, most expected value (EV) or disvalue of the future comes from scenarios in which (post-)humanity colonizes space, because these scenarios contain the most expected beings. If we simply extrapolate the current welfare (part 1.1) of humans and of farmed and wild animals, it is unclear whether we should support spreading sentient beings to other planets.

From a more general perspective (part 1.2), future agents will likely care morally about the same things we find valuable or about any of the things we are neutral towards. It seems very unlikely that they would see value exactly where we see disvalue. If future agents are powerful enough to shape the world according to their preferences, this asymmetry implies that the EV of future agents colonizing space is positive from many welfarist perspectives.
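To see how this asymmetry pushes the expectation positive, here is a toy calculation; the probabilities and welfare values are purely hypothetical placeholders, not estimates from the article. The only point is that if futures with value-aligned or value-orthogonal agents are far more likely than futures with agents whose values exactly oppose ours, the expected value comes out positive.

```python
# Toy illustration of the asymmetry argument (hypothetical numbers, not estimates).
# Future agents' values relative to ours can be: aligned (they create what we value),
# orthogonal (they create things we are neutral towards), or anti-aligned
# (they create what we disvalue).

scenarios = {
    # name: (probability, welfare from our perspective, arbitrary units)
    "aligned":      (0.50, +1.0),
    "orthogonal":   (0.45,  0.0),
    "anti_aligned": (0.05, -1.0),
}

expected_value = sum(p * v for p, v in scenarios.values())
print(f"Expected value: {expected_value:+.2f}")  # prints +0.45 with these placeholders
```

With these placeholder numbers the result is +0.45; it turns negative only if anti-aligned futures are assigned a much higher probability or a much larger magnitude of disvalue.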

If we can defer the decision about whether to colonize space to future agents with more moral and empirical insight, doing so creates option value (part 1.3). However, most expected future disvalue plausibly comes from futures controlled by indifferent or malicious agents. Such “bad” agents will make worse decisions than we currently could. Thus, the option value in reducing the risk of human extinction is small.

The universe may not stay empty, even if humanity goes extinct (part 2.1). A non-human animal civilization, extraterrestrials, or uncontrolled artificial intelligence that was created by humanity might colonize space. These scenarios may be worse than (post-)human space colonization in expectation. Additionally, with more moral or empirical insight, we might realize that the universe is already filled with beings or things we care about (part 2.2). If the universe is already filled with disvalue that future agents could alleviate, this gives further reason to reduce extinction risk.

In practice, many efforts to reduce the risk of human extinction also have other effects of long-term significance. Such efforts might often reduce the risk of global catastrophes (part 3.1) from which humanity would recover, but which might set technological and social progress on a worse track than it is on now. Furthermore, such efforts often promote global coordination, peace, and stability (part 3.2), which are crucial for the safe development of pivotal technologies and for avoiding negative trajectory changes in general.

Aggregating these considerations, efforts to reduce extinction risk seem positive in expectation from most consequentialist views, ranging from neutral on some views to extremely positive on others. As efforts to reduce extinction risk also seem highly leveraged and time-sensitive, they should probably hold a prominent place in the long-termist EA portfolio.