The person-affecting value of existential risk reduction

Introduction

The standard motivation for the far future cause area in general, and existential risk reduction in particular, is to point to the vast future that is possible provided we do not go extinct (see Astronomical Waste). One crucial assumption made is a ‘total’ or ‘no-difference’ view of population ethics: in sketch, it is just as good to bring a person into existence with a happy life for 50 years as it is to add fifty years of happy life to someone who already exists. Thus the 10^lots of potential people give profound moral weight to the cause of x-risk reduction.

Population ethics is infamously recondite, and so disagreement with this assumption is commonplace; many find at least some form of person-affecting/asymmetrical view plausible: that the value of ‘making happy people’ is either zero, or at least much lower than the value of making people happy. Such a view would remove a lot of the upside of x-risk reduction, as most of its value (by the lights of the total view) lies in ensuring a great host of happy potential people exist.

Yet even if we discount the (forgive me) ‘person-effecting’ benefit, extinction would still entail vast person-affecting harm. There are 7.6 billion people alive today, and 7.6 billion premature deaths would be deemed a considerable harm by most. Even fairly small (albeit non-Pascalian) reductions in the likelihood of extinction could prove highly cost-effective.

To my knowledge, no one has ‘crunched the numbers’ on the expected value of x-risk reduction by the lights of person-affecting views. So I’ve thrown together a guesstimate as a first-pass estimate.

An es­ti­mate

The (forward) model goes like this:

  1. There are currently 7.6 billion people alive on Earth. The worldwide mean age is 38, and worldwide life expectancy is 70.5 years.

  2. Thus, very naively, if ‘everyone died tomorrow’, the average number of life years lost per person is 32.5, and the total loss is 247 billion life years.

  3. Assume the extinction risk is 1% over this century, uniform by year (i.e. the risk this year is 0.0001, ditto the next, and so on).

  4. Also assume the tractability of x-risk reduction is something like this (borrowing from Millett and Snyder-Beattie): ‘There’s a project X that is expected to cost 1 billion dollars each year, and would reduce the risk (proportionately) by 1%’ (i.e. if we spent a billion each year this century, x-risk over this century declines from 1% to 0.99%).

  5. This gives a risk reduction per year of around 1.3 × 10^-6, and so an expected value of around 330,000 years of life saved.

Given all these things, the model spits out a ‘cost per life year’ of $1500-$26000 (mean $9200).
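The forward model above can be sketched in code with point estimates. This is my own restatement, not the guesstimate itself: fixing each input at its central value reproduces the logic of steps 1-5, but not the quoted range, which comes from sampling the inputs from distributions.

```python
# Point-estimate sketch of the forward model above; the actual model
# samples these inputs from distributions, so its figures differ.
population = 7.6e9                # people alive today
life_expectancy = 70.5            # worldwide, years
mean_age = 38                     # worldwide, years
century_risk = 0.01               # assumed extinction risk this century
annual_risk = century_risk / 100  # uniform by year -> 0.0001
annual_cost = 1e9                 # dollars per year on 'project X'
relative_reduction = 0.01         # project X cuts the risk by 1%

years_lost_per_person = life_expectancy - mean_age       # 32.5
total_life_years = population * years_lost_per_person    # 247 billion
risk_reduction = annual_risk * relative_reduction        # 1e-6 per year
expected_life_years = total_life_years * risk_reduction  # 247,000
cost_per_life_year = annual_cost / expected_life_years

print(f"{expected_life_years:,.0f} life years saved in expectation")
print(f"${cost_per_life_year:,.0f} per life year")
```

On these point estimates the cost per life year comes out near $4,000; the wider range and higher mean reported above arise from the distributional assumptions discussed below.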

Caveats and elaborations

The limitations of this are nigh-innumerable, but I list a few of the most important below, in approximately ascending order.

Zeroth: the model has a wide range of uncertainty, and reasonable sensitivity to distributional assumptions: one can modulate the mean estimate and range by a factor of 2 or so by whether the distributions used are beta or lognormal, or by tweaking their variance.
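To illustrate this distributional sensitivity, here is a minimal Monte Carlo sketch. The lognormal distributions and sigma below are my own made-up placeholders centred near the ‘1%’ inputs, not the model’s actual distributions:

```python
import math
import random

# Illustrative only: lognormal uncertainty (assumed sigma = 0.5) on the
# two '1%' inputs; the actual guesstimate's distributions differ.
rng = random.Random(0)

def sample_cost_per_life_year():
    century_risk = rng.lognormvariate(math.log(0.01), 0.5)
    relative_reduction = rng.lognormvariate(math.log(0.01), 0.5)
    life_years_saved = (7.6e9 * 32.5) * (century_risk / 100) * relative_reduction
    return 1e9 / life_years_saved

samples = sorted(sample_cost_per_life_year() for _ in range(10_000))
median = samples[len(samples) // 2]
p5, p95 = samples[500], samples[9_500]
print(f"median ${median:,.0f}, 90% interval ${p5:,.0f}-${p95:,.0f}")
```

Even this toy version shows how the choice of distribution family and variance stretches the interval around the point estimate by large factors.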

First: adjustment to give ‘cost per DALY/QALY’ would be somewhat upward, although not dramatically (a factor of 2 would imply everyone who continues to live does so with a disability weight of 0.5, in the same ballpark as those used for major depression or blindness).

Second: trends may have a large impact, although their importance is modulated by which person-affecting view is assumed. I deliberately set up the estimate to work in a ‘one shot’ single year case (i.e. the figure applies to a ‘spend 1B to reduce extinction risk in 2018 from 0.0001 to 0.000099’ scenario).

By the lights of a person-affecting view which considers only people who exist now, making the same investment 10 years from now (i.e. spend 1B to reduce extinction risk in 2028 from 0.0001 to 0.000099) is less attractive, as some of these people would have died, and the new people who have replaced them have little moral relevance. These views thus imply a fairly short time horizon, and are particularly sensitive to x-risk in the near future. Given the ‘1%’ per century is probably not uniform by year, and plausibly lower now but higher later, this would imply a further penalty to cost-effectiveness.
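As a toy illustration of this delay penalty, consider a crude assumption I am adding here (not part of the model above): the remaining lifespans of today’s population are uniformly distributed between 0 and 65 years, preserving the 32.5-year mean.

```python
# Toy model of how much of today's person-affecting stake remains if the
# risk reduction happens `delay` years from now, for a view that only
# counts people alive today. Assumes (crudely) remaining lifespans are
# uniform on [0, 65] years, matching the 32.5-year mean used above.
def life_years_at_stake(delay, max_remaining=65.0, population=7.6e9):
    if delay >= max_remaining:
        return 0.0
    # survivors: the fraction with more than `delay` years left, each
    # with (max_remaining - delay) / 2 years remaining on average
    return population * (max_remaining - delay) ** 2 / (2 * max_remaining)

for t in (0, 10, 30, 50):
    frac = life_years_at_stake(t) / life_years_at_stake(0)
    print(f"delay {t:2d}y -> {frac:.0%} of the life-years at stake")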

Other person-affecting views consider people who will necessarily exist (however cashed out) rather than whether they happen to exist now (planting a bomb with a timer of 1000 years still accrues person-affecting harm). In an ‘extinction in 100 years’ scenario, this view would still count the harm to everyone alive then who dies, although it would still discount the foregone benefit of people who ‘could have been’ subsequently in the moral calculus.

Thus trends in the factual basis become more salient. One example is the ongoing demographic transition: a consequently older population gives smaller values of life-years saved if protected from extinction in the future. This would probably make the expected cost-effectiveness somewhat (but not dramatically) worse.

A lot turns on the estimate for marginal ‘x-risk reduction’. I think the numbers offered for the base rate, and for how much it can be reduced, lean on the conservative side of the consensus of far-future EAs. Confidence in an (implied) scale or tractability an order of magnitude greater would impose a commensurate improvement on the cost-effectiveness estimate. Yet in such circumstances the bulk of disagreement is explained by empirical disagreement rather than a different take on the population ethics.

Finally, this only accounts for something like the (welfare) ‘face value’ of existential risk reduction. There would be some further benefits by the lights of the person-affecting view itself, or of ethical views which those holding a person-affecting view are likely sympathetic to: extinction might impose other harms beyond years of life lost; there could be person-affecting benefits if some of those who survive can enjoy extremely long and happy lives; and there could be non-welfare goods on an objective list which rely on non-extinction (among others). On the other side, those with non-deprivationist accounts of the badness of death may still discount the proposed benefits.

Conclusion

Notwithstanding these challenges, I think the model, and the result that the ‘face value’ cost-effectiveness of x-risk reduction is still pretty good, is instructive.

First, there is a common pattern of thought along the lines of, “X-risk reduction only matters if the total view is true, and if one holds a different view one should basically discount it”. Although rough, this cost-effectiveness guesstimate suggests this is mistaken. It seems unlikely that x-risk reduction is the best buy by the lights of a person-affecting view (we should be suspicious if it were), given ~$10,000 per life year compares unfavourably to the best global health interventions. Yet it is still a good buy: it compares favourably to the marginal cost-effectiveness of rich-country healthcare spending, for example.

Second, although it seems unlikely that x-risk reduction would be the best buy by the lights of a person-affecting view, this would not be wildly outlandish. Those with a person-affecting view who think x-risk is particularly likely, or that the cause area has easier wins available than implied in the model, might find it among the best opportunities to make a difference. It may therefore supply reason for those with such views to investigate the factual matters in greater depth, rather than ruling it out based on their moral commitments.

Finally, most should be morally uncertain in matters as recondite as population ethics. Unfortunately, how to address moral uncertainty is similarly recondite. But if x-risk reduction is ‘good but not the best’ rather than ‘worthless’ by the lights of person-affecting views, this likely implies x-risk reduction looks more valuable whatever the size of the ‘person-affecting party’ in one’s moral parliament.