A proposed adjustment to the astronomical waste argument

An existential risk is a risk “that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom, 2013). Nick Bostrom has argued that

“[T]he loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action:

Maxipok: Maximize the probability of an “OK outcome,” where an OK outcome is any outcome that avoids existential catastrophe.”

There are a number of people in the effective altruism community who accept this view and cite Bostrom’s argument as their primary justification. Many of these people also believe that the best ways of minimizing existential risk involve making plans to prevent specific existential catastrophes from occurring, and believe that the best giving opportunities must be with charities that primarily focus on reducing existential risk. They also appeal to Bostrom’s argument to support their views. (Edited to add: Note that Bostrom himself sees maxipok as neutral on the question of whether the best methods of reducing existential risk are very broad and general, or highly targeted and specific.) For one example of this, see Luke Muehlhauser’s comment:

“Many humans living today value both current and future people enough that if existential catastrophe is plausible this century, then upon reflection (e.g. after counteracting their unconscious, default scope insensitivity) they would conclude that reducing the risk of existential catastrophe is the most valuable thing they can do — whether through direct work or by donating to support direct work.”

I now think these views require some significant adjustments and qualifications, and given those adjustments and qualifications, their practical implications become very uncertain. I still believe that what matters most about what we do is how our actions affect humanity’s long-term future potential, and I still believe that targeted existential risk reduction and research is a promising cause, but it now seems unclear whether targeted existential risk reduction is the best area to look for ways of making the distant future go as well as possible. It may or may not be, and which it is probably depends on many messy details about specific opportunities, as well as on general methodological considerations that are, at this point, highly uncertain. Various considerations played a role in my reasoning about this, and I intend to discuss more of them in greater detail in the future. I’ll talk about just a couple of these considerations in this post.

In this post, I argue that:

  1. Though Bostrom’s argument supports the conclusion that maximizing humanity’s long-term potential is extremely important, it does not provide strong evidence that reducing existential risk is the best way of maximizing humanity’s future potential. There is a much broader class of actions which may affect humanity’s long-term potential, and Bostrom’s argument does not uniquely favor existential risk reduction over other members of this class.

  2. A version of Bostrom’s argument better supports a more general view: what matters most is that we make path-dependent aspects of the far future go as well as possible. There are important questions about whether we should accept this more general view and what its practical significance is, but it seems to be a strict improvement on the view that minimizing existential risk is what matters most.

  3. The above points favor very broad, general, and indirect approaches to shaping the far future for the better, rather than thinking about very specific risks and responses, though there are many relevant considerations and the issue is far from settled.

I think some prominent advocates of existential risk reduction already agree with these general points, and believe that other arguments, or other arguments together with Bostrom’s argument, establish that direct existential risk reduction is what matters most. This post is most relevant to people who currently think Bostrom’s argument may settle the issues discussed above.

Path-dependence and trajectory changes

In thinking about how we might affect the far future, I’ve found it useful to use the concept of the world’s development trajectory, or just trajectory for short. The world’s development trajectory, as I use the term, is a rough summary of how the future will unfold over time. The summary includes various facts about the world that matter from a macro perspective, such as how rich people are, what technologies are available, how happy people are, how developed our science and culture are along various dimensions, and how well things are going all-things-considered at different points in time. It may help to think of the trajectory as a collection of graphs, where each graph in the collection has time on the x-axis and one of these other variables on the y-axis.
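As a minimal illustration of this way of picturing a trajectory (my own sketch, not from the post; all variable names and numbers are purely hypothetical), one can represent a trajectory as a set of named time series, one per macro-level variable:

```python
# A development trajectory as a collection of "graphs": time on the x-axis,
# one macro-level variable per series. Names and values are purely illustrative.
from typing import Dict, List, Tuple

Trajectory = Dict[str, List[Tuple[int, float]]]  # variable name -> [(year, value), ...]

example_trajectory: Trajectory = {
    "wealth_per_capita":        [(2025, 1.0), (2100, 3.5), (2500, 40.0)],
    "average_wellbeing":        [(2025, 0.6), (2100, 0.7), (2500, 0.9)],
    "technological_capability": [(2025, 1.0), (2100, 8.0), (2500, 200.0)],
}

# A trajectory change is a persistent difference between two such collections of
# series; an existential catastrophe is the extreme case in which every series
# collapses or is permanently capped far below its potential.
```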

With that concept in place, consider three different types of benefits from doing good. First, doing something good might have proximate benefits—this is the name I give to the fairly short-run, fairly predictable benefits that we ordinarily think about when we cure some child’s blindness, save a life, or help an old lady cross the street. Second, there are benefits from speeding up development. In many cases, ripple effects from good ordinary actions speed up development. For example, saving a child’s life might cause his country’s economy to develop very slightly more quickly, or make certain technological or cultural innovations arrive more quickly. Third, our actions may slightly or significantly alter the world’s development trajectory. I call these shifts trajectory changes. If we ever prevent an existential catastrophe, that would be an extreme example of a trajectory change. There may also be smaller trajectory changes. For example, if some species of dolphins that we really loved were destroyed, that would be a much smaller trajectory change.

The concept of a trajectory change is closely related to the concept of path dependence in the social sciences, though when I talk about trajectory changes I am interested in effects that persist much longer than standard examples of path dependence. A classic example of path dependence is our use of QWERTY keyboards. Our keyboards could have been arranged in any number of other possible ways. A large part of the explanation of why we use QWERTY keyboards is that the arrangement happened to be convenient for making typewriters, that a lot of people learned to use these keyboards, and that there are advantages to having most people use the same kind of keyboard. In essence, there is path dependence whenever some aspect of the future could easily have been arranged in way X, but it is arranged in way Y due to something that happened in the past, and it would now be hard or impossible to switch to way X. Path dependence is especially interesting when way X would have been better than way Y. Some political scientists have argued that path dependence is very common in politics. For example, in an influential paper (with over 3,000 citations), Pierson (2000, p. 251) argues that:

Specific patterns of timing and sequence matter; a wide range of social outcomes may be possible; large consequences may result from relatively small or contingent events; particular courses of action, once introduced, can be almost impossible to reverse; and consequently, political development is punctuated by critical moments or junctures that shape the basic contours of social life.

The concept of a trajectory change is also closely related to the concept of a historical contingency. If Thomas Edison had not invented the light bulb, someone else would have done it later. In this sense, it is not historically contingent that we have light bulbs, and the most obvious benefits of Thomas Edison inventing the light bulb are proximate benefits and benefits from speeding up development. Something analogous is probably true of many other technological innovations, such as computers, candles, wheelbarrows, object-oriented programming, and the printing press. Some important examples of historical contingencies are the rise of Christianity, the creation of the US Constitution, and the writings of Karl Marx. Various aspects of Christian morality influence the world today in significant ways, but the fact that those aspects of morality, in exactly those ways, were part of a dominant world religion was historically contingent. Events like Jesus’s death and Paul writing his epistles are therefore examples of trajectory changes. Likewise, the US Constitution was the product of deliberation among a specific set of men; the document affects government policy today and will affect it for the foreseeable future, but it could easily have been a different document. And now that the document exists in its specific legal and historical context, it is challenging to change it, so its influence is somewhat self-reinforcing.

Some small trajectory changes could be suboptimal

Persistent trajectory changes that do not involve existential catastrophes could have great significance for shaping the far future. It is unlikely that the far future will inherit many of our institutions exactly as they are, but various aspects of the far future—including social norms, values, political systems, and perhaps even some technologies—may be path dependent on what happens now, and sometimes in suboptimal ways. In general, it is reasonable to assume that if there is some problem that might exist in the future and we can do something to fix it now, future people would also be able to solve that problem. But if values or social norms change, they might not agree that some things we think are problems really are problems. Or, if people make the wrong decisions now, certain standards or conventions may get entrenched, and the resulting problems may be too expensive to be worth fixing. For further categories of examples of path-dependent aspects of the far future, see these posts by Robin Hanson.

The astronomical waste argument and trajectory changes

Bostrom’s argument only works if reducing existential risk is the most effective way of maximizing humanity’s future potential. But there is no robust argument that trying to reduce existential risk is a more effective way of shaping the far future than trying to create other positive trajectory changes. Bostrom’s argument for the overwhelming importance of reducing existential risk can be summarized as follows:

  1. The expected size of humanity’s future influence is astronomically great.

  2. If the expected size of humanity’s future influence is astronomically great, then the expected value of the future is astronomically great.

  3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.

  4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.

  5. If what matters most is that we maximize humanity’s long-term potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, then what it is best to do is primarily determined by how our actions are expected to reduce existential risk.

Call that the “astronomical waste” argument.

It is unclear whether premise (5) is true because it is unclear whether trying to reduce existential risk is the most effective way of maximizing humanity’s future potential. For all we know, it could be more effective to try to create other positive trajectory changes. Clearly, it would be better to prevent extinction than to improve our social norms in a way that indirectly makes the future go one millionth better, but, in general, “X is a bigger problem than Y” is only a weak argument that “trying to address X is more important than trying to address Y.” To be strong, the argument must be supplemented by looking at many other considerations related to X and Y, such as how much effort is going into solving X and Y, how tractable X and Y are, how much X and Y could use additional resources, and whether there are subsets of X or Y that are especially strong in terms of these considerations.
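To see why the astronomical stakes alone do not settle premise (5), here is a toy expected-value comparison (a minimal sketch of my own, not from Bostrom’s paper; every number in it is an illustrative assumption). It compares an action that slightly reduces extinction risk with one that slightly improves the trajectory conditional on survival; both expected gains scale with the value of the future, so the comparison turns entirely on the messier empirical considerations listed above.

```python
# Toy comparison of two far-future interventions. All numbers are illustrative
# assumptions, not estimates.
V = 1e30        # assumed value of the long-term future if it goes maximally well
p_ext = 0.2     # assumed probability of existential catastrophe
q = 0.5         # assumed quality of the future, conditional on survival (0 to 1)

# Intervention A: a targeted effort that reduces extinction probability by delta_p.
delta_p = 1e-6
gain_A = delta_p * q * V              # expected value of a slightly higher chance of survival

# Intervention B: a broad effort that improves the surviving trajectory by fraction f.
f = 1e-6
gain_B = (1 - p_ext) * f * q * V      # expected value of a slightly better future, given survival

print(f"gain from risk reduction:    {gain_A:.3e}")
print(f"gain from trajectory change: {gain_B:.3e}")
# Both gains are proportional to V, so the astronomical size of V by itself cannot
# tell us which intervention is better; that depends on delta_p versus (1 - p_ext) * f,
# i.e., on tractability, neglectedness, and the other considerations in the text.
```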

Bostrom does have arguments that speeding up development and providing proximate benefits are not, in themselves, as important as reducing existential risk. And these arguments, I believe, have some plausibility. But since we don’t have an argument that reducing existential risk is better than trying to create other positive trajectory changes, and since an existential catastrophe is just one type of trajectory change, it seems more reasonable for defenders of the astronomical waste argument to focus on trajectory changes in general. It would be better to replace the last three steps of the above argument with:

4'. Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.

5'. If what matters most is that we maximize humanity’s long-term potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, then what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

6'. Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

This seems to be a strictly more plausible claim than the original one, though it is less focused.

In response to the arguments in this post, which I e-mailed him in advance, Bostrom wrote a reply (see the end of the post). The key comment, from my perspective, is:

“Many trajectory changes are already encompassed within the notion of an existential catastrophe. Becoming permanently locked into some radically suboptimal state is an xrisk. The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories—very good ones and very bad ones. To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices. One would then have to resort to a more complicated calculation. However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.”

I agree that a key question here is whether there is a very large range of plausible equilibria for advanced civilizations, or whether civilizations that manage to survive long enough naturally converge on something close to the best possible outcome. The more confidence one has in the second possibility, the more interesting existential risk is as a concept. The less confidence one has in the second possibility, the more interesting trajectory changes in general are. However, I would emphasize that unless we can be highly confident in the second possibility, it seems that we cannot, on the basis of the astronomical waste argument alone, be confident that reducing existential risk is more important than creating other positive trajectory changes. That question would turn on further considerations of the sort I described above.

Broad and narrow strategies for shaping the far future

Both the astronomical waste argument and the fixed-up version of that argument conclude that what matters most is how our actions affect the far future. I am very sympathetic to this viewpoint, abstractly considered, but I think its practical implications are highly uncertain. There is a spectrum of strategies for shaping the far future that ranges from the very targeted (e.g., stop that asteroid from hitting the Earth) to the very broad (e.g., create economic growth, help the poor, provide education programs for talented youth), with options like “tell powerful people about the importance of shaping the far future” in between. The limiting case of breadth might be just optimizing for proximate benefits or for speeding up development. Defenders of the astronomical waste argument tend to be on the highly targeted end of this spectrum. I think it’s a very interesting question where on this spectrum we should prefer to be, other things being equal, and it’s a topic I plan to return to in the future.

The arguments I’ve offered above favor broader strategies for shaping the far future, though they don’t settle the issue. The main reason I say this is that the best ways of creating positive trajectory changes may be very broad and general, whereas the best ways of reducing existential risk may be more narrow and specific. For example, it may be reasonable to try to assess, in detail, questions like, “What are the largest specific existential risks?” and, “What are the most effective ways of reducing those specific risks?” In contrast, it seems less promising to try to make specific guesses about how we might create smaller positive trajectory changes, because there are so many possibilities and many trajectory changes do not have significance that is predictable in advance. No one could have predicted the persistent ripple effects that Jesus’s life had, for example. In other cases—such as the framing of the US Constitution—it’s clear that a decision has trajectory change potential, but it would be hard to specify, in advance, which concrete measures should be taken. In general, it seems that the worse you are at predicting some phenomenon that is critical to your plans, the less your plans should depend on specific predictions about that phenomenon. Because of this, the most promising ways of creating positive trajectory changes may be broader than the most promising ways of trying to reduce existential risk specifically. Improving education, improving parenting, improving science, improving our political system, spreading humanitarian values, or otherwise improving our collective wisdom as stewards of the future could, I believe, create many small, unpredictable positive trajectory changes.

I do not mean to suggest that broad approaches are necessarily best, only that people interested in shaping the far future should take them more seriously than they currently do. The way I see the trade-off between highly targeted strategies and highly broad strategies is as follows. Highly targeted strategies for shaping the far future often depend on highly speculative plans, often with many steps, which are hard to execute. We often have very little sense of whether we are making valuable progress on AI risk research or geo-engineering research. On the other hand, highly broad strategies must rely on implicit assumptions about the ripple effects of doing good in more ordinary ways. It is very subtle and speculative to say how ordinary actions are related to positive trajectory changes, and estimating magnitudes seems extremely challenging. Considering these trade-offs in specific cases seems like a promising area for additional research.

Summary

In this post, I argued that:

  1. The astronomical waste argument becomes strictly more plausible if we replace the idea of minimizing existential risk with the idea of creating positive trajectory changes.

  2. There are many ways in which our actions could unpredictably affect our general development trajectory, and therefore many ways in which our actions could shape the far future for the better. This is one reason to favor broad strategies for shaping the far future.

The trajectory change perspective may have other strategic implications for people who are concerned about maximizing humanity’s long-term potential. I plan to write about these implications in the future.[i]

Comment from Nick Bostrom on this post

[What follows is an e-mail response from Nick Bostrom. He suggested that I share his comment along with the post. Note that I added a couple of small clarifications to this post (noted above) in response to Bostrom’s comment.]

One can arrive at a more probably correct principle by weakening, eventually arriving at something like ‘do what is best’ or ‘maximize expected good’. There the well-trained analytic philosopher could rest, having achieved perfect sterility. Of course, to get something fruitful, one has to look at the world, not just at our concepts.

Many trajectory changes are already encompassed within the notion of an existential catastrophe. Becoming permanently locked into some radically suboptimal state is an xrisk. The notion is more useful to the extent that likely scenarios fall relatively sharply into two distinct categories—very good ones and very bad ones. To the extent that there is a wide range of scenarios that are roughly equally plausible and that vary continuously in the degree to which the trajectory is good, the existential risk concept will be a less useful tool for thinking about our choices. One would then have to resort to a more complicated calculation. However, extinction is quite dichotomous, and there is also a thought that many sufficiently good future civilizations would over time asymptote to the optimal track.

In a more extended and careful analysis there are good reasons to consider second-order effects that are not captured by the simple concept of existential risk. Reducing the probability of negative-value outcomes is obviously important, and some parameters such as global values and coordination may admit of more-or-less continuous variation in a certain class of scenarios and might affect the value of the long-term outcome in correspondingly continuous ways. (The degree to which these complications loom large also depends on some unsettled issues in axiology; so in an all-things-considered assessment, the proper handling of normative uncertainty becomes important. In fact, creating a future civilization that can be entrusted to resolve normative uncertainty well wherever an epistemic resolution is possible, and to find widely acceptable and mutually beneficial compromises to the extent such resolution is not possible—this seems to me like a promising convergence point for action.)

It is not part of the xrisk concept or the maxipok principle that we ought to adopt some maximally direct and concrete method of reducing existential risk (such as asteroid defense): whether one best reduces xrisk through direct or indirect means is an altogether separate question.


[i] I am grateful to Nick Bostrom, Paul Christiano, Luke Muehlhauser, Vipul Naik, Carl Shulman, and Jonah Sinick for feedback on earlier drafts of this post.

Crossposted from LessWrong