Assumptions about the far future and cause priority

Abstract. This article examines the position that cause areas related to existential risk reduction, such as AI safety, should be virtually infinitely preferred to other cause areas such as global poverty. I will explore some arguments for and against this position. My first goal is to raise greater awareness of the crucial importance of a particular assumption concerning the far future, which negates the possibility of long-term exponential growth of our utility function. I will also discuss the classical decision rule based on the maximization of expected values. My second goal is to question this assumption and this decision rule. In particular, I wonder whether exponential growth could be sustained through the exploration of increasingly complex patterns of matter; and whether, when attempting to maximize the expected values of different actions, we might forget to take into account possibly large costs caused by later updates to our beliefs about the far future. While I consider the ideas presented here to be highly speculative, my hope is to elicit a more thorough analysis of the arguments underlying the case for existential risk reduction.

A fictitious conversation on cause priority

The considerations below could be put into complicated-looking mathematical models involving integrals and probability measures. I will not follow this path, and will focus instead on a handful of simple model cases. For convenience of exposition, there will be fictitious characters, each of whom holds one of these models as their belief about the far future. These models will be very simple. In my opinion, nothing of value is lost by proceeding in this way.

The characters in the fictitious story have a fantastic opportunity to do good: they are about to spend 0.1% of world GDP on whatever they want. They will debate what to do in light of their beliefs about the far future. They already agree on many things: they are at least vaguely utilitarian and concerned about the far future; they believe that exactly 100 years from now (unless they intervene), there will be a 10% chance that all sentient life suddenly goes extinct (and otherwise everything goes on just fine); outside of this event, they believe that there will be no such existential risk; finally, they believe that sentient life must come to an end in 100 billion years. We also take it that all of these beliefs are actually correct.

The characters in the story hesitate between a “growth” intervention, which they estimate would instantaneously raise their utility function by 1%,[1] and an “existential” intervention, which would reduce the probability of extinction they will face in 100 years from 10% to 9.9%.[2]

Alice believes that unless extinction occurs, our utility function always grows at a roughly constant rate, until everything stops in 100 billion years. She calculates that in her model, the growth intervention moves the utility function upwards by 1% at any point in the future. In particular, the expectation of the total utility created in the future increases by 1% if she chooses the growth intervention. With the existential intervention, she calculates that (up to a minuscule error) this expectation moves up by 0.1%. Since 1% > 0.1%, she argues for the growth intervention.
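To make Alice’s bookkeeping concrete, here is a minimal numeric sketch of her comparison. The 3%-per-year growth rate, the shortened 10,000-year horizon and the discretization are my own assumptions, chosen only so that the numbers stay finite; they are not part of the story. The sketch reproduces her two figures: roughly +1% for the growth intervention and +0.1% for the existential one.

```python
import numpy as np

g = 0.03          # assumed constant growth rate of the utility function (per year)
T = 10_000        # toy horizon in years: exponential growth over the story's
                  # 100 billion years would overflow floats, and a longer horizon
                  # only strengthens the conclusion
p_ext = 0.10      # probability that extinction occurs 100 years from now

def integrate(y, t):
    """Trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def expected_total_utility(level_boost=1.0, p_extinction=p_ext, n=200_000):
    t = np.linspace(0.0, T, n)
    u = level_boost * np.exp(g * t)                   # Alice: steady exponential growth
    survival = np.where(t < 100.0, 1.0, 1.0 - p_extinction)
    return integrate(u * survival, t)

baseline = expected_total_utility()
growth = expected_total_utility(level_boost=1.01)         # utility shifted up by 1%
existential = expected_total_utility(p_extinction=0.099)  # extinction risk 10% -> 9.9%

print(f"growth intervention:      +{growth / baseline - 1:.2%}")       # about +1.0%
print(f"existential intervention: +{existential / baseline - 1:.2%}")  # about +0.1%
```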

Bob’s view on the far future is different. He believes that the growth rate of our utility function will first accelerate, as we discover more and more technologies. In fact, it will accelerate so fast that we will quickly have discovered all discoverable technologies. We will similarly quickly figure out the best arrangement of matter to maximize our utility function locally, and all that will be left to do is colonize space and fill it with this optimal arrangement of matter. However, we are bound by the laws of physics and cannot colonize space faster than the speed of light. This implies that in the long run, our utility function cannot grow faster than t^3 (where t is time). The growth rate of this function[3] decays to zero quickly, like 1/t. So in effect, we may as well suppose that our utility function will spike up quickly, and then plateau at a value that can essentially be regarded as a constant.[4] For the existential intervention, he finds that the expected utility of the future increases by about 0.1%, in agreement with Alice’s assessment. However, he reaches a very different conclusion when evaluating the growth intervention. Indeed, in his model, the growth intervention only improves the fate of the future before the onset of the plateau, and brings this onset a bit closer to the present. In particular, it has essentially no effect on the utility function after the plateau is reached. But this is where the vast majority of the future resides. So the growth intervention will barely budge the total utility of the future. He therefore argues for the existential intervention.
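Bob’s comparison can be sketched in the same spirit, using the closed-form integral of t^3; the algebra mirrors footnote 4, and the number of years since the technological takeoff (t1) is an assumption I introduce only for illustration.

```python
t1 = 50.0                      # assumed number of years since the fast takeoff
T = 100e9                      # everything ends 100 billion years from now
p_ext = 0.10                   # extinction risk faced 100 years from now

def cubic_integral(a, b):
    """Integral of t^3 between a and b."""
    return (b ** 4 - a ** 4) / 4.0

def expected_total_utility(shift=0.0, p_extinction=p_ext):
    # with the growth intervention, utility at time t is (t + shift)^3 instead of t^3
    before = cubic_integral(t1 + shift, t1 + 100 + shift)
    after = cubic_integral(t1 + 100 + shift, T + shift)
    return before + (1 - p_extinction) * after

baseline = expected_total_utility()
s = t1 * (1.01 ** (1 / 3) - 1)             # shift giving a +1% boost today (footnote 4)
growth = expected_total_utility(shift=s)
existential = expected_total_utility(p_extinction=0.099)

print(f"growth intervention:      +{growth / baseline - 1:.1e}")       # ~7e-12, negligible
print(f"existential intervention: +{existential / baseline - 1:.2%}")  # about +0.1%
```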

Clara holds a more sophisticated model of the far future than both Alice and Bob. She acknowledges that we cannot be certain about our predictions. She holds that there is a range of different possible scenarios for the far future, to which she assigns certain probabilities. In fact, she puts a weight of 50% on Alice’s model, and a weight of 50% on Bob’s model. Her calculations will depend crucially on comparing the expected value of the total utility of the future under each model. She considers that the growth in utility in Alice’s model is slower than in Bob’s, so much so that the plateau that appears in Bob’s model is never within sight in Alice’s model. She thus concludes that the far future has much greater utility in Bob’s model. Or else, she reasons that Alice must have failed to properly take into account the slowdown appearing in Bob’s model. In any case, she joins Bob in arguing for the existential intervention.
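The structure of Clara’s reasoning can be seen in a toy weighted-expectation calculation. The baseline utilities and the per-model gains below are assumptions (the gains are taken from the two sketches above); only the orders of magnitude matter: once the future is worth vastly more under Bob’s model, his branch dominates the expectation.

```python
weight = {"alice": 0.5, "bob": 0.5}          # Clara's credences, from the story

# assumed baseline expected utilities of the future under each model (arbitrary units);
# Clara judges the future under Bob's model to be worth far more than under Alice's
baseline = {"alice": 1.0, "bob": 1e6}

# relative gains of each intervention under each model, from the sketches above
gain = {
    "growth":      {"alice": 0.01,  "bob": 7e-12},
    "existential": {"alice": 0.001, "bob": 0.001},
}

for action in ("growth", "existential"):
    expected_gain = sum(weight[m] * baseline[m] * gain[action][m] for m in weight)
    print(f"{action}: weighted expected gain = {expected_gain:.4g}")
# the existential intervention wins as soon as Bob's baseline dwarfs Alice's
```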

In the next sections, we dig deeper into some of the arguments appearing in the preceding discussion.

A closer look at Bob’s arguments (no exponential growth)

As far as I can tell, some version of Bob’s view that our utility function ultimately reaches a plateau (or grows no faster than t^3) is the more typical view among EA people who have thought about the problem.[5] I will now focus on examining this point.

This view relies on the assumption that we[6] will quickly discover the essentially optimal way to organize matter in the region of space that we occupy. Once this is done, all that is left to do is to expand in space and reproduce this optimal arrangement of matter over and over again.

It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.

The number of atoms on our planet is already so large that it is many orders of magnitude beyond our intuitive grasp (in comparison, 100 billion years almost feels like something you can touch). So I am not sure what to say to give a sense of scale for 2^(10^50); but let me give it a try. We can write down the number of atoms on Earth as a one followed by 50 zeros. If we tried to write down 2^(10^50) similarly, we would basically have to write a one followed by as many zeros as a third of the number of atoms on Earth.
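The “a third of the number of atoms on Earth” claim is just the observation that 2^N has about N × log10(2) ≈ 0.3 N decimal digits; a two-line check:

```python
import math

atoms_on_earth = 10 ** 50
digits = atoms_on_earth * math.log10(2)   # ≈ 3.0e49, about a third of 10^50
print(f"2^(10^50) has about {digits:.2e} decimal digits")
```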

Let me also stress that 2^(10^50) is in fact a very pessimistic lower bound on the number of different patterns that we can explore. Atoms are not all the same. They are made up of smaller parts that we can split up and play with separately. We are not restricted to using only the atoms on Earth, and we can move them over greater distances. Also, I do not see why the optimizer of our utility function should be constant in time.[7] In comparison with the potential number of patterns accessible to us, a time scale of 100 billion years is really, really, REALLY ridiculously short.

In order to rescue Bob’s argument, it seems necessary to make the case that, although the space of possible patterns is indeed huge, exploring this space has only a very limited impact on the growth of our utility function. I find it very difficult to decide whether this is true or not. One has to answer questions such as: (1) How rapidly are we capable of exploring this space of patterns? (2) Should we expect our speed of exploration to increase over time? If so, by how much? (3) How does our utility function increase as we keep improving the quality of the patterns we discover?

I do not know how to answer these questions. But perhaps it will help to broaden our imagination if I suggest a simple mental image of what it could look like for a civilization to be mostly busy exploring the space of patterns available to it. Possibly, our future selves will find that the greatest good will be achieved by preparing for and then realizing a coordinated dance performance of cosmic dimension, spanning a region greater than the solar system and lasting millions of years. While they will not completely disregard space colonization, they will find greater value in optimizing the choreography of their cosmic dance, preparing for the success of their performance, and then realizing it.[8] Generalizing from what I aim to capture with this example, I find it plausible that highly advanced sentient beings will be very interested in extremely refined and intricate doings, comparable to art forms, which we cannot even begin to imagine, but which will score particularly highly on their utility function.

A somewhat Bayesian objection to the idea I am defending here could be: if indeed Bob’s view is invalid, then how come the point defended here has not already become more commonplace within the EA community? This is more tangential and speculative, so I will push a tentative answer to this question into a long footnote.[9]

A closer look at Alice’s point of view (exponential growth)

Aside from Bob’s and Clara’s objections, another type of argument that can be raised against Alice’s view is that, somewhat implicitly, it may conflate the utility function with something that at least vaguely looks like world GDP; and that in truth, if there were a simple relationship between the utility function and world GDP, it would rather be that our utility function is the logarithm of world GDP.

This argument would cast very serious doubt on Alice’s belief that our utility function can grow at a steady rate over long periods of time. Under a default scenario where GDP growth is constant, it would mean that our utility function only grows by some fixed amount per unit of time.

It is difficult to argue about the relationship between a state of the world and what the value of our utility function should be. I will only point out that the argument is very fragile to the precise functional relation we postulate between our utility function and world GDP. Indeed, if we decide that our utility function is some (possibly small) power of world GDP, instead of its logarithm, then a steady growth rate of GDP does again imply a steady growth rate of our utility function (as opposed to adding a constant amount per unit of time). If there were a relationship between our utility function and world GDP, I do not know how I would go about deciding whether our utility function looks more like log(GDP) or more like, say, (GDP)^0.1. If anything, postulating that our utility function looks like (GDP)^x for some exponent x between 0 and 1 gives us more freedom for adjustment between reality and our model of it. I feel that it would also work better under certain circumstances; for instance, if we duplicated our world and created an identical copy of it, I would find it bizarre if our utility function only increased by a constant amount, and more reasonable if it were multiplied by some factor.[10]
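To see how much hinges on this choice, here is a small sketch in which GDP grows at an assumed steady 3% per year (so GDP(t) = exp(g·t), normalized to 1 today), and the exponent 0.1 is likewise an arbitrary illustration: log(GDP) gains a constant amount per year, so its growth rate decays like 1/t, while (GDP)^0.1 keeps a constant growth rate of 0.1·g.

```python
g = 0.03                             # assumed steady GDP growth rate per year
for t in (10, 100, 1000, 10_000):
    rate_log = g / (g * t)           # growth rate of log(GDP) = g*t is g/(g*t) = 1/t
    rate_pow = 0.1 * g               # growth rate of GDP^0.1 = exp(0.1*g*t) is 0.1*g
    print(f"t = {t:>6} yr:  log(GDP) rate = {rate_log:.4%},  GDP^0.1 rate = {rate_pow:.2%}")
```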

Finally, I want to point out that Alice’s view is not at all sitting at the tail end of some spectrum of possible models. Indeed, I see no obstacle in the laws of physics to the idea that the growth rate of our utility function will not only remain positive, but will in fact continue to increase without bound for the next 100 billion years. After all, the space of possible patterns of matter we can potentially explore between time t and time 2t grows like exp(t^4),[11] and the growth rate of this function does go up to infinity. If one takes the position that the growth rate of our utility function can increase without bound, then one is led to the conclusion that growth interventions are always to be preferred over existential interventions.
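For contrast with Bob’s scenario, a minimal check of the growth rates involved (taking the rough exp(t^4) estimate of footnote 11 as an assumption):

```python
for t in (10.0, 100.0, 1000.0):
    rate_patterns = 4 * t ** 3      # d/dt log(exp(t^4)) = 4*t^3, increasing without bound
    rate_cubic = 3 / t              # d/dt log(t^3) = 3/t, decaying to zero
    print(f"t = {t:>6.0f}:  exp(t^4) growth rate = {rate_patterns:,.0f},  t^3 growth rate = {rate_cubic:.4f}")
```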

A closer look at Clara’s arguments (expectation maximization)

The reasoning that leads Clara to side with Bob’s conclusion is, in my opinion, very fragile.[12] Although I do not think it is necessary to do so, I find it clearest to explain this by supposing that Alice revises her model and says: “In fact I don’t know how our utility function will grow in the far future. The only thing I am certain about is that the growth rate of our utility function will always be at least 3% per year (outside of the possibility of extinction and of the interventions we make).” This is a weaker assumption about the future than her original model,[13] so it should only increase the weight Clara is willing to put on Alice’s model. But with this formulation, whatever Bob’s prediction for the future is, Alice can say that maybe Bob is right up until the moment when he predicts a growth rate below 3%, but from then on she insists on being more optimistic and keeps the growth rate at (or above) 3%. In this way, Alice’s updated model is guaranteed to yield a higher expected utility than Bob’s. Roughly speaking, Clara’s procedure essentially consists in selecting the most optimistic model around.[14]
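A toy calculation makes the dominance explicit. The decaying forecast attributed to Bob below, the starting point, and the shortened horizon are all assumptions chosen for illustration; the point is only that flooring Bob’s growth rate at 3% yields a utility path, and hence a total, that dominates his.

```python
import numpy as np

years = np.arange(0, 2_000)                # shortened horizon so floats stay finite
rate_bob = 3.0 / (50.0 + years)            # a Bob-like forecast: growth rate decays like 1/t
rate_alice = np.maximum(rate_bob, 0.03)    # Alice's revision: never below 3% per year

u_bob = np.exp(np.cumsum(rate_bob))        # utility paths implied by each rate forecast
u_alice = np.exp(np.cumsum(rate_alice))

print(f"total future utility, Bob's forecast:    {u_bob.sum():.3g}")
print(f"total future utility, Alice's revision:  {u_alice.sum():.3g}")   # vastly larger
# an expected-value comparison across the two models is then driven by Alice's branch
```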

I suppose that a typical line of defense of the expectation-maximization procedure has something to do with the idea that it is, in some sense, “provably” best; in other words, that there is some mathematical reasoning justifying its superiority. I want to challenge this view here with two counter-arguments.[15]

First, the classical argument for expectation maximization relies on the law of large numbers. This law deals with a series of relatively independent variables which we then sum up. It asserts that, in situations where each term contributes little to the overall sum, the total sum becomes concentrated around the sum of the expected values of each contribution, with comparatively small fluctuations. In such situations, it therefore makes sense to maximize the expected value of each of our actions. But, for all I know, there is only one universe in which we are taking bets on the long-term future.[16] If, say, we never update our bet for the long-term future, then there will be no averaging taking place. In such a circumstance, maximizing expected values seems to me rather arbitrary, and I would see no contradiction if someone decided to optimize for some different quantity.
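To illustrate the kind of averaging this argument relies on, here is a quick simulation (the payoff distribution is an arbitrary assumption): a budget spread over many independent bets lands close to its total expected value, while the same stake on a single bet does not concentrate at all.

```python
import numpy as np

rng = np.random.default_rng(0)
draw = lambda size: rng.choice([0.0, 10.0], p=[0.9, 0.1], size=size)   # expected value 1 per unit staked

many_small = draw((5_000, 1_000)).sum(axis=1)   # 1,000 independent unit bets, 5,000 trials
one_big = 1_000 * draw(5_000)                   # the whole stake on a single bet, 5,000 trials

print(f"1,000 small bets: mean {many_small.mean():.0f}, std {many_small.std():.0f}")   # ~1000, ~95
print(f"one big bet:      mean {one_big.mean():.0f}, std {one_big.std():.0f}")         # ~1000, ~3000
```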

My most important objection to Clara’s reasoning is that, in my opinion, it fails to take into account certain effects which I will call “switching costs”.[17] Although I explored the opposite hypothesis in the previous paragraph, I find it more likely that we will regularly update our predictions about the far future. And, in view of the scale of the uncertainties, I expect that our beliefs about it will actually mostly look like random noise.[18] Finally, I expect that the standard expectation-maximization prescription will be all-or-nothing: only do growth interventions, or only do existential interventions. It seems to me that Clara’s calculation is too short-sighted, and fails to take into account the cost associated with revising our opinion in the future. To illustrate this, suppose that Alice, Bob and Clara run a charitable organization called ABCPhil which invests 0.1% of world GDP each year to do the maximal amount of good. Imagine that for the next 10 years, ABCPhil only finances existential interventions; then it suddenly switches to only financing growth interventions for the following 10 years; and so on, switching completely every 10 years. Now compare this with the scenario where ABCPhil finances both equally all the time. While this comparison is not straightforward, I would be inclined to believe that the second scenario is superior. In any case, my point here is that Clara’s reasoning, as stated, simply ignores this question, and this may be an important problem.
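For what it is worth, here is a deliberately crude toy model of the ABCPhil comparison. Every number in it is an assumption (a fixed yearly value of 1 for a well-spent budget, and a 30% loss of that year’s value whenever the allocation changes course); it also deliberately sets aside which allocation is better in any given year, and isolates only the penalty for changing direction, which is precisely the term missing from Clara’s calculation.

```python
def total_value(allocations, switch_penalty=0.3):
    """Sum yearly value; each change of allocation destroys part of that year's value."""
    value, previous = 0.0, allocations[0]
    for a in allocations:
        yearly = 1.0                      # assumed value of one well-spent yearly budget
        if a != previous:                 # crude stand-in for lost expertise, people leaving, etc.
            yearly *= 1.0 - switch_penalty
        value += yearly
        previous = a
    return value

years = 40
all_or_nothing = ["existential" if (y // 10) % 2 == 0 else "growth" for y in range(years)]
steady_split = ["50/50"] * years

print(f"all-or-nothing, flipping every 10 years: {total_value(all_or_nothing):.1f}")   # 39.1
print(f"steady 50/50 split:                      {total_value(steady_split):.1f}")     # 40.0
```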

Tentative guidelines for concrete decision-making

In view of the discussion in the previous section, and in particular of the possible problem of “switching costs”, I do not believe that there can be a simple recipe that Clara could just apply to discover the optimal decision to take. (We will take Clara’s point of view here, since she more reasonably acknowledges her uncertainty about her model of the future.) The best that can be done is to indicate a few guidelines for decision-making, which then need to be complemented by some amount of “good judgment”.

I find it most useful to think in terms of “reference points”: a small set of possible decisions, each of which is the result of a certain type of thinking. Once these reference points are identified, “good judgment” can then weigh in and bend the final decision more or less toward one reference point or another, depending on rough guesses as to the magnitude of the effects that are not well captured under each perspective.

One such reference point is the one resulting from the maximization of expected values (which is what Clara was doing in the original story). A second reference point, which I will call the “hedging” decision rule, is as follows. First, Clara calculates the best action under each model of the future; in fact, Alice and Bob have already done this calculation for her. Then, she aggregates the decisions according to the likelihood she places on each model. In other words, she gives money to Alice and Bob in proportion to how much she believes each is right, and then lets them do what they think is best.[19]
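The two reference points can be written down in a few lines (a minimal sketch; the credences and each model’s preferred action are taken from the story, and the expected gains fed to the maximizer are the toy numbers from the earlier mixture sketch):

```python
credence = {"alice": 0.5, "bob": 0.5}                     # Clara's weights on each model
best_action = {"alice": "growth", "bob": "existential"}   # each model's own preferred action

def hedging_allocation(budget=1.0):
    """Split the budget across models by credence; each branch funds its preferred action."""
    allocation = {}
    for model, weight in credence.items():
        action = best_action[model]
        allocation[action] = allocation.get(action, 0.0) + weight * budget
    return allocation

def expectation_maximizing_allocation(expected_gain, budget=1.0):
    """Put the whole budget on the single action with the highest expected gain."""
    best = max(expected_gain, key=expected_gain.get)
    return {best: budget}

print("hedging:         ", hedging_allocation())
print("expectation max.:", expectation_maximizing_allocation({"growth": 0.005, "existential": 500.0}))
```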

I want to stress again that I do not claim the hedging decision rule to be always superior to expectation maximization. However, it is also true that, in circumstances in which we believe that the switching costs are large, expectation maximization[20] will lead to conclusions that are inferior to those derived from the hedging decision rule.

The hedging decision rule is designed to be more robust to switching costs. The task of “good judgment” is then to evaluate whether these costs (and possibly other considerations) are likely to be significant. If not, then one should deviate only very little from expectation maximization. If yes, then one should be more inclined to favor the hedging decision rule.

It is interesting to notice that it is only under Alice’s assumptions that one needs to look at the actual efficiency of each intervention, and that one can come up with a concrete rule for comparing them which is not all-or-nothing.[21] While this has limitations, I find it very useful to have a concrete rule of thumb for comparing the efficiency of different interventions. In a final round of adjustment of Clara’s decisions, I believe that this additional information should also be taken into account. The extent of this final adjustment is again left to Clara’s “good judgment”.[22]
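For reference, the rule of thumb recalled in footnote 21 is short enough to state as code (it is only meaningful under Alice-style assumptions, and the inputs below are the story’s illustrative numbers):

```python
def preferred_intervention(x_percent_growth, y_point_risk_reduction):
    """Footnote 21's rule: growth wins when x > y, under Alice-style assumptions."""
    return "growth" if x_percent_growth > y_point_risk_reduction else "existential"

print(preferred_intervention(1.0, 0.1))    # the story's numbers: growth wins under Alice's model
print(preferred_intervention(0.05, 0.1))   # a weaker growth option: existential wins
```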

How I came to write this

In this section, I want to explain what led me to write the present article. It comes from my attempt to understand the career recommendations given on the 80k website. The advice given there changed recently. In my opinion, the recent version strongly suggests that one should give much higher priority to careers related to existential risk reduction than to careers related to, say, improving health in poor countries.[23]

Before spreading the word, I wanted to make sure that I understood and agreed with it. The present article is a summary of my best effort, and my conclusion is that I still don’t understand it.[24]

In a nutshell, I am worried that switching costs have not been estimated properly. This may be because people at 80k feel more certain than I am about the trajectory of the far future, or because they think that switching costs are not very high. I have already discussed my opinion on the trajectory of the far future at length, so I will now focus only on the sort of switching costs I am worried about.

Suppose that I am very interested in improving health in poor countries, and that I am not all that convinced by relatively convoluted arguments about what will happen to sentient life in billions of years. Even if everyone in the EA community has the best intentions, I would personally find it depressing to be surrounded by people who think that what I intend to do is of negligible importance. I would also feel pressure to switch to topics such as AI safety, an extremely competitive field requiring a lot of expertise. I think I would be very likely to simply leave the group.

Imagine now that in 10 years, someone comes up with a great argument which suddenly convinces the EA community that growth interventions are actually vastly superior to existential interventions. If most people interested in growth interventions have left the group, it will be extremely difficult for the EA community to bear the transition. And next, at least some people working on AI safety would consider that the cost of switching is too high, and that working on AI safety still kind of makes sense. As time passes, and supposing that the EA community has managed to transition to growth interventions without simply disintegrating, people working on AI safety would grow tired of being reminded that their work is of negligible importance, and would tend to leave the group. Up until the next switch of opinion.

Notice also that in the fictitious scenario outlined above, it will in fact be quite difficult for the “great argument” to emerge from the EA community, and then also very hard for it to become known and acknowledged, since people with different interests will no longer interact. And I am not even discussing possible synergistic effects between EA people working on different cause areas, which in my opinion can also be very significant.

Conclusion

This article examined the view that interventions aiming to reduce existential risk are virtually infinitely superior to those that aim to accelerate growth.

In my understanding, this view relies crucially on the assumption that the utility of the future cannot grow exponentially in the long term, and will instead essentially reach a plateau. While I do not intend to rule out this possibility, I have tried to explain why I personally find the alternative possibility of sustained exponential growth at least plausible.

One attempt to aggregate these different predictions about the far future is to compute the expected value of different interventions, taking our uncertainty about the far future into account. In my opinion, this approach has important limitations, in particular because it ignores certain “switching costs”.

The present article is a summary of my attempt to understand some of the ideas which I consider central to the EA movement. I suppose that the people working full-time on the problems discussed here have a much deeper understanding of the issues at stake, and a much finer position than the one I have outlined here. I hope that this article will signal that it may currently be very difficult to reverse-engineer what this finer position is. If nothing else, this article can thus be taken as a request for clarification.

Acknowledgements. I would like to warmly thank the members of the French EA community, and in particular Laura Green and Lennart Stern, for their support and very useful feedback.


  1. The point here is not about wondering if this number is reasonable. Rather, it is about seeing how this number enters (or does not enter) the decision process. But, to give some substance to it, if we very crudely conflate our utility function with world GDP, then I think it is reasonable to place a return of at least a factor of 10 on some of the better growth investments.

  2. Again I want to stress that the point here is not to debate these numbers. (I was told that a decrease of the extinction risk by 0.1 percentage points for an investment of 0.1% of world GDP was reasonable, but found it difficult to find references; I would appreciate comments pointing to relevant references.)

  3. The growth rate measures the instantaneous increase of the function, in proportion to the size of the function. In formulas, if y(t) is the function, then the growth rate is y’(t)/y(t) (this is also the derivative of log(y(t))).

  4. If you feel uncomfortable with the idea that the function t^3 looks like a constant, let me stress that all the reasoning here is based on rates of growth. So if we were plotting these curves, it would be much more informative to draw the logarithm of these functions. And, really, log(t^3) does look like a constant when t is large. To be more precise, one can check that Bob’s conclusions will hold as long as the growth rate falls to essentially zero sufficiently quickly compared with the 100 billion year time scale, so that we can conflate any such scenario with a “plateau” scenario. In formulas, denote the present time by t1 and the final time by T = t1 + 100 billion years. If we postulate that our utility function at time t is t^3, then the total utility of the future is the integral of this function for t ranging between t1 and T. The growth intervention allows us to replace this function by (t+s)^3, where s is such that (t1 + s)^3/t1^3 = 1.01. When we calculate the total utility of the future for this function, we find that it amounts to integrating the function t^3 for t varying between t1 + s and T + s. The utility gain caused by the intervention is thus essentially the integral between T and T + s of the function t^3 (the discrepancy near t1 is comparatively very small). This is approximately s T^3, which is extremely small compared with the total integral, which is of the order of T^4 (the ratio is of the order of s/T, which essentially compares the speedup brought by the intervention with the time scale of 100 billion years).

  5. This was my experience when talking to people, and has been confirmed by my searching through the literature. In particular, this 80k article attempts to survey the views of the community (to be precise, “mostly people associated with CEA, MIRI, FHI, GCRI, and related organisations”), and states that although a growth intervention “looks like it may have a lasting speedup effect on the entire future”, “the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” Nick Bostrom argues in more detail for a plateau in The future of humanity (2007), and in footnote 20 of his book Superintelligence. The “plateau” view is somewhat implicit in Astronomical waste (2003), as well as in the concept of “technological maturity” in Existential risk as global priority (2013). This view was perhaps best summarized by Holden Karnofsky here, where he says: “we’ve encountered numerous people who argue that charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.” (See however here for an update on Karnofsky’s ideas.)

  6. I use the word “we” but I do not mean to imply that sentient beings in the far future will necessarily look like the present-day “we”.

  7. To take a trivial example, I clearly prefer watching a movie from beginning to end to staring at a given frame for two hours. So my utility function is not simply a function of the situation at a given time which I then integrate. Rather, it takes into account the whole trajectory of what is happening as time flows.

  8. As another illustration, Elon Musk is surprised that space colonization is not receiving more attention, while many others counter that there is already a lot of useful work to be done here on Earth. I am suggesting here that this sort of situation may remain broadly unchanged even for extremely advanced civilizations. Incidentally, this would go some way toward mitigating Fermi’s paradox: maybe other advanced civilizations have not come to visit us because they are mostly busy optimizing their surrounding environment, and don’t care all that much about colonizing space.

  9. For one, I wonder if my cultural background in continental Europe makes me more likely to defend the point of view expressed here. It seems to me that the opposite view is more aligned with a rather “atomist” view of the ideal society as a collection of relatively small and mostly self-reliant agents, and that this sort of view is more popular in the US (and in particular in its tech community) than elsewhere. It also blends better with the hypothesis of a coming singularity. On a different note, we must acknowledge that the EA community is disproportionately made up of highly intellectualizing people who find computer science, mathematics or philosophy to be very enjoyable activities. I bet it will not surprise anyone reading these lines if I say that I do math research for a living. And, well, I feel like I am in a better position to contribute if the best thing we can do is to work on AI safety than if it is to distribute bed nets. In other words, the excellent alignment between many EAs’ interests and the AI safety problem is a warning sign, and suggests that we should be particularly vigilant that we are not fooling ourselves. This being said, I certainly do not mean to imply that EAs have a conscious bias; in fact I believe that the EA community is trying much harder than is typical to be free of biases. But a lot of our thinking processes happen unconsciously, and, for instance, if there is an idea around that looks reasonably well thought-of and whose conclusion I feel really happy with, then my subconscious thinking will not think as hard about whether there is a flaw in the argument as it would if I were strongly displeased with the idea. Or perhaps it will not bring it as forcefully to my conscious self. Or perhaps some vague version of a doubt will reach my conscious self, but I will not be very willing to play with this vague doubt until it can mature into something sufficiently strong to be communicable and credible. Other considerations related to the sociology of our community appear in Ben Garfinkel’s talk How sure are we about this AI stuff? (EAG 2018). A separate problem is that questioning an important idea of the EA movement requires, in addition to a fairly extensive knowledge of the community and its current thinking, at least some willingness to engage with a variety of ideas in mathematics, computer science, philosophy, physics, economics, etc., placing the “barrier to entry” quite high.

  10. I suppose that total utilitarians would want the utility function to be doubled in such a circumstance (which would force the exponent x to be 1). My personal intuition is more confused, and I would not object to a model in which the total utility is multiplied by some number between 1 and 2. (Maybe I feel that there should be some sort of premium for originality, so that creating an identical copy of something is not quite twice as good as having only one version of it; or maybe I’m just willing to accept that we are just looking for a simple workable model, which will necessarily be imperfect.)

  11. To be precise, this estimate implicitly postulates that we find matter to play with roughly in proportion to the volume of space we occupy, but in reality matter in space is distributed rather more sparsely. For a more accurate estimate, we can consider the effective dimension d of the distribution of matter around us, so that the amount of matter within distance r from us grows roughly like r^d. The estimate in the main text should then be exp(t^(1+d)), and this number d is in fact most likely below 3. However, I think any reasonable estimate of d will suggest that it is positive, and this suffices to ensure the validity of the conclusion that a growth rate that increases without bound is possible.

  12. I do not mean to imply that Clara’s view is typical or even common within the EA community (I just don’t know). I mostly want to use this character as an excuse to discuss expectation maximization.

  13. To be precise, Alice had not specified the constant growth rate she had in mind; here I take it that her original claim was “our utility function will always grow at a rate of about 3% per year”.

  14. For the record, Bostrom wonders in The vulnerable world hypothesis (2018) whether there would be ways, “using physics we don’t currently understand well, to initiate fast-growing processes of value creation (such as by creating an exponential cascade of baby-universes whose inhabitants would be overwhelmingly happy)”. I wonder how a hard-core expectation maximizer would deal with this possibility.

  15. A notable critique of expectation maximization by Holden Karnofsky, somewhat different from what is discussed in this text, can be found here. (See however here for an update on Karnofsky’s ideas.)

  16. I wonder if people will counter with appeals to quantum mechanics and multiple universes. At any rate, if this is the sort of reasoning that underpins our decisions, then I would like it to be made explicit.

  17. Related considerations have been discussed here.

  18. I’ll make the optimistic assumption that we are not biased in some direction.

  19. My understanding is that this is in the ballpark of how OpenPhil operates.

  20. At least in the “short-sighted” form expressed by Clara in the story.

  21. Let me recall this decision rule here: if the growth intervention causes a growth of our utility function of x%, and the existential intervention reduces the probability of extinction by y percentage points, then we choose the growth intervention when x > y, and the existential intervention otherwise.

  22. Strictly speaking, I think that we could actually write down complicated models encoding the strength of the switching costs etc., and identify explicit formulas that take every aspect discussed so far into consideration. But I am skeptical that we are in a good position to evaluate all the parameters that would enter such complicated models, and I prefer to just push the difficulties into “good judgment”. I would personally give more credibility to someone telling me that they have made some “good judgment” adjustments and explaining to me in words the rough considerations they put into them, than to someone telling me that they have written down very complicated models, estimated all the relevant parameters and then applied the formula.

  23. I think this is most strongly expressed in this article, where it is stated that “Speeding up events in society this year looks like it may have a lasting speedup effect on the entire future—it might make all of the future events happen slightly earlier than they otherwise would have. In some sense this changes the character of the far future, although the current consensus is that this doesn’t do much to change the value of the future. Shifting everything forward in time is essentially morally neutral.” The key ideas article states that interventions such as improving health in poor countries “seem especially promising if you don’t think people can or should focus on the long-term effects of their actions”. This implicitly conveys that if you think that you can and should focus on long-term effects, then you should not aim to work in such areas.

  24. See also here for similar concerns.