‘Existential Risk and Growth’ Deep Dive #2 - A Critical Look at Model Conclusions

Summary

This is the second post in a three-part series summarising, critiquing, and suggesting variants and extensions of Leopold Aschenbrenner's paper Existential Risk and Growth. The series is written by me and Alex Holness-Tofts, with input from Phil Trammell (any errors in this post are due to me). Alex summarised the model and results from the paper in the first post.

The main purpose of this post is to help the reader understand how seriously they should take the conclusions from the paper, and to highlight some conclusions that we think are robust.

Key points

  • Some common objections to the model are weak, but we think that there are some (perhaps less common) strong objections.

  • Because of this, we don't think that the paper should cause you to change your views much regarding most of the implications of the model found in the paper. Replacing the simplifying assumptions that the model makes with other, equally reasonable ones can easily mean that the conclusions presented in the paper no longer follow.

  • We draw some more limited conclusions that we think are robust. These are: i) making people better off makes them care more about preserving their future compared to getting more consumption today, increasing the amount that society would optimally spend on existential risk reduction; ii) if you think the sign of the impact of economic growth on existential risk won't change over time, you're not sure whether the sign is positive or negative, and you think that economic growth is inevitable, you might as well speed up economic growth; iii) it's not the case that moving from a "rule of thumb" allocation, where spending on existential risk reduction stays constant over time, to an allocation that varies optimally over time never makes much difference to existential risk; and iv) you don't have to believe a suspiciously long or detailed list of ad hoc claims about past and future technological developments in order to conclude that world history will include a 'time of perils' - the conclusion follows from the relatively simple model presented in the paper, for plausible model parameters.

  • We think work in this area, i.e. using theoretical models to investigate the connection between economic growth and existential risk, is important. This is because i) any weak evidence that this work might generate is still valuable, ii) similarly, weaker but more robust conclusions that this work might point to are valuable, and iii) there might be value in the signal that serious attempts at this work send to academics and philanthropists.

In­tro­duc­tion and pre­limi­nary comments

In this post we

  • List the major conclusions that follow from the model described in the Existential Risk and Growth paper, and categorise them by robustness. Note that this includes some interesting conclusions that are not spelled out in the paper.

  • List some common (and less common) objections to the model, respond to them, and identify which are strong and which are weak.

  • Briefly make the case for doing work of this kind, i.e. using theoretical models to investigate the link between economic growth and existential risk.

The primary audience we have in mind is people who are interested in knowing what conclusions can be drawn from the paper, and how robust those conclusions are. The post is written such that, hopefully, non-economists with a range of technical knowledge can understand it.

I wrote this post in consultation with Phil Trammell and Alex Holness-Tofts. "We" in this post refers to our collective judgement.

I should mention that Alex and I don't have economics backgrounds, although Phil does; in addition, Phil supervised the writing of the paper. Ideally, someone who is an expert in both economic growth theory and existential risk would do a really deep analysis of the model presented in the paper, but in the absence of this we feel that giving our thoughts is useful.

Note that we are aware of a few minor errors in the current draft of the paper. It seems possible that fixing these will change one or more of the conclusions that follow from the model assumptions. However, for the purposes of this post we'll assume that this isn't the case, and address the model conclusions as currently given in the paper (alongside conclusions that aren't explicitly mentioned in the paper).

Finally, while we spend a lot of this post emphasising the limitations of the model, we want to stress that we think the work done in the paper is valuable, and that further work in this area has the potential to be valuable. We discuss our reasons for thinking this in the last section of the post.

Conclusions that depend on contentious model assumptions

Some of the following conclusions depend on the chosen values of the model parameters 𝜀, 𝛽, and 𝛾. To get a feel for these parameters, note that 𝜀 controls how strongly the total production of consumption goods influences existential risk, while 𝛽 controls how strongly the total production of safety goods influences existential risk. For both parameters, a bigger value means a bigger effect. 𝛾 is the constant in the agent's isoelastic utility function. Loosely speaking, 𝛾 controls how sharply the utility gain from an extra bit of consumption falls off as consumption increases. A bigger 𝛾 means a sharper fall-off. We discuss the role of 𝜀 and 𝛽 in more detail in the next subsection. (The first post in the series gives much more detail and context regarding these parameters.)
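For reference, an isoelastic utility function has the standard textbook form (we state the generic version here; the paper's exact normalisation may add constants):

```latex
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad \gamma > 0,\ \gamma \neq 1,
```

so that marginal utility is u′(c) = c^(−𝛾): the larger 𝛾 is, the faster the value of an extra unit of consumption falls off as c grows.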

The conclusions we'll discuss in this section, and which depend on contentious model assumptions, are:

  1. The scale effect (𝜀 vs 𝛽) is centrally important for determining whether we will avert existential catastrophe

    • If we live in a world where 𝜀 > 𝛽 and 𝛾 is small, or where 𝜀 >> 𝛽, existential catastrophe is inevitable

    • Otherwise, i) there's a chance that existential catastrophe is averted, and ii) there is an existential risk Kuznets curve, which means that history will contain a 'time of perils'. Note that this conclusion is highly relevant to the case for existential risk reduction, which many people in the Effective Altruism community think is a highly valuable cause area.

  2. Empirically, 𝜀 < 𝛽 is unlikely, while 𝜀 > 𝛽 and 𝜀 >> 𝛽 both seem plausible

  3. Unless we're confident that 𝜀 >> 𝛽, we might as well act on the hope that 𝜀 > 𝛽; in that world we might be able to unlock an astronomically valuable future, whereas our future will necessarily be curtailed if 𝜀 >> 𝛽.

A reasonable question you might ask is "how much should the work in this paper move me towards believing these conclusions?".

A starting point is that if you believe the assumptions the model makes are an accurate reflection of the world, you should take the conclusions seriously.

Given our own beliefs about the assumptions, we think that these conclusions are not very robust. We think that the work in the paper should move you a bit, but not much, towards believing these conclusions, assuming that you were previously ambivalent towards them.

Alternative assumptions that lead to different conclusions

A key assumption is that the hazard rate is determined by the equation δₜ = δ̄·Cₜ^𝜀 / Sₜ^𝛽, where δₜ is the hazard rate at time t, Cₜ is total consumption production at time t, Sₜ is total safety production at time t, and 𝜀, 𝛽 and δ̄ are constant model parameters.

This is a vast simplification compared to the complex ways we might imagine the state of the world influencing existential risk in reality. To be clear, this isn't to say that it might not turn out to be roughly right - we just don't have much empirical evidence here.

Because of this equation, in the model the values that we choose for 𝜀 and 𝛽 are crucially important for the prospects of humanity's long-run survival, even assuming that people coordinate perfectly and don't discount future value.
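To make this concrete, here is a crude numerical sketch (our own illustration, with made-up parameter values rather than anything from the paper) of why the relative sizes of 𝜀 and 𝛽 matter so much when the split between consumption and safety production is held fixed:

```python
import math

def survival_probability(eps, beta, delta_bar=0.001, g=0.02,
                         safety_share=0.1, years=2000):
    """Probability of avoiding catastrophe over `years`, assuming total
    output grows exponentially at rate g, a fixed share goes to safety,
    and the hazard rate is delta_bar * C_t**eps / S_t**beta."""
    log_survival = 0.0
    for t in range(years):
        output = math.exp(g * t)
        c = (1 - safety_share) * output  # total consumption production C_t
        s = safety_share * output        # total safety production S_t
        hazard = delta_bar * c**eps / s**beta
        log_survival -= hazard           # survival = exp(-sum of hazards)
    return math.exp(log_survival)

# beta > eps: safety production outpaces the risk from consumption, the
# hazard rate decays over time, and some survival probability is retained.
print(survival_probability(eps=1.0, beta=1.5))

# eps > beta: under this fixed allocation the hazard rate grows without
# bound and survival probability is driven towards zero.
print(survival_probability(eps=1.5, beta=1.0))
```

Whether society can escape the second case by reallocating labour towards safety over time is exactly what the optimal-allocation analysis in the paper is about; the sketch only illustrates why the 𝜀 vs 𝛽 comparison is so decisive.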

There are some quite plausible ways that the real world might break this assumed equation for the hazard rate, such that the conclusions listed above no longer follow:

  • 𝜀 and 𝛽 might change over time, or as technological sophistication changes

    • For example, let's assume that the model is completely correct, except that 𝜀 and 𝛽 can change over time. Let's also assume that the safety technologies we have now are not very potent compared to future ones, such that 𝜀 >> 𝛽 now, but later on we will have 𝛽 > 𝜀. This seems perfectly plausible (for example, maybe mature, well-functioning AGI technology will greatly increase safety). Then, if we believe the model (including the part that says that 𝜀 and 𝛽 are constant), and we have what we believe is perfect knowledge of 𝜀 and 𝛽 (by observing their current-day values), we'll conclude that we're in the regime where existential catastrophe is inevitable, whereas in fact it isn't.

    • This highlights a difficulty with doing an analysis that leads to a conclusion about likely real-world values of 𝜀 and 𝛽 (such as conclusion 2): even if we're sure that the hazard rate is modelled well by the hazard rate equation, it's not clear that the values we estimate using historical data will apply in the future.

  • Some present-day consumption technologies are (presumably) significantly safer than others, and in practice we might be able to shift production to these safer technologies. In the language of the model, you could think of this as society choosing to manipulate 𝜀 (similar comments apply for safety technologies).

    • You can imagine a scenario similar to the one above, where we start off with 𝜀 >> 𝛽, but then society shifts consumption production to focus on goods based on safer technologies. In that way, per capita consumption is maintained at an acceptable level, but existential risk is lower for a given level of total consumption, such that 𝜀 is effectively lowered, and we can consider ourselves to be in a 𝛽 > 𝜀 situation.

Conclusions that may depend on contentious model assumptions

  1. If we're in a world where there's an existential risk Kuznets curve:

    • A period of faster economic growth reduces overall existential risk in the long run (even though, if it happens on the upward part of the Kuznets curve, it increases risk in the short term). Conversely, slower economic growth increases overall existential risk.

    • On the other hand, a temporary boom followed by a bust, where the economy reverts to its normal path after a period above trend, increases existential risk (a temporary bust followed by a boom decreases it).

  2. More generally, making people better off makes them care about preserving their future more relative to increasing consumption today, and so increases the amount society is willing to spend on existential risk reduction.

  3. Reducing the rate at which people discount their future utility is probably an easier way to reduce existential risk than increasing the growth rate (depending on what you think the likely range for 𝛾 is).

  4. Moving from a "rule of thumb" allocation, where spending on existential risk reduction stays constant over time, to an optimal allocation (whether future value is discounted or not) can create the possibility of averting existential catastrophe, where previously existential catastrophe was inevitable.

In general, it's not clear to us whether these conclusions would change under reasonable modifications to the hazard rate equation, or to other parts of the model.

That being said, given what we know at the moment, we think that these conclusions are perhaps slightly more robust than the ones listed in the previous section, but still not very robust. As with the previous conclusions, we think that the work in the paper should move you a bit, but not much, towards believing them, assuming that you were previously ambivalent towards them.

Conclusions that we believe are robust

  1. Making people better off makes them care about preserving their future more relative to increasing consumption today, and so increases the amount society would optimally spend on existential risk reduction. (This increased inclination, assuming optimality, for society to spend money to preserve its own existence can be thought of as a civilisational analogue of the smaller-scale increased willingness of individuals to spend money to extend their lives.)

  2. If you think that i) economic growth is more or less inevitable, ii) the sign of the impact of economic growth on existential risk won't change over time, and iii) you're not sure whether the sign is positive or negative, then you might as well speed up economic growth. In that world, most of the value comes from worlds where economic growth reduces existential risk, because in worlds where economic growth increases existential risk, we're doomed anyway.

    • This is also generally true for inevitable trends with an unknown, but constant, impact on risk

    • (Phil has made this point on his blog; note, though, that it's not at all clear that the sign of the impact of economic growth on existential risk won't change over time)

  3. It's not the case that moving from a "rule of thumb" allocation, where spending on existential risk reduction stays constant over time, to an optimal allocation that varies over time (whether people discount future value or not) never makes much difference to existential risk.

  4. You don't have to believe a suspiciously long or detailed list of ad hoc claims about past and future technological developments in order to conclude that world history includes a 'time of perils'; it appears in the simple world described in the paper with a moderate 𝜀 > 𝛽 scale effect.

We consider the final point especially important because, in our view, the plausibility of a 'time of perils' is extremely action-relevant from a longtermist perspective. We can think of this time of perils as an existential risk version of the environmental Kuznets curve, where environmental degradation rises at first, but eventually falls as society becomes richer and more willing to spend on measures that improve environmental quality.

Note also that conclusion 3 contrasts with some previous work on catastrophe mitigation (Martin and Pindyck (2015, 2019) and Aurland-Bredesen (2019)), which concludes that you don't want to change the fraction of total output spent on safety much as you get richer. The difference comes from the fact that this previous work considers fractional losses to consumption, rather than existential catastrophe.

Objections to the model

Common objections that we believe are not strong

Objection: Why is it sensible to assume risk depends on the level of production in the consumption and safety sectors, rather than the rate of increase of production in those sectors?
Response: Consider the implications of spending 10x more on electricity production, of which 20% is power plants burning more fossil fuels and 80% is some extremely expensive (with our current tech) carbon capture that makes all the new production zero-net-emissions. Or spending 10x more on AI development than we currently do, of which only 20% goes to adding new capabilities and 80% goes to running extremely computationally expensive (with our current tech) proof-checking procedures that verify that the new code does exactly what we expect. It seems like that wouldn't increase risk, even though the rate of increase of spending on consumption production is high. The intuition that speed is risky, if you pick it apart, is at least largely driven by the thought that risks come from increasing production in the consumption sector before you have had the time to research the corresponding safety measures and then maintain the relevant safety infrastructure.

Objection: There is no capital in the model, but in reality capital is an important determinant of production, as well as labour.
Response: We think this is unlikely to change the important results of the model (although it would be interesting to include capital in the model, in order to model philanthropic investment).

Objection: In the model, it is assumed that people live forever and that they don't have any descendants that they care about. This is clearly unrealistic.
Response: While there is a class of economic models that treat generations more realistically (called overlapping generations models), we don't think that using this kind of approach would change the important results of the model.

Objection: The model says that, for parameter values that are likely to be realistic (specifically, if 𝜀 > 𝛽, or 𝛾 is large), society will eventually allocate almost everyone to the safety sector. But it seems implausible that in the long run almost everyone will be working on safety.
Response: We're not sure that that scenario is so implausible. You could imagine a future where advanced technology led to the production of really great consumer goods, but also to the possibility of very dangerous goods, such that almost everyone had to be employed, say, monitoring the use of the dangerous goods (or doing research into how to monitor them better).

Objection: Given differential technological progress, it might matter a great deal whether a potentially risky technology is developed sooner or later. If later, the probability increases that we will have found the corresponding risk-mitigating technologies in the meantime. But it seems that, according to the model, speeding up growth always decreases long-run risk (at least, given our uncertainty in the model parameters).
Response: Slowing growth within the model corresponds to reducing the population growth rate. This leads to a reduction in both the rate of consumption technology discovery and the rate of safety technology discovery (ignoring changes to employee allocation). So, within the model, to reduce growth is not to cause differential technological progress. Rather, within the model, promoting differential technological progress means allocating a greater share of scientists to safety technology.

Moderately strong / possibly strong objections

Objection: The model is an approximation of how the world is now and how we think it might be in the future. But how sure are we that things won't have qualitatively changed in, say, 500 years' time?
Response: Sure, we do need to assume that the world won't have changed so radically in 500 years' time that the model is no longer applicable. If you think society will radically change in some way, you'd have to think carefully about whether the assumptions the model makes would still apply. One thing in the model's favour, though, is that it isn't very closely fitted to the way society is currently organised, so you might expect it to be more resilient to radical changes to society than it would otherwise be.

Objection: Why should existential risk depend on total consumption rather than per capita consumption? Surely what matters is how many resources each individual has, rather than the total amount of stuff produced?
Response: If you think that the important thing for existential risk is how much each individual produces rather than how much everyone produces collectively, then you shouldn't find the model very convincing. On the surface, though, it seems plausible to us that total consumption would be more important. In any case, note that Phil has created an alternative model that is relevant here. Phil's model assumes a fixed population (and exogenous productivity growth), which means that per capita consumption only differs from total consumption by a constant factor, so it doesn't really matter which one existential risk depends on. That model gives similar conclusions to Aschenbrenner's, except that existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner's paper are checked). See the Appendix for more discussion of the role of population in the model.

Objection: I think the level of technological development should be important for existential risk, but the model doesn't account for this.
Response: The choice to use the level of consumption production rather than the level of consumption technology as an input to the hazard rate equates, roughly speaking, to assuming that existential risk comes from things being produced, rather than from research being done (e.g. due to lab accidents) or from technology being available. It would be interesting to explore a model where the hazard rate depends explicitly on the level of technological development.

Objection: In the model, population grows forever at a constant rate. But in reality we expect population growth to decline rapidly over the next century. This being the case, how can the model tell us anything about reality?
Objection: In the model, economic growth is ultimately driven by population growth. Is there strong evidence for this being true in the real world?
Response: Phil has created an alternative model that doesn't assume exponential population growth, and found that this does lead to different results in some cases. Specifically, the model gives similar conclusions to Aschenbrenner's, except that existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner's paper are checked). See the Appendix for a more detailed discussion.

Objection: In the model, society allocates workers between the four possible occupations (consumption worker, consumption scientist, safety worker, safety scientist) according to whatever maximises utility. Isn't this vastly optimistic? It's hard to imagine that we are currently at an optimal allocation, even for people with a non-zero discount rate.
Response: We are probably not particularly close to the optimal allocation at the moment. If you believe the model assumptions other than Optimal Allocation, you can consider that the model gives us what happens in a "best case" for coordination on existential risk mitigation. It would be interesting to look at the non-optimal allocation case. This might give you information about how valuable moving towards the optimal allocation would be.

Objection: In the model, we assume that everyone knows the true 𝜀 and 𝛽 values, so that society can allocate resources accordingly. But in reality it seems like it would be hard to be very confident about what these values are.
Response: It might be interesting to model this uncertainty explicitly. If you think that people have roughly the right central estimates, but with some uncertainty, our guess is that this wouldn't change the big-picture results very much, though it might change the detailed results in interesting ways. If you think that people are biased in one direction, it seems clear that this can qualitatively change the results (for example, if people think they're in a regime where they should be transferring labour from safety to consumption, when in fact the opposite is true). In that case, modelling this explicitly could shed light on the value of educating people about existential risk.

Objection: It doesn't seem sensible to have the same 𝛼 parameter for the goods production functions in the safety and consumption sectors.
Response: Our guess is that allowing the two sectors to have different parameters rather than a single 𝛼 parameter doesn't change the important results of the model. However, it might be interesting (and fairly easy) to try this out.

Strongest objections

Objection: 𝜀 and 𝛽 aren't really fixed / the hazard rate is determined in a far more complex way than in the model. In the end, the likely path of the hazard rate will come down to details about which technologies get developed in which order, how dangerous they turn out to be, and things like how stable and robust important institutions are. The model can't capture these things.
Objection: In the 𝜀 >> 𝛽 case, we face a choice between extinction on the one hand, and devoting so much of our resources to safety that we live lives worse than death on the other. But surely with perfect coordination (which we assume in the model), there's some chance we could organise society in such a way that we have lots of "safe" technology / goods and none of the "dangerous" ones, such that we live lives worth living, and in safety. A cartoon example might be a world where advanced technologies are all banned, and we move all resources currently spent on luxury goods to carbon capture.
Objection: According to the model, the scale effect, i.e. 𝜀 vs 𝛽, is centrally important for determining whether we will avert existential catastrophe. But isn't this just an artefact of the model, which follows from the assumed relationship between existential risk, total consumption, and total safety production?
Response: We think these all point towards important things, which we discussed in the earlier section called "Conclusions that depend on contentious model assumptions" (note that we don't necessarily agree with every assertion made in these objections).

Objection: Eliezer Yudkowsky's arguments that work towards building an unsafe AGI parallelises better than work towards building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both imply that slower growth is better from an AI x-risk viewpoint.
Response: This can be thought of as a concrete example of the general point (also raised in the three previous objections) that the hazard rate equation in the model represents quite a strong assumption about the relationship between technology and existential risk. If you think that the argument described in the objection is definitely right, the model probably can't tell you very much.

Why we think work in this area is useful

While we think that the work in the paper represents very weak evidence for the conclusions that follow only from the detailed assumptions of the model, we want to make the argument that work of the kind done in the paper, on theoretical models connecting economic growth to existential risk, is useful.

We claim that weak evidence that shifts your beliefs a bit is still valuable, especially when it addresses questions as important as the ones we're considering here. Similarly, there is value in conclusions that are weaker (in the sense that assuming they're true changes your beliefs about the world less than "stronger" conclusions would) but more robust, like the ones we've drawn in the "Conclusions that we believe are robust" section of this post.

Currently, most work on understanding existential risk makes granular and specific arguments about particular technologies or political situations. However, more general and abstract models, like Aschenbrenner's, give us a different flavour of insight about existential risk. Whichever is your preferred approach, it might be that a mixed strategy, where both approaches are used, can give more robust conclusions (particularly where the approaches converge).

Exploring both approaches in parallel also has high information value, given the high stakes and our uncertainty about which approach might be more fruitful.

Generally, the stronger your beliefs about the link between economic growth and existential risk, the less useful work like this will be. For example, if you're certain that potentially dangerous Artificial General Intelligence will arrive in the next 20 years, and that economic growth will speed up its arrival but not speed up safety work, you probably won't learn as much from models of economic growth and existential risk as someone who is more agnostic on this point. On the other hand, if you're more uncertain about when transformative technologies are likely to be developed, and what the effect of economic growth on these technologies and their regulation might be, theoretical models connecting economic growth to existential risk will shift your beliefs more.

Since there is a wide range of views out there, both within the Effective Altruism community and outside it, we think that this work is likely to be useful for many people.

A separate consideration is one of signalling / marketing: it might be the case that creating an academic literature on the impact of economic growth on existential risk will mean that academics, philanthropists and, eventually, policymakers and the general public take existential risk more seriously.

Acknowledgements

Thanks to Toby Newberry and Hamish Hobbs (and probably others) for feedback on earlier drafts of this post.

References

Martin, Ian W. R. and Robert S. Pindyck, "Averting Catastrophes: The Strange Economics of Scylla and Charybdis," American Economic Review, October 2015, 105 (10), 2947-2985.

Martin, Ian W. R. and Robert S. Pindyck, "Welfare Costs of Catastrophes: Lost Consumption and Lost Lives," Working Paper 26068, National Bureau of Economic Research, July 2019.

Aurland-Bredesen, Kine Josefine, "The Optimal Economic Management of Catastrophic Risk," PhD dissertation, 2019.

Appendix: the role of population growth

One pair of features of the model that you might find unintuitive is that

  1. population growth is necessary for sustained economic growth (without population growth, economic growth eventually stagnates), and

  2. population grows at a constant rate forever.

Feature (1) is inherited from the economic growth model that the Existential Risk and Growth model is based on, known as the Jones model; whether (1) is true in the real world doesn't seem to be a settled issue. Meanwhile, feature (2) pretty clearly contradicts population growth forecasts.

At first glance, feature (2) in particular might seem to count against the model quite strongly.

However, the combined effect of these features on economic growth is just to make the economy grow consistently over time at some fairly steady rate (unless there's a dramatic change in the number of people employed in the consumption sector). The exact rate of economic growth doesn't seem to be very important for the conclusions we draw from the model, so the effect that these features have on economic growth appears to be pretty unimportant, as long as you think that the economy will continue to grow at a roughly constant rate.

Still, we might worry that exponentially increasing population drives a lot of the results through a means other than its effect on the economic growth rate. The argument is this: the hazard rate depends on total consumption production, while utility depends on per capita consumption production. As the population increases, either total consumption has to increase in proportion, or per capita consumption production has to decrease. But there's a minimum acceptable level of per capita consumption production (in order to keep utility positive), so eventually we'll reach a point where total consumption has to increase in proportion to the population. So it's not surprising that catastrophe is inevitable in the 𝜀 >> 𝛽 case - this only comes about because we artificially forced population to keep increasing.

But it turns out that catastrophe would be inevitable in the 𝜀 >> 𝛽 case even without population growth, because without population growth you can't shift people into the safety sector fast enough to reduce the hazard rate quickly enough to prevent existential catastrophe from eventually happening.

However, while the simple argument given above is wrong, Phil has worked through the consequences of a model without exponentially increasing population and with exogenous economic growth (that is, economic growth not driven by population growth), and found that this does change some of the results. In particular, existential catastrophe is no longer inevitable for 𝜀 >> 𝛽, provided that 𝛾 is not too small (but NB this result is provisional while potential errors in Aschenbrenner's paper are checked).