Addressing Global Poverty as a Strategy to Improve the Long-Term Future

Introduction

There is a general tendency for people entering the EA community to be initially more interested in high-impact global health interventions like those endorsed by GiveWell, and to be attracted to the movement through organizations like Giving What We Can and The Life You Can Save, which discuss issues primarily related to global poverty reduction. However, the EA Survey shows a trend of becoming more interested in work specifically aimed at reducing existential risk with time inside the movement. I recognize that some individuals move in the opposite direction, but the survey reveals that the net flow of prioritization originates in global poverty and drifts toward “long-term future/existential (or catastrophic) risk” as people become more deeply involved in the movement.

I credit this shift primarily to the compelling philosophical arguments for long-termism and the importance of the distant future in expected value calculations. I take issue, however, with the inference that reducing global poverty is not a cause which can significantly benefit the long-term future. Perhaps that inference is fair given the predominant focus on short-term measurable gains in the RCT-driven style of global health work engaged in by early EA-aligned orgs (cash transfers, bed nets, deworming, etc.). But with a more long-termist approach to global poverty reduction, this apparent shortcoming could perhaps be remedied.

Another possibility that could explain the shift is that people who primarily value global poverty reduction are disheartened by the amount of focus within the movement on issues they perceive to be disconnected from their initial purposes, and become disengaged from the movement; this is my concern. If this is occurring, it would select for EAs who embrace causes traditionally categorized as long-term-future focused. If so, it seems a great loss to the movement, and it seems important for recruiting and organization building to bring more meaningful discussions regarding poverty reduction to forums like this. Other ideas surrounding possible causes of this apparent discrepancy have been discussed here.

My general sense is that this creates an artificial and false dichotomy between these cause areas, and that in discussions centered on the long-term future vs. saving lives now, people may be oversimplifying the issue by failing to acknowledge the wide range of activities that fall under “global development.” In other words, this work is often implicitly reduced to the short-termist work done by EA orgs like GiveWell’s top charities, rather than more holistic, growth-oriented development efforts at a larger scale. Some intelligent people within the movement seem to prefer justifying global development on its own merits using different kinds of philosophical reasoning, leaving long-termism to those working on existential risk reduction. Some even seem to suggest that global development may be harmful to the long-term future (some reasons why are discussed below). But if certain types of global development were truly harmful to the long-term future, this should at least be an important area of research and advocacy for long-termists. Similarly, if strategic global development can be a useful tool for reducing existential risk, then it makes sense to foster interdisciplinary collaboration at this important intersection of expertise.

Here I will present the following three arguments (in decreasing order of confidence):

1. Reducing global poverty is unlikely to harm the long-term future.

2. Addressing global poverty with a long-termist perspective can significantly benefit the long-term future.

3. Reducing global poverty with a long-termist perspective may reduce existential risk even better than other efforts intended specifically to reduce existential risk.

Reducing global poverty is unlikely to harm the long-term future

While this statement is widely accepted by the general public, it has been questioned by many intelligent people within the EA movement. The primary reason I have seen cited is that greater existential threats come with increasing human growth and development. Here I address a few common speculative concerns and the reasons why they seem not to be robust critiques to those familiar with the downstream effects of development efforts.

Problem 1: Economic growth may accelerate parallelizable work like AGI preferentially over highly serial work like friendly AI. Also, general technological advancement is likely to unintentionally introduce new types of threats. (Two related issues, grouped together because the response is essentially the same.)

Response to Problem 1: First, it’s far from proven that economic growth (particularly in impoverished nations) would increase the risk posed by AI. A great deal has been written on the effects of economic growth in general on existential risk. One paper concludes, “We may be economically advanced enough to be able to destroy ourselves, but not economically advanced enough that we care about this existential risk and spend on safety.” The paper examined different models of growth vs. risk and in general promotes faster economic growth as a means of decreasing risk over the long term. I would go one step further in suggesting that the optimal way to advance the objectives of increasing caring and stalling the indiscriminate technological growth that threatens humanity is to divert resources from more technologically advanced countries to aid the development of technology-poor nations (or perhaps better yet, to end economic policies which perpetuate wealth extraction).

If there is a negative effect on this particular risk (AI), it would most likely come from economic development in advanced nations that are on the cutting (or potentially world-ending) edge of developing these new technologies. The redistribution of resources from a country with relative abundance to one of relative poverty would be more likely to have an overall slowing effect on the capacity of wealthy nations to advance such technologies, because they would have fewer surplus resources within their economy to do so. While a global net increase in economic capacity is expected from improved utilization of human resources locked in poverty, the marginal effect of this redistribution would be to increase economic and technological capacity in the developing nations where the advancement of groundbreaking technology is relatively less likely to occur in the short term. This would, in theory, allow more time for developed nations to implement strategies for differential technology development. For more reading on this key concept of differential development, Michael Aird has compiled a few relevant resources here. (Also, a quick shout-out to him for giving quality feedback on this piece!)

I think it is also important to recognize that some strategies for poverty reduction may be considered “win-win,” which makes it harder to recognize their benefits in terms of creating space for differential development. My understanding is that most of the true win-win scenarios have their roots in increased global connectivity and international collaboration, which also seems like a win for existential risk reduction. I think we should carefully consider these scenarios in more detail, and I suspect that different strategies would be found to have different effects on the security of the long-term future.

Problem 2: Addressing global poverty could exacerbate global climate change, as reducing premature mortality leads to population growth and more people reach a standard of living where consumption of fossil fuels increases.

Response to Problem 2: In the short term this is most likely true. The catch, however, is that historically, the most reliable methods for stabilizing population growth have involved promoting economic growth: the wealthier a country becomes, the smaller family sizes become. This relationship between development and population growth actually argues in favor of economic growth as a means of controlling climate change in the long term.

That is not to say that the effects of climate change should be ignored. Addressing climate change is also of great importance to the work of reducing poverty, as highlighted in high-level discussions on this topic. These topics run together in ways that are nearly inseparable, and efforts to address one effectively should include the other, as exemplified in plans like the “Global Green New Deal” and the work of the UN Development Programme. These are real-life examples of how development work with a long-termist perspective can contribute to reducing existential risk in important ways. It seems clear that a world with less extreme poverty and economic inequality would be better optimized not only to stabilize population growth and withstand the risks that come with climate change, but also for the global coordination and peace which are essential for addressing other x-risks (discussed in more detail below).

Problem 3: According to the BIP framework, it can be harmful to the long-term future to increase the intelligence of actors who are insufficiently benevolent, or to empower people who are insufficiently benevolent and/or intelligent. Efforts to indiscriminately empower the world’s poor may unintentionally lead to future harm based on these principles. (This is a hypothetical point, not something I have seen argued by those who created the framework or by other EAs.)

Response to Problem 3: For this view to reflect real harm, it requires the assumption that people generally do not possess sufficient benevolence and intelligence in the ways specified by the framework. However, it is also important to note that improving people’s welfare may be an important way to increase their propensity to value benevolence and to become more educated. At the very least, the strong association seen between economic growth and education is somewhat bidirectional. The evidence linking benevolence and economic opportunity is much weaker, but this may be due to a comparative lack of research in this area. Even supposing a person holds a sufficiently pessimistic view of the potential of others to have a net positive impact on the safety of the long-term future, there should still be an understanding that selectively empowering good and intelligent people can have an outsized impact on reducing existential risk. If EA involvement in global development can create a major shift in development efforts toward reliance on this framework rather than indiscriminate efforts, then at a minimum the result would be causing less harm (an overall net benefit). This idea is also discussed more below.

Addressing global poverty thoughtfully can significantly benefit the long-term future

From “saving lives” to holistic improvements to life more generally

The focus on “saving lives” and impact-focused, evidence-based charity has been a persuasive gateway for many people (including myself) into the larger realm of EA thought. As such, it is a powerful tool for movement building. However, I believe that as long-termist philosophy gains traction, it should naturally lead people to consider a broader range of development models and compare their expected long-term benefits. That assessment of benefit would include an attempt to estimate their impacts on existential risk, which is something EA orgs may be uniquely positioned to do. Ultimately, once people inclined to work on global poverty begin to embrace the long-termist ideals of the EA movement, the goal of “saving lives” should probably shift toward the goal of unlocking human potential strategically, in ways that carefully consider the downstream impact of those actors in securing the long-term future.

The ultimate flow-through effects of any given action are likely the principal drivers of total impact, but they are extremely hard to evaluate accurately. There are several approaches to dealing with flow-through effects, and I believe the only method that is clearly wrong is to ignore them. We can sometimes at least identify clear general trends and recognize patterns in downstream effects which help inform right action. The idea that “saving lives” through specific, disease-oriented approaches alone may not optimize for downstream effects is a legitimate critique of the “randomista development” work done by popular EA organizations like GiveWell. This is, however, just one piece (albeit the most visible) of the global development cause area. An excellent discussion on shifting away from “randomista development” in favor of approaches that target less measurable but likely much more effective development interventions focused on holistic growth can be found here. Also, groups like Open Philanthropy are using a more “hits-based” approach, as discussed here. Another great forum piece on long-termist global development work can be found here. I recommend that people already engaged in addressing global poverty within EA consider a more long-termist approach which attempts to engage with the root causes of poverty rather than just short- to medium-term impact.

Unlocking Human Potential

GiveWell co-founder Holden Karnofsky summarized the basic principle of unlocking human potential in an interview touching on this topic when he said, “You can make humanity smarter by making it wealthier and happier.” I think it is important to conceive of the opposite of poverty not just as having material goods, but as being relatively self-sufficient and capable of using time, talent, and resources to help others and better society. It is this privileged position which allows most people reading this to contemplate issues like the long-term future. If we recognize the role our own relative lack of poverty played in arriving at an altruistic mindset, it is reasonable to suppose that others, if provided with similar circumstances, might be more willing and able to engage in this type of important work as well. For more of Holden’s thoughts, check out the rest of the interview linked above or this discussion on flow-through effects.

Clearly, reaching a certain material standard of living alone does not guarantee that people will engage in more altruistic behaviors. One useful framework for approaching this could be the BIP framework (benevolence, intelligence, power). This seems like a tool which could greatly inform individual donors in comparing the marginal benefit of different interventions on the long-term future. The basic idea is:

“… that it’s likely good to:

  1. Increase actors’ benevolence.

  2. Increase the intelligence of actors who are sufficiently benevolent

  3. Increase the power of actors who are sufficiently benevolent and intelligent”

Applied to the global development context, this has significant implications. I suspect that there are a significant number of good and intelligent people living in poverty who could have an outsized impact on the future. This may be particularly true if, by selectively targeting benevolent actors for educational opportunities and following up the intellectual advancement of those individuals with empowering opportunities for employment in high-yield fields, we can inspire others to follow in their footsteps as well as empower the right individuals to contribute to their communities in meaningful ways. This seems true in countries across the income distribution, but these efforts may be particularly cost-effective in the world’s least developed nations. These applications may be of interest to long-termists working in global development as we seek to promote the growth of leaders in the developing world who will be best aligned with the kind of sustainable growth which will best secure the long-term future.

This type of framework may give renewed power to causes like improving education and teaching good values, which are typically ignored in development models focused on short-term, measurable impact. Contrary to the expressions in the last link, there may be some evidence-based ways to improve educational quality cost-effectively. I suggest that a sort of differential development that prioritizes growing human capacity to do good could be an important area for future research within the EA movement. While this framework may be difficult to employ at a policy level, it is reasonable to suppose that even modest shifts in the general direction of a long-termist strategy for development might have large impacts on the long-term future.

Global Catastrophic Risks (GCRs) and Poverty

In discussing GCRs, it is important to note that not all scenarios lead invariably to extinction or otherwise critical outcomes for the long-term future of humanity. One important factor is that people living in poverty are disproportionately more at risk from GCRs, and a threshold number of people with resources sufficient to survive a global catastrophe may be what keeps it from becoming a true crisis for the future of humanity. This is the case for nearly every type of non-extinction-level catastrophe (with the possible exception of a pandemic): those with power and resources are much more likely to survive such an event. To me, this clearly suggests that decreasing poverty and increasing human welfare can promote resilience against such future risks. A more robust world through poverty reduction may well be the difference between a “crunch, shriek, or whimper” scenario and effective recovery from a difficult situation.

Investing Wisely in Humanity’s Future

One way to view this approach is as an investment in human potential. In discussions regarding giving now vs. giving later, it is often viewed as wise to invest and give larger amounts at a later time to have greater impact. If you believe that the most undervalued resource on the planet is the potential of good and intelligent people limited by poverty (at least more undervalued than the companies in your stock portfolio), then it makes sense to invest in the ability of those people to have an impact on others which continues rippling outward, accruing a sort of compounding interest. If you have already concluded that giving now is the preferable option, then you may wish to prioritize this form of giving in ways that optimize for the flow-through effects with the highest expected value for the long-term future.
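The compounding logic above can be sketched numerically. The following toy model is my own illustration, not an empirical estimate: both rates are made-up parameters, and the comparison only shows how the conclusion turns on which compounding rate one believes is higher.

```python
# Toy comparison of giving later (invest at a market rate, donate more
# later) vs. giving now (flow-through effects assumed to compound at a
# hypothetical "social return" rate). All rates are illustrative
# assumptions, not empirical estimates.

def future_impact(amount, rate, years):
    """Value of `amount` after compounding at `rate` for `years` years."""
    return amount * (1 + rate) ** years

donation = 10_000
years = 30
market_rate = 0.05   # assumed real return on invested donations
social_rate = 0.07   # assumed compounding rate of flow-through effects

give_later = future_impact(donation, market_rate, years)
give_now = future_impact(donation, social_rate, years)

print(f"give later: {give_later:,.0f}")
print(f"give now:   {give_now:,.0f}")
```

The point is not the specific numbers: whichever rate is assumed to be higher wins over a long enough horizon, which is exactly what the giving-now vs. giving-later debate turns on.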

For those who favor the role of the patient philanthropist, there is still a clear role for long-termist considerations in global development. In that case, the discussion of which specific interventions best optimize the long-term future is postponed, supposing that improved knowledge and wealth will be more beneficial at a future date. One paper gave an interesting critique of the idea of compounding interest from flow-through effects and favors the patient approach (see 5.1.2 for the case for giving now, 5.1.3 for the critique).

Ultimately, the benefit to the long-term future from applying this philosophy to global development seems robust across several different moral decision-making theories. Weighing different development strategies by their most likely impact on things like existential risk seems like an important consideration. Groups like Open Philanthropy have done some work looking at global catastrophic events, but this could be more fully integrated with thinking about global development in relation to these risks. In particular, it would be helpful to guide individual donors in making contributions in ways that better optimize for long-term benefit, much as GiveWell does for short-term, measurable results. For an individual donor with limited time and information, these determinations can seem even more daunting, but they remain essential to long-termists inclined to donate toward poverty reduction interventions.

Shifting Global Priorities

Currently, the UN’s Sustainable Development Goals do not include x-risks other than climate change. The number one priority is eliminating global poverty, because this is a unifying moral imperative with broad support across cultures and nation-states. While coordinated global efforts on this front may not be ideal, this is the system we have for addressing issues of global priority which require international collaboration/coordination.

Barring a dramatic global shift in values, I suspect that the appetite for international collaboration prioritizing other x-risks at that level will not exist while extreme forms of poverty persist at any significant magnitude. A person who believes that x-risk should be a global priority, and that a certain threshold of international coordination on these issues may be essential for the survival of humanity, may find it at least plausible that addressing existing global priorities is a prerequisite to gaining relative popular interest in prioritizing existential threats to humanity.

As suggested by Maslow’s hierarchy of needs, issues concerning physiologic needs like food, shelter, and water will naturally be prioritized by individuals and governments over threats to safety. The neglectedness of x-risk may not be justified, but it is certainly understandable why the persistence of these basic problems overshadows those more abstract concerns, and resolving some of these issues may be an effective way to shift global priorities.

Reducing global poverty thoughtfully may reduce existential risk EVEN BETTER THAN other efforts intended specifically for that purpose

Disclaimer: This is almost certainly destined to be the most controversial argument, so I’ll preface it by admitting that this is an idea I am not particularly confident in, but I do feel I have some thoughts which may add to a more thoughtful discussion. I am also at significant risk of bias on this topic. I am a physician with specialized training in public health and tropical medicine, and I am much more familiar with issues relating to global poverty than with issues surrounding existential risk, which clearly colors my perspective (this could all be some form of post hoc rationalization of my career choice and preconceived values). Given this bias, this convenient convergence may not be as strong as my intuition tells me.

Reducing existential risk seems to be a worthy endeavor, and the following comparison assumes that the arguments in favor of long-termism are robust. To compare global development work with more standard x-risk reduction strategies, I will use the popular EA framework of assessing relative Importance, Tractability, and Neglectedness (ITN), but will do so in reverse order to facilitate a more logical progression of ideas.

Neglectedness

While global development has traditionally been viewed as less neglected than other specific mechanisms for existential risk reduction, it may be useful to think instead of the neglectedness of the work of using global development for existential risk reduction. This represents a relatively small piece of current development efforts, and may be something EA is uniquely positioned to address, with experts in both x-risk and development sharing overlapping values.

Meanwhile, interest in things like AI safety, climate change, and bioterrorism seems to have been increasing in recent years. These are still apparently neglected in the grand scheme of things, but their relative neglectedness seems many times less today than it may have been even just 5-10 years ago. As interest in and research into other forms of existential risk increase, we have to remember that this lowers the marginal expected utility of addressing these issues through these increasingly popular, specific areas of focus.

Tractability

In this regard, most attempts to influence the long-term future (including global development) face large amounts of uncertainty. We have relatively little true, quantifiable knowledge about the probability of these kinds of events, their root causes, and the associated solutions. Of all the threats to humanity, the greatest risks might still be black swan scenarios, where we don’t yet know enough about the things we don’t know to see them coming. Maybe the greatest risk to humanity will be manipulation of dark matter or of spacetime using some form of warp engines (currently wildly speculative concerns), or something we have not yet developed the ability to imagine. Appropriate estimation of tractability for these kinds of events is greatly limited by the paucity of knowledge of all the risks in expected value calculations. Belief in the importance of any particular x-risk is, by nature, on weak epistemic footing and should remain open to additional information as it becomes available.

An appropriate response to this type of uncertainty might then be an effort to increase the resilience of all of these efforts by focusing more on the generalizable capacity of humanity to address issues in the most collaborative and insightful way possible. It is in this way that long-termists working on global development can improve the tractability of current threats, and even ones we haven’t considered yet. It is important to note that global development is not the only generalizable option; other efforts that are more resilient to the uncertainty of specific interventions might include things like electoral reform, improving institutional decision-making, and increasing international cooperation.

Importance

In the following graph I illustrate three counterfactual futures:

The orange line represents maintenance of the status quo, where global development and the capacity of humanity to effectively address global threats proceed at present rates.

The grey line represents the extreme option of diverting all resources currently being used for development toward a dedicated worldwide effort to reduce threats of human extinction. In the near term this would likely increase capacity greatly, but given the limited number of people living in circumstances where this kind of work would be feasible, and the limited political will to address these issues, the subsequent increase would come only from that privileged minority working outside of national strategic goals, as opposed to an expanding group of people supported by international prioritization.

The blue line represents the opposite extreme, where we pause work explicitly targeting extinction risk and focus on developing a more equitable world, in which the abundance created by technological advances supports humanity far more efficiently, before re-engaging the effort to reduce specific extinction risks. This would theoretically lead to a superior ability to solve the rest of the world’s most pressing problems, including reducing existential risks (and potentially to move on to issues like interstellar colonization or yet-unforeseen ways of ensuring a very long-term future for humanity).

The obvious drawback of the blue line is that there may be a slightly lower chance of it existing, as we may all go extinct before those lines cross over; the drawback of the grey line is that it becomes less likely to exist at some point in the future. To be clear, I don’t believe either extreme would be a wise course of action, and these examples are for illustrative/comparative purposes only. The idea is that there are probably some important trade-offs.

Expected value calculations could theoretically estimate the ideal balance of near-term vs. long-term x-risk reduction. That would be exceedingly complex and would include variables I do not have the expertise to properly evaluate (but I hope someone will make this a point of serious research in the future). This line of reasoning raises the possibility that focusing too much on reducing specific x-risks now can, ironically and counterintuitively, be a nearsighted effort which does not give enough weight to the importance of reducing risks that will come in the distant future. Generalizable capacity-building interventions like global development are geared more toward the latter in expectation.
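To make that trade-off concrete, here is a minimal toy model of my own (every hazard number in it is invented purely for illustration). It compares a strategy that cuts per-period extinction risk immediately by a fixed fraction against a capacity-building strategy whose risk reduction starts at zero but grows over time, by computing cumulative survival probability over a long horizon:

```python
# Toy model of near-term vs. long-term x-risk reduction.
# All hazard rates are invented for illustration only.

def survival(hazards):
    """Probability of surviving every period given per-period hazard rates."""
    p = 1.0
    for h in hazards:
        p *= (1 - h)
    return p

base_hazard = 0.002  # assumed baseline extinction risk per period
periods = 100

# Strategy A: direct work cuts risk by 20% immediately, constant thereafter.
direct = [base_hazard * 0.8 for _ in range(periods)]

# Strategy B: capacity building starts at baseline risk, but its effect
# grows each period until it eventually halves the hazard rate.
capacity = [base_hazard * max(0.5, 1.0 - 0.01 * t) for t in range(periods)]

print(f"direct:   {survival(direct):.4f}")
print(f"capacity: {survival(capacity):.4f}")
```

Under these invented parameters the slower-acting strategy ends up ahead over a long enough horizon, while with a shorter horizon the ranking reverses; estimating where that crossover actually lies is the kind of balance a serious expected value analysis would need to work out.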

This possibility has not gone unnoticed by people working on existential risk. Toby Ord wrote an excellent piece with FHI which cautioned that some existential risk effort was too specific and thereby “nearsighted.” The suggestions given there were to focus on “course setting, self-improvement and growth.” I realize that I am taking these comments out of the context for which they were intended, but I believe the general principles which apply to individuals may be applicable to humanity as well. The work of course setting, self-improvement, and growth on a global scale seems to me a fitting description of a long-termist’s approach to global development.

Conclusions

It is my belief that global development is unlikely to negatively impact the long-term future and, if done thoughtfully with a long-termist perspective, may have a significant positive impact on reducing existential risk. I also humbly submit the opinion that this approach may be an even more successful strategy for reducing x-risk than many of the risk-specific efforts to ensure that humanity has a long and happy future.

As concrete recommendations for improvement, I propose the following:

  • Shift away from short-termist poverty interventions toward objectives more aligned with the core EA value of long-termism.

  • Add resources for individual donors inclined to contribute to reducing global poverty in ways that also prioritize the long-term future.

  • Increase focus and research at the intersection of x-risk and global development.

I hope that, at the very least, this shows the interrelatedness of these two cause areas, and that it will be interesting enough to some readers to promote greater collaborative efforts between these two silos of thought. I look forward to your feedback and future discussion on this important topic.

(These are my own personal opinions and do not reflect the views of the US Public Health Service or Loma Linda University)