AGI safety and losing electricity/industry resilience cost-effectiveness

Crossposted on LessWrong: https://www.lesswrong.com/posts/qkvk22oc3YeEJpEfC/agi-safety-and-losing-electricity-industry-resilience-cost

Below is a paper about to be submitted. The focus is on interventions that could improve the long-term outcome given catastrophes that disrupt electricity/industry, such as solar storm, high-altitude electromagnetic pulse (HEMP), narrow AI computer virus, and extreme pandemic. Work on these interventions is even more neglected than interventions for feeding everyone if the sun is blocked. Cost-effectiveness is compared to a slightly modified AGI safety cost-effectiveness model posted earlier on the EA Forum. Two different cost-effectiveness estimates for losing industry interventions were developed: one by Denkenberger and a poll at EA Global San Francisco 2018, and the other by Anders Sandberg at the Future of Humanity Institute. There is great uncertainty in both AGI safety and interventions for losing industry. However, the models have ~99% confidence that funding interventions for losing industry now is more cost effective than additional funding for AGI safety beyond the expected ~$3 billion. This does not take into account model or theory uncertainty, so the confidence would likely decrease. However, making AGI safety more cost effective required changing four variables in the Sandberg model to the 5th percentile on the pessimistic end simultaneously; for the other model, it required changing seven variables. Therefore, the conclusion that a significant amount of money should be invested in losing industry interventions now is quite robust. There is closer to 50%-88% confidence that spending the ~$40 million on interventions for losing industry is more cost effective than AGI safety. Overall, AGI safety is more important and more total money should be spent on it. The modeling concludes that additional funding would be justified for both causes even considering only the present generation.


Long Term Cost-Effectiveness of Interventions for Loss of Electricity/Industry Compared to Artificial General Intelligence Safety

David Denkenberger 1,2, Anders Sandberg 3, Ross Tieman *1, and Joshua M. Pearce 4,5

1. Alliance to Feed the Earth in Disasters (ALLFED), Fairbanks, AK 99775, USA
2. University of Alaska Fairbanks, Fairbanks, AK 99775, USA
3. Future of Humanity Institute, University of Oxford, Oxford, UK
4. Department of Materials Science and Engineering and Department of Electrical and Computer Engineering, Michigan Technological University, Houghton, MI 49931, USA
5. Department of Electronics and Nanoengineering, School of Electrical Engineering, Aalto University, FI-00076 Espoo, Finland
* corresponding author


Abstract

Extreme solar storms, high-altitude electromagnetic pulses, and coordinated cyber attacks could disrupt regional/global electricity. Since electricity basically drives industry, industrial civilization could collapse without it. This could cause anthropological civilization (cities) to collapse, from which humanity might not recover, with long-term consequences. Previous work analyzed technical solutions to save nearly everyone despite global industrial loss, including the transition to animals powering farming and transportation. The present work estimates cost-effectiveness for the long-term future with a Monte Carlo (probabilistic) model. Model 1, partly based on a poll of Effective Altruism conference participants, finds a confidence that industrial loss preparation is more cost effective than artificial general intelligence safety of ~88% and ~99+% for the 30 millionth dollar spent on industrial loss interventions and for the margin now, respectively. Model 2, populated by one of the authors, produces ~50% and ~99% confidence, respectively. These confidences are likely to be reduced by model and theory uncertainty, but the conclusion that industrial loss interventions are more cost effective was robust to simultaneously changing the most important 4-7 variables to their pessimistic ends. Both cause areas save expected lives cheaply in the present generation, and funding for preparation for industrial loss is particularly urgent.

Disclaimer/Acknowledgements: Funding was received from the Centre for Effective Altruism. Anders Sandberg received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 669751). The Oxford Prioritisation Project developed the artificial general intelligence safety cost-effectiveness submodel. Owen Cotton-Barratt, Daniel Dewey, Sindy Li, Ozzie Gooen, Tim Fist, Aron Mill, Kyle Alvarado, Ratheka Stormbjorne, and Finan Adamson contributed helpful discussions. This is not the official position of the Centre for Effective Altruism, the Future of Humanity Institute, nor the Alliance to Feed the Earth in Disasters (ALLFED).

1. Introduction

The integrated nature of the electric grid, which is based on centralized generation, makes the entire system vulnerable to disruption.(1) There are a number of anthropogenic and natural catastrophes that could result in regional-scale electrical grid failure, which would be expected to halt the majority of industries and machines in that area. A high-altitude electromagnetic pulse (HEMP) caused by a nuclear weapon could disable electricity over part of a continent (Bernstein, Bienstock, Hay, Uzunoglu, & Zussman, 2012; Foster et al., 2004; Kelly-Detwiler, 2014; Oak Ridge National Laboratory, 2010). This could destroy the majority of electrical grid infrastructure, and as fossil fuel extraction and industry are reliant on electricity (Foster, Jr et al., 2008), industry would be disabled. Similarly, solar storms have destroyed electrical transformers connected to long transmission lines in the past (Space Studies Board, 2008). The Carrington event in 1859 damaged telegraph lines, the only electrical infrastructure in existence at the time. It also caused aurora borealis visible in Cuba and Jamaica (Klein, 2012). A similar event today could potentially disable electrical systems at high latitudes, which could represent 10% of electricity/industry globally. Though solar storms may last less than the 12 hours required to expose the entire Earth via direct line of sight, the Earth's magnetic field lines redirect the storm to affect the opposite side of the Earth (Space Studies Board, 2008).

Lastly, both physical (M. Amin, 2002, 2005; Kinney, Crucitti, Albert, & Latora, 2005; Motter & Lai, 2002; Salmeron, Wood, & Baldick, 2004) and cyber attacks (Aitel, 2013; Hébert, 2013; Nai Fovino, Guidi, Masera, & Stefanini, 2011; Onyeji, Bazilian, & Bronk, 2014; Sridhar, Hahn, & Govindarasu, 2012; Umbach, 2013; Watts, 2003) could compromise electric grids. Physical attacks include traditional acts of terrorism such as bombing or sabotage (Watts, 2003) in addition to EMP attacks. Significant actors could scale up physical attacks, for example by using drones. A scenario could include terrorist groups hindering individual power plants (Tzezana, 2016), while a large adversary could undertake a similar operation physically on all plants and electrical grids in a region.

Unfortunately, the traditional power grid infrastructure is simply incapable of withstanding intentional physical attacks (National Research Council, 2012). Damage to the electric grid resulting from physical attack could be long lasting, as most traditional power plants operate with large transformers that are difficult to move and source. Custom rebuilt transformers require replacement times ranging from months up to years (National Research Council, 2012). For example, a relatively mild 2013 sniper attack on California's Pacific Gas and Electric (PG&E) substation, which injured no one directly, was able to disable 17 transformers supplying power to Silicon Valley. Repairs and improvements cost PG&E roughly $100 million and took about a month (Avalos, 2014; Pagliery, 2015). A coordinated attack with relatively simple technology (e.g. guns) could cause a regional electricity disruption.

However, a high-tech attack could be even more widespread. The Pentagon reports spending roughly $100 million to repair cyber-related damage to the electric grid in 2009 (Gorman, 2009). There is also evidence that a computer virus caused an electrical outage in Ukraine (Goodin, 2016). Unlike simple physical attacks, cyber attackers are capable of penetrating critical electric infrastructure from remote regions of the world, needing only communication pathways (e.g. the Internet or infected memory sticks) to install malware into the control systems of the electric power grid. For example, Stuxnet was a computer worm that destroyed Iranian centrifuges (Kushner, 2013) to disable their nuclear industry. Many efforts are underway to harden the grid against such attacks (Gent & Costantini, 2003; Hébert, 2013). The U.S. Department of Homeland Security responded to ~200 cyber incidents in 2012, 41% of which involved the electrical grid (Prehoda, Schelly, & Pearce, 2017). Nation-states have routinely attempted to map critical infrastructure of the U.S. electrical system for future navigation and control (Gorman, 2009).

The electric grid in general is growing increasingly dependent upon the Internet and other network connections for data communication and monitoring systems (Bessani, Sousa, Correia, Neves, & Verissimo, 2008; Schainker, Douglas, & Kropp, 2006; Sridhar et al., 2012; Ulieru, 2007; Wu, Moslehi, & Bose, 2005). Although this conveniently allows electrical suppliers to manage systems remotely, it increases the susceptibility of the grid to cyber attack, whether through denial of webpage services to consumers, disruption of supervisory control and data acquisition (SCADA) operating systems, or sustained widespread power outages (Aitel, 2013; Krotofil, Cardenas, Larsen, & Gollmann, 2014; Sridhar et al., 2012; Ten, Manimaran, & Liu, 2010). Thus global or regional loss of the Internet could have similar implications.

A less obvious potential cause is a pandemic that disrupts global trade. Countries may ban trade for fear of the disease entering their country, but many countries are dependent on imports for the functioning of their industry. If the region over which electricity is disrupted had significant agricultural production, the catastrophe could be accompanied by a ~10% food production shortfall as well. It is uncertain whether countries outside the affected region would help the affected countries, do nothing, or conquer the affected countries.

Larger versions of these catastrophes could disrupt electricity/industry globally. For instance, multiple HEMPs could be detonated around the world, due to a world nuclear war (Pry, 2017) or terrorists gaining control of nuclear weapons. There is evidence that, in the last 2000 years, two solar storms occurred that were much stronger than the Carrington event (Mekhaldi et al., 2015). Therefore, it is possible that an extreme solar storm could disable electricity, and therefore industry, globally. It is conceivable that a coordinated cyber or physical attack (or a combination) on many electric grids could also disrupt industry globally. Many of the techniques to harden the electric grid could help with this vulnerability, as could moving to more distributed generation and microgrids (Che & Shahidehpour, 2014; Colson, Nehrir, & Gunderson, 2011; Lasseter, 2007; Lasseter & Piagi, 2004; Prehoda et al., 2017; Shahidehpour & Khodayar, 2013). An extreme pandemic could cause enough people to not show up to work that industrial functioning could not be maintained. Though this could be mitigated by directing military personnel to fill vacant positions, if the pandemic were severe enough, it could be rational to retreat from high-human-contact industrial civilization in order to limit disease mortality.

The global loss of electricity could even be self-inflicted as a way of stopping rogue artificial general intelligence (AGI) (Turchin & Denkenberger, 2018a). As current high agricultural productivity depends on industry (e.g. for fertilizers), it has been assumed that there would be mass starvation in these scenarios (Robinson, 2007).

Repairing these systems and re-establishing electrical infrastructure would be a long-term goal, and work should ideally start on it immediately after a catastrophe. However, human needs would need to be met immediately (and continually), and since there are only a few months of stored food, it would likely run out before industry is restored given the current state of preparedness. In some of the less challenging scenarios, it may be possible to continue running some machines on the fossil fuels that had previously been brought to the surface, or using microgrids or shielded electrical systems. In addition, it may be feasible to run some machines on gasified wood (Dartnell, 2014). However, in the worst-case scenario, all unshielded electronics would be destroyed.

Here we focus on catastrophes that only disrupt electricity/industry, rather than catastrophes that could both disable industry and obscure the sun (Cole, Denkenberger, Griswold, Abdelkhaliq, & Pearce, 2016) or catastrophes that only obscure the sun (or affect crops directly in other ways) (Denkenberger & Pearce, 2015b). This paper analyzes the cost effectiveness of interventions from a long-term perspective. First, this study reviews interventions both to avoid a loss of electricity and to feed everyone given this loss. Then the benefits of artificial general intelligence (AGI) safety for the long-term future are reviewed and quantified. Next, two loss-of-industry interventions submodels are developed. The cost for an intervention based on alternative food communication is estimated.

2. Background

2.1 Review of Potential Solutions

An obvious intervention for HEMP is preventing a nuclear exchange, which would be the best outcome. However, it is not neglected, as it has been worked on for many decades (Barrett, Baum, & Hostetler, 2013; D. C. Denkenberger & Pearce, 2018; Helfand, 2013; McIntyre, 2016a; Turchin & Denkenberger, 2018b) and is currently funded at billions of dollars per year quality adjusted (McIntyre, 2016b). Other obvious interventions for HEMP that would also work for solar storms and coordinated physical or cyber threats would be hardening the electrical grid against these threats. However, hardening just the U.S. electrical grid against solar storm and HEMP would cost roughly $20 billion (Pry, 2014). Therefore, globally, hardening against just these two threats would cost around $100 billion. Furthermore, adding hardening against cyber threats would be even more expensive. Again, preventing the collapse of electricity/industry would be the preferable option, but given the high cost, it may not happen. Even if it occurs eventually, it would still be preferable to have a backup plan in the near term and in case hardening is unsuccessful at stopping loss of industry.

A significant problem in loss-of-industry catastrophes is that of food supply (Cole et al., 2016). One intervention is storing years' worth of food, but it is too expensive to have competitive cost effectiveness (and it would take many years, so it would not protect humanity right away, and it would exacerbate current malnutrition) (Baum, Denkenberger, & Pearce, 2016). Furthermore, if electricity/industry is disabled for many years, food storage would be impractical. Stockpiling of industrial goods could be another intervention, but again it would be much more expensive than the interventions considered here.

Interventions for food production given the loss of industry include burning wood from landfills to provide fertilizer and high use of nitrogen-fixing crops including legumes (peas, beans, peanuts, etc.) (Cole et al., 2016). Also, nonindustrial pest control could be used. Despite pre-industrial agricultural productivity (~1.3 dry tons per hectare per year) (Cole et al., 2016), this could feed everyone globally. However, not everyone would be near the food sources, and losing industry would severely hamper transportation capability. Solutions for this problem include backup plans for producing more food locally, including expanding planted area (while minimizing impact to biodiversity, e.g. by expanding into the boreal forest/tundra enhanced by the nutrients from tree decomposition/combustion) and favoring high-calorie-per-hectare foods such as potatoes, yams, sweet potatoes, lentils, and groundnuts (Oke, Redhead, & Hussain, 1990). Though clearing large areas of forest with hand saws would not be practical, it is possible to girdle the trees (remove a strip of bark around the circumference), let the trees dry out, and burn them. This has the advantage of releasing fertilizer to the soils. Another option involves producing "alternative foods," which were proposed for sun-blocking catastrophes (D. Denkenberger & Pearce, 2014). Some of these alternative foods would require industry, but producing non-industrial, lower cost ones such as extracting calories from leaves (D. Denkenberger, Pearce, Taylor, & Black, 2019) could be feasible. For transporting the food and other goods, ships could be modified to be wind powered and animals could pull vehicles (Abdelkhaliq, Denkenberger, Griswold, Cole, & Pearce, 2016).
A global network of shortwave radio transmitters and receivers would facilitate disseminating the message that there is a plan and people need not panic, and would also allow for continuing coordination globally (see below).

Current awareness of interventions given loss of electricity/industry (hereafter "interventions") is very low, likely in the thousands of people. Also, many of the interventions are theoretical only and need to be tested experimentally. There may be a significant number of shortwave radio systems that are shielded from HEMP and have shielded backup power systems, but some addition to this capacity would likely be beneficial. It is unlikely that the loss of industry would directly cause human extinction. However, by definition, there would be a loss of industrial civilization in the global catastrophes. Furthermore, there could be a loss of anthropological civilization (basically cities, or cooperation outside the clan). One definition of the collapse of civilization involves short-term focus, loss of long-distance trade, widespread conflict, and collapse of government (Coates, 2009). Reasons that civilization might not recover include: i) easily accessible fossil fuels and minerals are exhausted (Motesharrei, Rivas, & Kalnay, 2014) (though there would be minerals in landfills), ii) the future climate might not be as stable as it has been for the last 10,000 years (Gregory et al., 2007), or iii) technological and economic data and information might be lost permanently because of the trauma and genetic selection of the catastrophe (Bostrom, 2013). If the loss of civilization were prolonged, a natural catastrophe, such as a supervolcanic eruption or an asteroid/comet impact, could cause the extinction of humanity. Another route to far-future impact is the trauma associated with the catastrophe making future catastrophes, e.g. global totalitarianism, more likely (Bostrom & Cirkovic, 2008).
A further route is that worse values caused by the catastrophe could be locked in by artificial general intelligence (AGI) (Bostrom, 2014), though with the loss of industrial civilization, the advent of AGI would be significantly delayed, so the bad values could have decayed out by then.

2.2 Artificial General Intelligence

AGI itself represents a major, independent risk. The artificial intelligence available now is narrow AI, i.e. it can generally only do a specific task, such as playing Jeopardy! (Schaul, Togelius, & Schmidhuber, 2011). However, there are concerns that as AI systems become more advanced, AGI will eventually be achieved (Bostrom, 2014). Since AGI could perform all human tasks as well as or better than humans, this would include reprogramming the AGI itself. This would enable recursive self-improvement, so there could be an intelligence explosion (Good, 1966). Since the goals of the intelligence may not be aligned with human interests (Bostrom, 2014) and could be pursued with great power, this implies a potentially serious risk (Good, 1966). AGI safety is a top priority in the existential risk community, which seeks to improve humanity's long-term future (Turchin & Denkenberger, 2018b). Though there is uncertainty in when and how AGI may be developed, there are concrete actions that can be taken now to increase the probability of a good outcome (Amodei et al., 2016).

We seek to compare the cost effectiveness of losing industry interventions with AGI safety to discover whether these interventions should also be a top priority. Comparisons to other risks, such as asteroids (Matheny, 2007), climate change (Halstead, 2018) and pandemics (Millett & Snyder-Beattie, 2017), are possible, though these are generally regarded by the existential risk community as lower priority and therefore less informative.

3. Methods

Given the large uncertainties in input parameters, we model cost-effectiveness using a Monte Carlo simulation, producing a probability distribution of cost-effectiveness. Probabilistic uncertainty analysis is used widely in insurance, decision-support and cost-effectiveness modelling (Garrick, 2008). In these models, uncertain parameters are represented by samples drawn from defined distributions that are combined into output samples that form a resultant distribution.

The models consist of a loss-of-industry submodel estimating the risk and mitigation costs of industrial loss, and an AGI risk submodel estimating the risk and mitigation costs of AGI scenarios. These two submodels then allow us to estimate the ratio of cost-effectivenesses and the confidence in that ratio.

Monte Carlo estimation was selected because the probability distributions for various parameters do not come in a form that provides analytically tractable combinations. It also allows exploring parameter sensitivity.
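As a minimal sketch of this approach, the comparison reduces to drawing cost-effectiveness samples for each submodel and reporting the fraction of samples in which one exceeds the other. The lognormal parameters below are hypothetical placeholders chosen for illustration, not the paper's actual inputs.

```python
# Minimal sketch of the Monte Carlo comparison, with illustrative
# (hypothetical) lognormal parameters rather than the paper's inputs.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 32_000  # sample count matching the Analytica runs described below

# Cost-effectiveness samples for each submodel (arbitrary units).
# np.random lognormal takes the mean and sigma of the underlying normal.
industry_ce = rng.lognormal(mean=np.log(10.0), sigma=1.5, size=N)
agi_ce = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=N)

ratio = industry_ce / agi_ce
confidence = (ratio > 1.0).mean()  # fraction of samples favoring industry loss prep
```

With these placeholder parameters the median ratio is about 10 and the confidence comes out near 90%; the point is only that a full output distribution, not a point estimate, is what the model produces.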

The open source software called Guesstimate(2) was originally used to implement the models, and they are available online. However, to enable more powerful analysis and plotting, the models were also implemented in the software Analytica 5.2.9. Combining the uncertainties in all the inputs was performed utilizing a Median Latin Hypercube analysis (similar to Monte Carlo, but better performing (Keramat & Kielbasa, 1997)) with the maximum sample size of 32,000 (run time on a personal computer was seconds). The results from the two programs agreed within the uncertainties due to the finite number of samples, giving greater confidence in the results.
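The median Latin hypercube mechanics can be sketched for a single lognormal input: divide the cumulative probability range into equal strata, sample each stratum at its midpoint through the inverse CDF, and shuffle so strata are paired randomly across variables. This is a generic illustration of the technique, not Analytica's exact implementation, and the median and sigma below are arbitrary example values.

```python
# Sketch of median Latin hypercube sampling for one lognormal variable.
# Generic illustration of the technique; Analytica's internals may differ.
import math
import random
from statistics import NormalDist, fmean

def median_lhs_lognormal(median, sigma, n, rng):
    """n median-LHS samples from a lognormal with the given median and log-sigma."""
    mu = math.log(median)
    midpoints = [(i + 0.5) / n for i in range(n)]  # one midpoint per stratum
    samples = [math.exp(mu + sigma * NormalDist().inv_cdf(p)) for p in midpoints]
    rng.shuffle(samples)  # decorrelates stratum order across input variables
    return samples

xs = median_lhs_lognormal(median=10.0, sigma=1.0, n=1000, rng=random.Random(0))
# The true mean is median * exp(sigma^2 / 2) ~ 16.49; the stratified sample
# mean lands close to it with far less noise than plain Monte Carlo at the same n.
```

Because every stratum is represented exactly once, the sample covers the distribution evenly, which is why Latin hypercube methods converge faster than plain Monte Carlo for the same sample count.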

Figures 1 to 4 illustrate the interrelationships of the nodes for Model 1; Model 2 is identical with the following exception: the input variable node "Mitigation of far future impact of industrial loss from ALLFED so far for 10% industrial loss" was removed from Model 1 because the poll question did not require this input.

Figure 1. Model overview

Figure 2. 100% Industry loss catastrophes submodel (10% industry loss is nearly identical)

Figure 3. AGI safety cost effectiveness submodel

Figure 4. Overall cost effectiveness ratios

3.1 Loss of Industry Interventions Submodel

Table 1 shows the key input parameters for Model 1 (largely Denkenberger and a conference poll of effective altruists) (D. Denkenberger, Cotton-Barratt, Dewey, & Li, 2019a) and Model 2 (Sandberg inputs) (D. Denkenberger, Cotton-Barratt, Dewey, & Li, 2019)(3). Though the authors here are associated with research on loss of industry, two of the four have also published on AGI safety. Also, opinions outside of the loss-of-industry field have been solicited for one of the models. Therefore, we believe the results are representative. All distributions are lognormal unless otherwise indicated. The absolute value of the long-term future is very difficult to quantify, so losses are expressed as a percent.
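Since the inputs are lognormal, each can be specified by a 90% confidence interval and converted to distribution parameters. The helper below follows the convention Guesstimate uses for positive-valued inputs, to our understanding; the specific interval is purely illustrative, not one of the Table 1 values.

```python
# Converting a 90% confidence interval on a positive quantity into the
# (mu, sigma) of its lognormal distribution. Illustrative helper; the
# interval used below is an example, not one of the paper's inputs.
import math
from statistics import NormalDist

Z95 = NormalDist().inv_cdf(0.95)  # ~1.645, z-score of the 95th percentile

def lognormal_from_ci(p5, p95):
    """(mu, sigma) of ln X such that X has the given 5th and 95th percentiles."""
    mu = (math.log(p5) + math.log(p95)) / 2
    sigma = (math.log(p95) - math.log(p5)) / (2 * Z95)
    return mu, sigma

# e.g. an annual probability judged to lie between 0.1% and 3%:
mu, sigma = lognormal_from_ci(0.001, 0.03)
implied_mean = math.exp(mu + sigma ** 2 / 2)
# Right skew pulls the mean (~0.94%) above the geometric midpoint (~0.55%),
# which is why lognormal inputs are summarized here by their means.
```

This asymmetry between mean and median is worth keeping in mind when reading the means quoted below: for wide lognormals, the mean sits well into the upper tail.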

Table 1. Losing industry interventions input variables

The potential causes of the disabling of 10% of global industry include a Carrington-type solar storm, single HEMP, coordinated physical or cyber attack, conventional world war, loss of the Internet, and a pandemic disrupting trade. We are not aware of quantitative estimates of the probability of a coordinated cyber attack, loss of the Internet, a pandemic that significantly disrupts trade, or a conventional world war that destroys significant industry and does not escalate to the use of nuclear weapons. Quantitative model estimates of the probability of full-scale nuclear war between the U.S. and Russia, such as (Barrett et al., 2013), may give some indication of the probability of HEMP. HEMP could accompany nuclear weapons destroying cities, and this would be a combination losing industry/losing the sun scenario, which would benefit from the preparation considered here. Asymmetric warfare, where one country is significantly less powerful than another, could motivate the use of HEMP because it only requires one or two nuclear weapons to disable an entire country. There are significantly more nuclear pairs that could result in HEMP than could result in full-scale nuclear war (the latter is basically the dyads between the US, Russia, and China). And yet one quantitative model estimate of the probability of full-scale nuclear war only between the U.S. and Russia was a mean of 1.7% per year (Barrett et al., 2013). In 2012, there was a near miss of a solar storm of similar size to the Carrington event (Baker et al., 2013). One probability estimate of a Carrington-sized event is ~0.033% per year (Roodman, 2015). However, an estimate of the probability per year of a superflare 20 times as powerful as the Carrington event is 0.1%/year (Lingam & Loeb, 2017); these estimates disagree by orders of magnitude for the same intensity.
Another study proposes that the Carrington-sized event recurrence interval is less than one century (Hayakawa et al., 2019). Given the large uncertainty of solar storms and the significant probability of single EMP, pandemic, and regional cyber attack, Model 1 uses a mean of 3% per year. Model 2 uses a mean of 0.4% per year.
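The bookkeeping behind such aggregate figures can be made explicit: a recurrence interval converts to an annual probability under a Poisson assumption, and independent causes combine as one minus the product of their non-occurrence probabilities. The storm and nuclear-war rates below echo figures quoted above; the cyber rate is a hypothetical placeholder, and the result is an illustration, not the paper's actual aggregation.

```python
# Converting a recurrence interval to an annual probability (Poisson
# assumption) and combining independent causes. The storm and war rates
# echo figures quoted in the text; the cyber rate is a placeholder.
import math

def annual_prob(recurrence_years):
    """P(at least one event in a year) for a Poisson process with this mean recurrence."""
    return 1 - math.exp(-1 / recurrence_years)

p_storm = annual_prob(100)  # "less than one century" Carrington recurrence
p_war = 0.017               # full-scale U.S.-Russia war mean (Barrett et al., 2013)
p_cyber = 0.01              # hypothetical placeholder rate

# Probability that at least one independent cause occurs in a given year:
p_any = 1 - (1 - p_storm) * (1 - p_war) * (1 - p_cyber)
```

Even with these rough placeholder rates, the combined annual probability lands in the low percent range, the same ballpark as the Model 1 mean.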

Intuitively, one would expect that the probability of near-total loss of industry would be significantly lower than that of a 10% loss of industry. Complete loss of industry may correspond to the superflares that may have occurred in the first millennium A.D. (~0.1% per year). We are not aware of quantitative estimates of the probability of multiple EMPs, an industry-halting pandemic, or a global cyber attack. The Model 1 mean is 0.3% per year for near-total loss of industry. The Model 2 mean is 0.09% per year.

At the Effective Altruism Global 2018 San Francisco conference, with significant representation of people with knowledge of existential risk, a presentation was given and the audience was polled about the 100% loss of industry catastrophes. The questions involved the reduction in far-future potential due to the catastrophes with current preparation and if ~$30 million were spent to get prepared. The data from the poll were used directly instead of constructing continuous distributions.

To determine the marginal impact of additional funding, the contribution due to work so far should be quantified. The Alliance to Feed the Earth in Disasters (ALLFED) (ALLFED, 2019) (and ALLFED researchers before the organization was officially formed) has published several papers on interventions for losing industry. It has a website with these papers and summaries, and has also run workshops to investigate planning for these interventions. However, we expect the contribution of ALLFED to reducing the long-term impact of loss of industry to be significantly lower than in the case of obscuring of the sun, because the loss of the Internet may be immediate if there are multiple simultaneous EMPs. However, the loss of electricity may not be simultaneous globally in a cyber attack, and there may be several days' warning for an extreme solar storm. The other reason why current work may be less valuable in a global loss of industry scenario is that fewer people know about the loss-of-industry work of ALLFED than the food-without-the-sun work. Model 1 estimates the reduction in long-term future potential loss from a global loss of industry due to ALLFED work so far as a mean of 0.1%. Model 2 uses 0.004%, due to emphasizing lack-of-communication scenarios.

In the case of a 10% loss of industry, with the exception of the scenario of loss of the Internet everywhere, the Internet in most places would be functioning. Even if the Internet were not functioning, mass media would generally be functioning. Therefore, possible mechanisms for impact due to work so far include the people already aware of the interventions getting the message to decision makers/media in a catastrophe, decision makers finding the three papers (Abdelkhaliq et al., 2016; Cole et al., 2016; Denkenberger et al., 2017) on these interventions, or the people in the media who know about these interventions spreading the message. However, even though people outside of the affected countries could get the information, it may not be feasible to get the information to the people who need it most. Model 2 estimates the reduction in long-term future potential loss from a 10% loss of industry due to ALLFED work so far as a mean of 0.004%, again due to the likely lack of communications in the affected region. Model 1 does not use a value in its calculation.

The mean estimate of the conference participants was a 16% reduction in the long-term future of humanity due to loss of global industry with current preparedness. The Model 2 mean estimate was 7%.

The 10% industry loss catastrophes could result in instability and full-scale nuclear war or other routes to far future impact. Though the poll was not taken for this level of catastrophe, a survey of GCR researchers estimated a mean 13% reduction in the long-term potential of humanity due to a 10% food shortfall (Denkenberger, Sandberg, & Pearce, unpublished results). Some 10% loss of industry catastrophes could cause a ~10% global food shortfall. However, if the affected area were largely developed countries, since they would likely need to become nearly vegan to survive, human-edible food demand could fall 10% because of the reduction in feeding animals. Still, given the possible overlap of these catastrophes, this analysis uses the survey estimate for Model 1. The Model 2 estimate mean is a 0.4% reduction in long-term potential due to 10% loss of industry.

The mean estimates of the percent further reduction in far future loss due to global loss of industry from spending ~$30 million were 40% for the poll and 3% for Model 2. Note that in Model 1, the poll did not ask for the further reduction in far future loss from spending money, but instead for a new far future loss after the money was spent. Therefore, the 40% mean further reduction is a calculated value and does not appear in Table 1. For the 10% industrial shortfalls, our estimate of the mean reduction is 12% for Model 1, because the contribution of additional spending to the aid from outside the affected region would be smaller. On the other hand, it was 5% for Model 2, because its author judged that the likelihood of success would be greater than for the global loss of industry, given the outside aid.

Moral hazard would occur if awareness of interventions makes catastrophes more likely or more intense. Global use of EMP or a coordinated cyber attack could be perpetrated by a terrorist organization trying to destroy civilization. However, if the organization knew of backup plans that could maintain civilization, it might actually be deterred from attempting such an attack. This would result in negative moral hazard (an additional benefit of preparation). However, it is possible that knowledge of a backup plan could result in people expending less effort to harden systems against EMP, solar storm, or cyber attack, creating moral hazard. Therefore, Model 1 uses a mean moral hazard of zero, and Model 2 uses a point value of zero.

For the 10% loss of industry scenarios, the same moral hazard values are used as for the global loss of industry.

3.2 Costs of Interventions

The costs of the proposed interventions are made up of a backup communication system, developing and testing instructions for distributed food production, and making response plans at different levels of government.

Currently, the long-distance shortwave radio frequencies are used by government and military stations, ships at sea, and amateur (ham) radio operators. Because of security considerations, data on the number of government/military stations are difficult to compile. Use by ships has declined because of the availability of low-cost satellite phones, but there are an estimated three million ham operators worldwide (Silver, 2004). Not all of those are licensed to use the shortwave bands, however. In the U.S., about half of the approximately 800,000 American ham operators do hold the necessary license. Assuming the same pattern worldwide, there would be roughly 1.5 million ham radio shortwave stations globally.

However, this analysis conservatively ignores the possibility that there would be existing ham radios that are disconnected, with unplugged backup power systems. Therefore, the cost of the backup communication system of 5 million USD is based on the cost of 10 larger two-way shortwave communication systems (with backup power) that can transmit across oceans (see Appendix A). Then there would be 4,000 smaller one-way shortwave receivers (with backup power) that, when connected to a laptop computer and printer, would be able to print out information. This could be called REcovering Civilization Using Radio (RECUR). It would cover 80% of the world's population within one day's nonmotorized transportation distance (~40 km), according to Geographical Information Systems (GIS) analysis (Fist et al., unpublished results). It is critical to get the message out very quickly that there is a plan and not to panic. Subsequent communication would consist of instructions for immediately meeting basic needs such as food, shelter, and water. This initial planning would be considered open-loop control because it would not have immediate feedback (Liptak, 2018).

In the ensuing months, as reality always deviates from plans, feedback would be required. This could be accomplished by coordinating additional undamaged shortwave and electrical generation equipment to allow two-way communication for many cities. Also, depending on distance, some messages could be communicated through non-electronic means such as horses, smoke signals, and sun-reflecting heliographs of the kind that were used in the western USA before telegraphs (Rolak, 1975; Sterling, 2008).

Instructions would include how to get safe water or treat it (e.g. by filling containers, including cleaned bathtubs, with the water in water towers and treating it with bleach for a limited amount of time, solar water pasteurization (Burch et al., 1998), or boiling). Additional instructions would cover how to keep warm if it is cold outside (Abdelkhaliq et al., 2016). Other instructions would cover how to retrofit a light-duty vehicle to be pulled by a large animal. Because cattle and horses can eat food that is not edible to humans, and because the wheel is so efficient, this would be a much more effective way of moving people than walking. Additional instructions would cover how to create wood-burning stoves and hand and animal farming tools, e.g. from repurposed or landfill materials. A similar project is Open Source Ecology, which has developed blueprints for essential equipment for civilization that can be made from scratch (Open Source Ecology, 2019). All of this should be tested on realistically untrained people and the instructions modified accordingly.

Planning involves determining where different people would need to be relocated in order to have their basic needs met. The critical short-term factors are shelter and water, while food is slightly longer term. The economically optimal plan could be achieved with GIS analysis. However, in order for this to be politically feasible, there would need to be negotiations and precommitments. This may have a similar cost to the government planning for food without the sun of $1 million to $30 million (Denkenberger & Pearce, 2016).

Overall, Model 1 estimates the communications, instructions/testing, and planning for global industry loss would cost roughly 30 million USD (see Table 1). For the regional loss of industry, it is difficult to predict where it might occur, so generally communications and planning should be done for the entire world, and the instructions/experiments would be similar. Therefore, there is a high correlation between preparation for the two catastrophes, so this is assumed to be the cost of preparation for both scales of catastrophe. Model 2 has somewhat higher costs ($50 million mean).

The time horizon of effectiveness of the interventions would depend on the intervention. Modern shortwave radio communications equipment has few moving parts (chiefly cooling fans and motors to rotate directional antennas) and serviceability measured in decades.(5)

Furthermore, these systems need to be disconnected from the grid to be protected from HEMP. This would reduce wear and tear, but regular testing would be prudent. Some of the budget could be used for this and for repair of the units. As for the instructions, since the hand and animal tools are not changing, directions should stay relevant. Planning within governments is susceptible to turnover, but some money could be used to transfer the knowledge to new employees. Model 1 estimates a 25-year mean for the time horizon. Model 2 has a slightly shorter time horizon mean of 20 years, driven by a conservative estimate of the communications equipment lifetime.

3.3 Artificial Intelligence Submodel

The submodel for AGI safety cost-effectiveness was based on the work of the Oxford Prioritisation Project and of Owen Cotton-Barratt and Daniel Dewey (both while at the Future of Humanity Institute at the University of Oxford) (Denkenberger, Cotton-Barratt, Dewey, & Li, 2019b; Li, 2017). We modified it (Denkenberger et al., unpublished results), with major changes including increasing the cost of an AGI safety researcher, making better-behaved distributions, removing one method of calculation, and changing the analysis from average to marginal for the number of researchers. These changes increased the cost-effectiveness of AGI safety by roughly a factor of two and increased the uncertainty considerably (because the method of calculation retained had much greater uncertainty than the one removed). The cost-effectiveness was found at the margin assuming $3 billion of expenditure.

4. Results and Discussion

4.1 Results

In order to convert average cost-effectiveness to marginal for interventions, we use logarithmic returns (Cotton-Barratt, 2014), which result in the relative marginal cost-effectiveness being one divided by the cumulative money spent. An estimate is therefore needed of the cumulative money spent so far on interventions. Under $100,000 equivalent (mostly volunteer time) has been spent so far directly on this effort, nearly all by ALLFED. A very large amount of money has been spent on trying to prevent nuclear war, hardening military installations to HEMP, and cyber security. However, note that even though US military infrastructure is supposedly hardened to EMP, it may not be able to withstand a "super" EMP weapon that some countries may possess (Pry, 2017) or sophisticated cyber attacks. More relevant, money has been spent on farming organically and less industrially for traditional sustainability reasons. Also, Open Source Ecology has developed instructions for critical equipment. These could represent tens of millions of dollars that would otherwise have needed to be spent on catastrophe preparation, which is relevant for the marginal $30 million case. However, there are still very high value interventions that should be done first, such as collecting instructions for producing hand/animal farm tools without industry and giving them to at least some governments and owners of disconnected shortwave radios and backup power sources. Though the interventions would not work as well as with ~$30 million of research/communications backup, simply having some critical people know about them and implement them in their own communities/countries without trade could still significantly increase the chance of retaining anthropological civilization. The cost of these first interventions would be very low, so they would have very high cost-effectiveness.
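The logarithmic-returns assumption can be checked with a short calculation: if the benefit of cumulative spending C is proportional to ln(C), the marginal cost-effectiveness at C is proportional to 1/C, and the average cost-effectiveness of moving cumulative spending from C0 to C1 is ln(C1/C0)/(C1 - C0). A minimal sketch (the ~$100,000 spent so far and ~$30 million total are the figures from the text; the proportionality constant cancels in the ratios):

```python
import math

def marginal_ce(c):
    # Marginal cost-effectiveness at cumulative spending c: d/dc of ln(c) is 1/c
    return 1.0 / c

def average_ce(c0, c1):
    # Average cost-effectiveness of moving cumulative spending from c0 to c1
    return math.log(c1 / c0) / (c1 - c0)

spent_so_far = 1e5  # ~$100,000 spent to date (from the text)
total = 3e7         # ~$30 million program (Model 1)

avg = average_ce(spent_so_far, total)
print(marginal_ce(spent_so_far) / avg)  # ~52: marginal dollar now vs. average
print(avg / marginal_ce(total))         # ~5.7: average vs. last dollar
```

These ratios reproduce the "about 50 times greater" and "6 times less" figures quoted later for Model 1; running the same calculation with Model 2's ~$50 million total gives a marginal-now ratio of roughly 80, in the neighborhood of the ~70 reported, with the exact value depending on the baseline spending assumed.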

Table 2 shows the ranges of the far future potential increase per $ due to loss of industry preparation, averaged over ~$30 million for Model 1 and ~$50 million for Model 2, and for AGI safety research at the $3 billion margin. The distributions are shown in Figure 5. Because the variance of Model 1 is very high, the mean cost-effectiveness is high, driven by the small probability of very high cost-effectiveness.

Table 2. Cost-effectiveness comparison

Figure 5. Far future potential increase per $ due to loss of industry preparation, averaged over ~$30 million for Model 1 and ~$50 million for Model 2, and for AGI safety research at the $3 billion margin. Further to the right is more cost-effective.

With logarithmic returns, the cost-effectivenesses of the marginal dollar now (the 100,000th dollar) and of the last dollar are about 50 times greater than, and 6 times less than, the average cost-effectiveness of spending $30 million, respectively. For Model 2, the corresponding numbers are about 70 times greater than and 6 times less than the average cost-effectiveness of spending $50 million. Ratios of the means of the cost-effectiveness distributions are reported in Table 3.(6) Comparing to AGI safety at the margin, Model 1 yields the 30 millionth dollar on losing industry being 20 times more cost effective, the average $30 million on interventions being 100 times more cost effective, and the marginal dollar now on interventions being 5,000 times more cost effective (Table 3). Model 2 yields the last dollar on interventions being 0.05 times as cost effective, the average ~$50 million on interventions being 0.2 times as cost effective, and the marginal dollar now on interventions being 20 times as cost effective. Given orders of magnitude of uncertainty and the sensitivity of these ratios to the relative uncertainty of the interventions, the probabilities that one is more cost effective than the other are likely more robust. Comparing to AGI safety at the margin, Model 1 finds ~88% probability that the 30 millionth dollar on interventions is more cost effective, ~95% probability that the average $30 million on interventions is more cost effective, and ~99+% probability that the marginal dollar now on interventions is more cost effective (see Table 3). Model 2 finds ~50% probability that the 50 millionth dollar on interventions is more cost effective than AGI safety, ~76% probability that the average $50 million on interventions is more cost effective, and ~99% probability that the marginal dollar now on interventions is more cost effective. Note that the greater than 50% probability for the average cost-effectiveness, despite the ratio of the means of cost-effectiveness being less than one, is due to the relatively smaller variance of the Model 2 cost-effectiveness estimate (see Figure 5).
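The effect noted here (a probability above 50% that interventions are more cost effective even though the ratio of the mean cost-effectivenesses is below one) can be reproduced with a toy Monte Carlo comparison. The lognormal parameters below are made up for illustration and are not the paper's fitted distributions; the point is only that a heavy upper tail can dominate a mean without winning most pairwise comparisons:

```python
import random
import statistics

random.seed(0)
N = 200_000

# Hypothetical stand-ins: a lower-variance lognormal for the interventions
# cost-effectiveness and a higher-variance lognormal for AGI safety.
interventions = [random.lognormvariate(0.0, 0.5) for _ in range(N)]
agi_safety = [random.lognormvariate(-2.0, 3.0) for _ in range(N)]

ratio_of_means = statistics.fmean(interventions) / statistics.fmean(agi_safety)
p_better = statistics.fmean(a > b for a, b in zip(interventions, agi_safety))

print(ratio_of_means)  # well below 1: rare extreme draws dominate the AGI mean
print(p_better)        # roughly 0.74: interventions usually more cost effective
```

Here the high-variance distribution's mean is driven by rare extreme draws, so the lower-variance distribution wins the majority of head-to-head comparisons despite its much smaller mean, which is the pattern Figure 5 shows for Model 2 versus AGI safety.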

Table 3. Key cost-effectiveness outputs of losing industry interventions

Overall, the mean cost-effectiveness of Model 1 is about 2.5 orders of magnitude higher than that of Model 2. However, due to the smaller variance of the Model 2 distributions, there was similar confidence that losing industry interventions at the margin now are more cost-effective than AGI safety. Another large difference is that Model 1 found that 10% loss of industry scenarios have similar cost-effectiveness for the far future as global loss, because the greater probability of these catastrophes counteracted the smaller far future impact. However, Model 2 rated the cost-effectiveness of the 10% industry loss as ~1.5 orders of magnitude lower than for global loss. Given the agreement of high confidence that further work is justified at this point, some of this further work could be used to resolve the significant uncertainties to determine whether more money is justified: value of information (Barrett, 2017).

Being prepared for loss of industry might protect against unknown risks, meaning the cost-effectiveness would increase.

According to Model 1, every year of acceleration in preparation for losing industry would increase the long-term value of humanity by 0.00009% to 0.4% (mean of 0.07%). The corresponding Model 2 numbers are 0.00006% to 0.0004% (mean of 0.00017%). Either way, there is great urgency to get prepared.

It is not necessary for interventions to be more cost effective than AGI safety in order to fund losing industry interventions on a large scale. Funding in the existential risk community goes to other causes, e.g. engineered pandemics. One estimate of the cost-effectiveness of biosecurity was much lower than for AGI safety and losing industry interventions, but the authors were being very conservative (Millett & Snyder-Beattie, 2017). Another area of existential risk that has received investment is asteroid impact, which again has much lower cost-effectiveness than losing industry interventions (Matheny, 2007).

The importance, tractability, neglectedness (ITN) framework (Effective Altruism Concepts, 2019) is useful for prioritizing cause areas. Importance is the expected impact of the risk on the long-term future. Tractability measures the ease of making progress. Neglectedness quantifies how much effort is being directed towards reducing the risk. Unfortunately, this framework cannot be applied to interventions straightforwardly, because addressing a risk could involve many potential interventions. Nevertheless, some semi-quantitative insights can be gleaned. The importance of AGI is larger than that of industry loss catastrophes, but industry loss interventions are far more neglected.

Though these interventions for the loss of industry are not compared directly to food without the sun interventions, both are compared to the same AGI safety submodel. Overall, Model 2 indicates that spending $50 million on interventions for the loss of industry is competitive with AGI safety. However, Model 1 here and both models for food without the sun indicate that spending significantly more than the proposed amount (~$100 million) would be justified from the long-term future perspective.

The AGI safety submodel was used to estimate the cost-effectiveness of saving expected lives in the present generation, finding $16-$12,000 per expected life saved (Denkenberger et al., unpublished results). This is generally more cost effective than GiveWell estimates for global health interventions: $900-$7,000 (GiveWell, 2017). Food without the sun is significantly better ($0.20-$400 per expected life) for only 10% global food production shortfalls (Denkenberger & Pearce, 2016) and generally better considering only one country ($1-$20,000 per expected life) and only nuclear winter (Denkenberger & Pearce, 2016). Model 2 for interventions for losing industry has similar long-term future cost-effectiveness to AGI safety, indicating that the lifesaving cost-effectiveness of interventions for losing industry would likely be competitive with AGI safety and global health, but confirming this requires future work. Model 1 for interventions for losing industry has similar long-term future cost-effectiveness to food without the sun, indicating that loss of industry preparations may save lives in the present generation less expensively than AGI safety and global health. Since AGI safety appears to be underfunded from the present generation perspective, it would be extremely underfunded when taking into account future generations. If this were corrected, then in order for interventions for losing industry to remain of similar cost-effectiveness to AGI safety, more funding for losing industry interventions would be justified.

4.2 Timing of Funding

If one agrees that interventions for losing industry should be a significant part of the existential risk reduction portfolio, there remains the question of how to allocate funding to the different causes over time. For AGI safety, there are arguments both for funding later and for funding now (Ord, 2014). For interventions for losing industry, since most of the catastrophes could happen right away, there is significantly greater urgency to fund them now. Furthermore, it is relatively more effective to scale up the funding quickly because, through requests for proposals, the effort could co-opt relevant existing expertise (e.g. in shortwave radio). Since we have not monetized the value of the far future, we cannot use conventional cost-effectiveness metrics such as the benefit-to-cost ratio, net present value, payback time, and return on investment. However, in the case of saving expected lives in the present generation for the global case and 10% food shortfalls, the return on investment was from 100% to 5,000,000% per year (Denkenberger & Pearce, 2016) based on monetized life savings. This suggests that the $40 million or so for interventions for losing industry should be mostly spent in the next few years to optimally reduce existential risk (a smaller amount would maintain preparedness into the future).

4.3 Uncertainty and Parameter Sensitivity

Parameter sensitivities of Model 1 and Model 2 were investigated using the Analytica importance analysis function. This uses the absolute rank-order correlation between each input and the output as a measure of the strength of monotonic relations between each uncertain input and a selected output, both linear and otherwise (Chrisman et al., 2007; Morgan & Henrion, 1990). Analysis was focused on the alternative foods submodel, i.e. global loss of industry and 10% industry loss catastrophes. Parameter sensitivity within AGI safety was not investigated, as this submodel was adapted from previous work by the Oxford Prioritisation Project, which discussed uncertainties within the AGI safety cost-effectiveness submodel (Denkenberger et al., 2019b; Li, 2017).
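As a sketch of this sensitivity measure (the absolute rank-order, i.e. Spearman, correlation between each uncertain input and an output), the following toy example implements it directly rather than using Analytica. The three-input model is invented for illustration: the output depends strongly on input a, weakly on b, and not at all on c.

```python
import random

def ranks(xs):
    # Position of each value in the sorted order (ties are rare for continuous draws)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def importance(inputs, output):
    # Absolute rank-order (Spearman) correlation of each input with the output
    ro = ranks(output)
    return {name: abs(pearson(ranks(xs), ro)) for name, xs in inputs.items()}

random.seed(1)
n = 5000
a = [random.lognormvariate(0, 1) for _ in range(n)]
b = [random.lognormvariate(0, 1) for _ in range(n)]
c = [random.lognormvariate(0, 1) for _ in range(n)]  # irrelevant input
out = [ai * (1 + 0.1 * bi) for ai, bi in zip(a, b)]

print(importance({"a": a, "b": b, "c": c}, out))
```

Rank-order correlation captures monotonic but nonlinear relationships that ordinary linear correlation can understate, which is why importance analysis for highly skewed inputs uses ranks rather than raw values.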

The key output nodes in Table 3 could not be investigated directly using the importance analysis function because the node outputs are point values, a result of calculating the ratio of means (the Analytica importance analysis function requires a chance variable to perform absolute rank-order correlation). Therefore, the preceding node in the models, Far future potential increase per $ due to loss of industry preparation, was used to investigate the importance of the input variables of the alternate foods submodel.

Importance analysis of the node Far future potential increase per $ due to loss of industry preparation showed that Model 1 had the greatest sensitivity to the input variable Reduction in far future potential due to 10% industrial loss with current preparation, closely followed by Reduction in far future potential due to global loss of industry with current preparation (Figure 6). Model 2 showed the greatest sensitivity to the input variable Cost of interventions ($ million) (global loss of industry) (Figure 7).

Figure 6. Importance analysis results for Far future potential increase per $ due to loss of industry preparation for Model 1.

Figure 7. Importance analysis results for Far future potential increase per $ due to loss of industry preparation for Model 2.

Successive rounds of parametric analysis were performed to determine combinations of input parameters sufficiently unfavorable to alternative foods, until the cost-effectiveness ratios (Table 3) switched to favoring AGI safety. Unfavorable input values were limited to the 5th or 95th percentile values of the original input distributions. Model 1 required 7 unfavorable input parameters to switch to AGI safety being more cost effective than losing industry interventions at the margin now, while Model 2 required 4 input variables (see Table 4).

Table 4. Combination of input variables resulting in AGI safety being more cost effective than losing industry interventions at the margin now.

5. Conclusions and Future Work

There are a number of existential risks that have the potential to reduce the long-term potential of humanity. These include AGI and electricity/industry disrupting catastrophes such as extreme solar storm, EMP, and coordinated cyber attack. Here we present the first long-term future cost-effectiveness analyses of interventions for losing industry. There is great uncertainty in both AGI safety and interventions for losing industry. However, the models have 99%-99+% confidence that funding interventions for losing industry now is more cost effective than additional funding for AGI safety beyond the expected $3 billion. In order to make AGI safety more cost effective than losing industry interventions according to the means of their distributions, four variables in Model 2 had to be changed to the 5th percentile on the pessimistic end simultaneously. For Model 1, seven variables had to be changed. Therefore, it is quite robust that a significant amount of money should be invested in losing industry interventions now. There is closer to 50%-88% confidence that spending the ~$40 million on interventions for losing industry is more cost effective than AGI safety. These interventions address catastrophes that have a significant likelihood of occurring in the next decade, so funding is particularly urgent. Both AGI safety and interventions for losing industry save expected lives in the present generation more cheaply than global poverty interventions, so funding should increase for both. The cost-effectiveness at the margin of interventions for the loss of industry is similar to that for food without the sun (for industry versus sun, Model 1 is ~1 order of magnitude more cost effective, but Model 2 is ~1 order of magnitude less cost effective). Because the electricity/industry catastrophes could happen immediately, and because existing expertise relevant to food without industry could be co-opted by charitable giving, it is likely optimal to spend most of this money in the next few years.

Since there may be scenarios of people eating primarily one food, micronutrient sufficiency should be checked, though it would be less of an issue than for food without the sun (Denkenberger & Pearce, 2018; Griswold et al., 2016). Higher-priority future research includes ascertaining the number and distribution of unplugged shortwave radio systems with unplugged power systems that could be utilized in a catastrophe. Additional research includes the feasibility of the continuation of improved crop varieties despite loss of industry. Further research includes estimating the rapidity of scale-up of hand- and animal-powered farm tools. Estimating the efficacy of pest control without industry would be valuable. Better quantifying the capability of using fertilizer based on ash would be aided by GIS analysis. Additional work is surveying whether there have been experiments on the agricultural productivity achieved by people inexperienced in farming by hand.

Another piece of future work would be to analyze the cost-effectiveness of AGI safety and preparation for the loss of industry in terms of species saved. Rogue AGI could cause the extinction of nearly all life on Earth. If there were mass starvation due to the loss of electricity/industry, humans would likely eat many species to extinction. Therefore, being able to meet human needs would save species. These cost-effectivenesses could be compared to the cost-effectiveness of conventional methods of saving species. Finally, additional future work involves better quantifying the cost of preparedness for the loss of industry. Furthermore, research for the actual preparedness should be done, including estimating the amount of unplugged communications hardware and backup power, testing the backup communications system, performing experiments demonstrating the capability to quickly construct hand/animal farm tools, and developing quick training to use them. Also, investigating alternative food sources that do not require industry would be beneficial, such as seaweed (Mill et al., unpublished results).

Footnotes

(1) This vulnerability can be addressed with distributed generation and microgrids (S. M. Amin, 2010; Lovins & Lovins, 1982; Prehoda et al., 2017; Zerriffi, Dowlatabadi, & Strachan, 2002), but these technologies are still far from ubiquitous.

(2) One can change numbers in viewing mode to see how outputs change, but alterations will not save. To save a new version, one can make a copy of the model. Click View, then Visible, to show arrows of relationships between cells. Drag the mouse over cells to see comments. Click on a cell to show its equation.

(3) Lognormal results in the median being the geometric mean of the bounds (multiply the 5th and 95th percentiles and raise to the 0.5 power). Note that with large variances, the mean is generally much higher than the median.
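A short sketch of this footnote's arithmetic, assuming the 5th and 95th percentiles fully determine the lognormal (z of about 1.645 is the 95th percentile of a standard normal); the example bounds are hypothetical:

```python
import math

Z95 = 1.645  # standard normal 95th percentile

def lognormal_from_bounds(p5, p95):
    # Median is the geometric mean of the 5th and 95th percentiles
    median = math.sqrt(p5 * p95)
    # Spread of the underlying normal, from the width of the 5%-95% interval
    sigma = math.log(p95 / p5) / (2 * Z95)
    mean = median * math.exp(sigma ** 2 / 2)
    return median, mean

# Hypothetical bounds spanning four orders of magnitude
median, mean = lognormal_from_bounds(1e-4, 1.0)
print(median)  # 0.01, the geometric mean of the bounds
print(mean)    # ~0.5, far above the median because the variance is large
```

This illustrates why the mean cost-effectiveness estimates in the models are driven by the upper tails of wide distributions.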

(4) The global loss poll gave people ranges, including <0.1%, 0.1% to 1%, 1% to 10%, and 10% to 100%. All responses in a range were recorded as approximately the geometric mean of the range. Half of the people were therefore recorded as a 30% loss of the far future. If the people had been able to provide exact values, likely one of them would have recorded greater than 40%, which was the upper bound for the 10% loss of industry, making these results consistent. However, even with the constraints of the data, the mean and median are higher for the global loss of industry than for the 10% loss of industry.

(5) On any given day, eBay lists numerous used shortwave radio transmitters/receivers still in fully operational condition, some of them manufactured in the 1960s.

(6) Ratios of means require manual changes in Guesstimate, which we note in all caps in the model.

Appendix A: Radio component costs—available at https://osf.io/rgq2z/

References

Abdelkhaliq, M., Denkenberger, D., Griswold, M., Cole, D., & Pearce, J. (2016). Providing Non-food Needs if Industry is Disabled.

Aitel, D. (2013). Cybersecurity Essentials for Electric Operators. The Electricity Journal, 26(1), 52–58. https://doi.org/10.1016/j.tej.2012.11.014

ALLFED. (2019, April 10). Home. Retrieved April 10, 2019, from ALLFED website: http://allfed.info/

Amin, M. (2002). Security challenges for the electricity infrastructure. Computer, 35(4), supl8–supl10. https://doi.org/10.1109/MC.2002.1012423

Amin, M. (2005). Energy Infrastructure Defense Systems. Proceedings of the IEEE, 93(5), 861–875. https://doi.org/10.1109/JPROC.2005.847257

Amin, S. M. (2010). Electricity infrastructure security: Toward reliable, resilient and secure cyber-physical power and energy systems. IEEE PES General Meeting, 1–5. IEEE.

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety. ArXiv:1606.06565 [Cs]. Retrieved from http://arxiv.org/abs/1606.06565

Avalos, G. (2014, August 27). PG&E substation in San Jose that suffered a sniper attack has a new security breach. Retrieved August 8, 2019, from The Mercury News website: https://www.mercurynews.com/2014/08/27/pge-substation-in-san-jose-that-suffered-a-sniper-attack-has-a-new-security-breach/

Baker, D. N., Li, X., Pulkkinen, A., Ngwira, C. M., Mays, M. L., Galvin, A. B., & Simunac, K. D. C. (2013). A major solar eruptive event in July 2012: Defining extreme space weather scenarios. Space Weather, 11(10), 585–591. https://doi.org/10.1002/swe.20097

Barrett, A. M. (2017). Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction. Forthcoming in Decision Analysis.

Barrett, A. M., Baum, S. D., & Hostetler, K. R. (2013). Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia. Sci. Global Secur., 21(2), 106–133.

Baum, S. D., Denkenberger, D. C., & Pearce, J. M. (2016). Alternative Foods as a Solution to Global Food Supply Catastrophes. Solutions.

Bernstein, A., Bienstock, D., Hay, D., Uzunoglu, M., & Zussman, G. (2012). Sensitivity analysis of the power grid vulnerability to large-scale cascading failures. ACM SIGMETRICS Performance Evaluation Review, 40(3), 33. https://doi.org/10.1145/2425248.2425256

Bessani, A. N., Sousa, P., Correia, M., Neves, N. F., & Verissimo, P. (2008). The CRUTIAL way of critical infrastructure protection. IEEE Security & Privacy, (6), 44–51.

Bostrom, N. (2013). Existential Risk Prevention as Global Priority. Global Policy, 4(1), 15–31. https://doi.org/10.1111/1758-5899.12002

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies (First edition). Oxford: Oxford University Press.

Bostrom, N., & Cirkovic, M. M. (Eds.). (2008). Global Catastrophic Risks. New York: Oxford University Press.

Burch, J. D., & Thomas, K. E. (1998). Water disinfection for developing countries and potential for solar thermal pasteurization. Solar Energy, 64(1–3), 87–97.

Che, L., & Shahidehpour, M. (2014). DC microgrids: Economic operation and enhancement of resilience by hierarchical control. IEEE Transactions on Smart Grid, 5(5), 2517–2526.

Chrisman, L., Henrion, M., Morgan, R., Arnold, B., Brunton, F., Esztergar, A., & Harlan, J. (2007). Analytica user guide. Los Gatos, CA: Lumina Decision Systems.

Coates, J. F. (2009). Risks and threats to civilization, humankind, and the earth. Futures, 41(10), 694–705. https://doi.org/10.1016/j.futures.2009.07.010

Cole, D. D., Denkenberger, D., Griswold, M., Abdelkhaliq, M., & Pearce, J. (2016). Feeding Everyone if Industry is Disabled. Proceedings of the 6th International Disaster and Risk Conference. Presented at the 6th International Disaster and Risk Conference, Davos, Switzerland.

Colson, C., Nehrir, M., & Gunderson, R. (2011). Distributed multi-agent microgrids: A decentralized approach to resilient power system self-healing. 83–88. IEEE.

Cotton-Barratt, O. (2014, October). The law of logarithmic returns. Retrieved April 10, 2019, from The Future of Humanity Institute website: http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/

Dartnell, L. (2014). The Knowledge: How to Rebuild Our World from Scratch. Random House.

Denkenberger, D. C., & Pearce, J. M. (2018, June 14). A National Pragmatic Safety Limit for Nuclear Weapon Quantities. Safety, 4(2), 25. https://doi.org/10.3390/safety4020025

Denkenberger, D., & Pearce, J. (2018). Design optimization of polymer heat exchanger for automated household-scale solar water pasteurizer. Designs, 2(2), 11. https://doi.org/10.3390/designs2020011

Denkenberger, D., Cotton-Barratt, O., Dewey, D., & Li, S. (2019a, August 10). Foods without industry and AI X risk cost effectiveness general far future impact Denkenberger. Retrieved August 10, 2019, from Guesstimate website: https://www.getguesstimate.com/models/11599

Denkenberger, D., Cotton-Barratt, O., Dewey, D., & Li, S. (2019b, August 12). Machine Intelligence Research Institute—Oxford Prioritisation Project. Retrieved August 12, 2019, from Guesstimate website: https://www.getguesstimate.com/models/8789

Denkenberger, D., Cotton-Barratt, O., Dewey, D., & Li, S. (2019, April 10). Food without the sun and AI X risk cost effectiveness general far future impact publication. Retrieved April 10, 2019, from Guesstimate website: https://www.getguesstimate.com/models/13082

Denkenberger, D., & Pearce, J. (2018). Micronutrient availability in alternative foods during agricultural catastrophes. Agriculture, 8(11), 169.

Denkenberger, D., & Pearce, J. M. (2014). Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. Academic Press.

Denkenberger, D., Pearce, J., Taylor, A. R., & Black, R. (2019). Food without sun: Price and life-saving potential. Foresight, 21(1), 118–129.

Denkenberger, D., Sandberg, A., & Pearce, J. M. (unpublished results). Long Term Cost-Effectiveness of Alternative Foods for Global Catastrophes.

Denkenberger, D. C., Cole, D. D., Abdelkhaliq, M., Griswold, M., Hundley, A. B., & Pearce, J. M. (2017). Feeding everyone if the sun is obscured and industry is disabled. International Journal of Disaster Risk Reduction, 21, 284–290.

Denkenberger, D. C., & Pearce, J. M. (2015b). Feeding everyone: Solving the food crisis in event of global catastrophes that kill crops or obscure the sun. Futures, 72, 57–68.

Denkenberger, D. C., & Pearce, J. M. (2016). Cost-Effectiveness of Interventions for Alternate Food to Address Agricultural Catastrophes Globally. International Journal of Disaster Risk Science, 7(3), 205–215. https://doi.org/10.1007/s13753-016-0097-2

Effective Altruism Concepts. (2019, April 10). Importance, tractability, neglectedness framework. Retrieved April 10, 2019, from Effective Altruism Concepts website: https://concepts.effectivealtruism.com/concepts/importance-neglectedness-tractability/

Foster, J. S., Gjelde, E., Graham, W. R., Hermann, R. J., Kluepfel, H. (Hank) M., Lawson, R. L., … Woodard, J. B. (2004, July 22). Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack. Retrieved June 30, 2016, from Committee on Armed Services House of Representatives website: http://commdocs.house.gov/committees/security/has204000.000/has204000_0.HTM

Foster, J. S., Jr., Gjelde, E., Graham, W. R., Hermann, R. J., Kluepfel, H. (Hank) M., Lawson, R. L., … Woodard, J. B. (2008). Report of the commission to assess the threat to the United States from electromagnetic pulse (EMP) attack: Critical national infrastructures. Retrieved from DTIC Document website: http://www.empcommission.org/docs/A2473-EMP_Commission-7MB.pdf

Garrick, B. J. (2008). Quantifying and controlling catastrophic risks. Academic Press.

Gent, M. R., & Costantini, L. P. (2003). Reflections on security [power systems]. IEEE Power and Energy Magazine, 1(1), 46–52.

GiveWell. (2017, November). Cost-Effectiveness. Retrieved April 10, 2019, from GiveWell website: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness

Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31–88). Elsevier.

Goodin, D. (2016, January 4). First known hacker-caused power outage signals troubling escalation. Retrieved from http://arstechnica.com/security/2016/01/first-known-hacker-caused-power-outage-signals-troubling-escalation/

Gorman, S. (2009, April 9). Electricity Grid in U.S. Penetrated By Spies. Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB123914805204099085

Gregory, J., Stouffer, R. J., Molina, M., Chidthaisong, A., Solomon, S., Raga, G., … Stone, D. A. (2007). Climate Change 2007: The Physical Science Basis. Retrieved from http://copa.acguanacaste.ac.cr:8080/handle/11606/461

Griswold, M., Denkenberger, D., Abdelkhaliq, M., Cole, D., Pearce, J., & Taylor, A. R. (2016). Vitamins in Agricultural Catastrophes. Proceedings of the 6th International Disaster and Risk Conference. Presented at the 6th International Disaster and Risk Conference, Davos, Switzerland.

Halstead, J. (2018, May). Climate Change Cause Area Report. Founders Pledge.

Hayakawa, H., Ebihara, Y., Willis, D. M., Toriumi, S., Iju, T., Hattori, K., … Ribeiro, J. R. (2019). Temporal and Spatial Evolutions of a Large Sunspot Group and Great Auroral Storms around the Carrington Event in 1859. Space Weather.

Hébert, C. (2013). The Most Critical of Economic Needs (Risks): A Quick Look at Cybersecurity and the Electric Grid. The Electricity Journal, 26(5), 15–19. https://doi.org/10.1016/j.tej.2013.05.009

Helfand, I. (2013). Nuclear famine: Two billion people at risk. International Physicians for the Prevention of Nuclear War, 20.

Kelly-Detwiler, P. (2014, July 31). Failure to Protect U.S. Against Electromagnetic Pulse Threat Could Make 9/11 Look Trivial Someday. Retrieved August 7, 2019, from https://www.forbes.com/sites/peterdetwiler/2014/07/31/protecting-the-u-s-against-the-electromagnetic-pulse-threat-a-continued-failure-of-leadership-could-make-911-look-trivial-someday/#2ed092db7a14

Keramat, M., & Kielbasa, R. (1997). Latin hypercube sampling Monte Carlo estimation of average quality index for integrated circuits. In Analog Design Issues in Digital VLSI Circuits and Systems (pp. 131–142). Springer.

Kinney, R., Crucitti, P., Albert, R., & Latora, V. (2005). Modeling cascading failures in the North American power grid. The European Physical Journal B, 46(1), 101–107. https://doi.org/10.1140/epjb/e2005-00237-9

Klein, C. (2012, March 14). A Perfect Solar Superstorm: The 1859 Carrington Event. Retrieved August 7, 2019, from HISTORY website: https://www.history.com/news/a-perfect-solar-superstorm-the-1859-carrington-event

Krotofil, M., Cardenas, A., Larsen, J., & Gollmann, D. (2014). Vulnerabilities of cyber-physical systems to stale data—Determining the optimal time to launch attacks. International Journal of Critical Infrastructure Protection, 7(4), 213–232.

Kushner, D. (2013). The real story of Stuxnet. IEEE Spectrum, 50(3), 48–53. https://doi.org/10.1109/MSPEC.2013.6471059

Lasseter, R. H. (2007). Microgrids and distributed generation. Journal of Energy Engineering, 133(3), 144–149.

Lasseter, R. H., & Piagi, P. (2004). Microgrid: A conceptual solution. 6, 4285–4291. Citeseer.

Li, S. (2017, May 12). A model of the Machine Intelligence Research Institute—Oxford Prioritisation Project—EA Forum. Retrieved August 12, 2019, from https://forum.effectivealtruism.org/posts/NbFZ9yewJHoicpkBr/a-model-of-the-machine-intelligence-research-institute

Lingam, M., & Loeb, A. (2017). Risks for life on habitable planets from superflares of their host stars. The Astrophysical Journal, 848(1), 41.

Liptak, B. G. (2018). Instrument Engineers' Handbook, Volume Two: Process Control and Optimization. CRC Press.

Lovins, A. B., & Lovins, L. H. (1982). Brittle power. Brick House Publishing Company.

Matheny, J. G. (2007). Reducing the risk of human extinction. Risk Analysis: An International Journal, 27(5), 1335–1344.

McIntyre, P. (2016a, April 12). How you can lower the risk of a catastrophic nuclear war. Retrieved August 13, 2019, from 80,000 Hours website: https://80000hours.org/problem-profiles/nuclear-security/

McIntyre, P. (2016b, April 12). How you can lower the risk of a catastrophic nuclear war. Retrieved August 9, 2019, from 80,000 Hours website: https://80000hours.org/problem-profiles/nuclear-security/

Mekhaldi, F., Muscheler, R., Adolphi, F., Aldahan, A., Beer, J., McConnell, J. R., … Synal, H.-A. (2015). Multiradionuclide evidence for the solar origin of the cosmic-ray events of AD 774/5 and 993/4. Nature Communications, 6.

Millett, P., & Snyder-Beattie, A. (2017). Existential Risk and Cost-Effective Biosecurity. Health Security, 15(4), 373–383. https://doi.org/10.1089/hs.2017.0028

Morgan, M. G., & Henrion, M. (1990). Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. New York, NY: Cambridge University Press.

Motesharrei, S., Rivas, J., & Kalnay, E. (2014). Human and nature dynamics (HANDY): Modeling inequality and use of resources in the collapse or sustainability of societies. Ecological Economics, 101, 90–102. https://doi.org/10.1016/j.ecolecon.2014.02.014

Motter, A. E., & Lai, Y.-C. (2002). Cascade-based attacks on complex networks. Physical Review E, 66(6), 065102. https://doi.org/10.1103/PhysRevE.66.065102

Nai Fovino, I., Guidi, L., Masera, M., & Stefanini, A. (2011). Cyber security assessment of a power plant. Electric Power Systems Research, 81(2), 518–526. https://doi.org/10.1016/j.epsr.2010.10.012

National Research Council. (2012). Terrorism and the electric power delivery system. National Academies Press.

Oak Ridge National Laboratory. (2010). Electromagnetic Pulse: Effects on the U.S. Power Grid. 6.

Oke, O., Redhead, J., & Hussain, M. (1990). Roots, tubers, plantains and bananas in human nutrition. FAO Food and Nutrition Series, 24, 182.

Onyeji, I., Bazilian, M., & Bronk, C. (2014). Cyber Security and Critical Energy Infrastructure. The Electricity Journal, 27(2), 52–60. https://doi.org/10.1016/j.tej.2014.01.011

Open Source Ecology. (2019, August 10). Open Source Ecology. Retrieved August 10, 2019, from https://www.opensourceecology.org/

Ord, T. (2014, July 3). The timing of labour aimed at reducing existential risk. Retrieved April 10, 2019, from The Future of Humanity Institute website: https://www.fhi.ox.ac.uk/the-timing-of-labour-aimed-at-reducing-existential-risk/

Pagliery, J. (2015, October 16). Sniper attack on California power grid may have been "an insider," DHS says. Retrieved August 8, 2019, from CNNMoney website: https://money.cnn.com/2015/10/16/technology/sniper-power-grid/index.html

Prehoda, E. W., Schelly, C., & Pearce, J. M. (2017). US strategic solar photovoltaic-powered microgrid deployment for enhanced national security. Renewable and Sustainable Energy Reviews, 78, 167–175.

Pry, P. (2017). Nuclear EMP Attack Scenarios and Combined-Arms Cyber Warfare. 65.

Pry, P. V. (2014, May 8). Electromagnetic Pulse (EMP): Threat to Critical Infrastructure. Retrieved August 14, 2019, from https://www.govinfo.gov/content/pkg/CHRG-113hhrg89763/html/CHRG-113hhrg89763.htm

Robinson, R. A. (2007). Crop histories. Sharebooks Pub.

Rolak, B. J. (1975). General Miles' Mirrors: The Heliograph in the Geronimo Campaign of 1886. The Journal of Arizona History, 16(2), 145–160.

Roodman, D. (2015). The risk of geomagnetic storms to the grid. 56.

Salmeron, J., Wood, K., & Baldick, R. (2004). Analysis of Electric Grid Security Under Terrorist Threat. IEEE Transactions on Power Systems, 19(2), 905–912. https://doi.org/10.1109/TPWRS.2004.825888

Schainker, R., Douglas, J., & Kropp, T. (2006). Electric utility responses to grid security issues. IEEE Power and Energy Magazine, 4(2), 30–37.

Schaul, T., Togelius, J., & Schmidhuber, J. (2011). Measuring intelligence through games. ArXiv Preprint ArXiv:1109.1314.

Shahidehpour, M., & Khodayar, M. (2013). Cutting campus energy costs with hierarchical control: The economical and reliable operation of a microgrid. IEEE Electrification Magazine, 1(1), 40–56.

Silver, W. H. (2004). Ham Radio for Dummies. Wiley Publishing, Inc.

Space Studies Board (Ed.). (2008). Severe Space Weather Events—Understanding Societal and Economic Impacts: A Workshop Report. National Academies Press.

Sridhar, S., Hahn, A., & Govindarasu, M. (2012). Cyber–Physical System Security for the Electric Power Grid. Proceedings of the IEEE, 100(1), 210–224. https://doi.org/10.1109/JPROC.2011.2165269

Sterling, C. H. (2008). Military Communications: From Ancient Times to the 21st Century. ABC-CLIO.

Ten, C.-W., Manimaran, G., & Liu, C.-C. (2010). Cybersecurity for critical infrastructures: Attack and defense modeling. IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, 40(4), 853–865.

Turchin, A., & Denkenberger, D. (2018a). Classification of global catastrophic risks connected with artificial intelligence. AI & SOCIETY. https://doi.org/10.1007/s00146-018-0845-5

Turchin, A., & Denkenberger, D. (2018b). Global catastrophic and existential risks communication scale. Futures, 102, 27–38. https://doi.org/10.1016/j.futures.2018.01.003

Tzezana, R. (2016). Scenarios for crime and terrorist attacks using the internet of things. European Journal of Futures Research, 4, 18. https://doi.org/10.1007/s40309-016-0107-z

Ulieru, M. (2007). Design for resilience of networked critical infrastructures. 540–545. IEEE.

Umbach, F. (2013, June 29). World Review | Energy infrastructure targeted as cyber attacks increase globally. Retrieved August 8, 2019, from https://web.archive.org/web/20130629041842/https://worldreview.info/content/energy-infrastructure-targeted-cyber-attacks-increase-globally

Watts, D. (2003). Security & Vulnerability in Electric Power Systems. North American Power Symposium, 8.

Wu, F. F., Moslehi, K., & Bose, A. (2005). Power system control centers: Past, present, and future. Proceedings of the IEEE, 93(11), 1890–1908.

Zerriffi, H., Dowlatabadi, H., & Strachan, N. (2002). Electricity and conflict: Advantages of a distributed system. The Electricity Journal, 15(1), 55–65.