Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?


As part of a Centre for Effective Altruism (CEA) grant, I have updated the cost-effectiveness of preparing for agricultural catastrophes such as nuclear winter (previous analysis here). This largely involves planning and research and development of alternate foods (roughly those not dependent on sunlight, such as mushrooms, natural gas digesting bacteria, and extracting food from leaves). I have refined a model that uses Monte Carlo (probabilistic) sampling to estimate uncertain results using open source software (Guesstimate), incorporating an earlier model of artificial general intelligence safety (hereafter AI) cost-effectiveness. A major change is broadening the routes to far future impact from only loss of civilization and non-recovery to include making other catastrophes more likely (e.g. totalitarianism) or worse values ending up in AGI. Additional changes include accounting for moral hazard, performing a survey of global catastrophic risk (GCR) researchers for key parameters, and using better behaved distributions in the AI model (increasing the cost-effectiveness of AI by a factor of two).

Overall, alternate foods perform about an order of magnitude more favorably relative to AI than in the previous analysis, with the ratio of alternate food cost-effectiveness to AI at the margin varying from ~3x for the 100 millionth dollar to ~300x for the margin now. This corresponds to ~60% confidence of greater cost-effectiveness than AI for the 100 millionth dollar, and ~95% confidence of greater cost-effectiveness than AI at the margin now. Anders Sandberg's version of the model produced ~80% and ~100% confidence, respectively.
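The confidence levels quoted above come from comparing Monte Carlo samples of the two cost-effectiveness distributions. A minimal sketch of that comparison in Python (the lognormal parameters below are illustrative placeholders, not the actual Guesstimate model's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative lognormal cost-effectiveness distributions
# (far-future benefit per dollar); NOT the model's real parameters.
alt_foods = rng.lognormal(mean=0.0, sigma=1.5, size=n)
ai_margin = rng.lognormal(mean=-2.0, sigma=1.5, size=n)

# Ratio of mean cost-effectivenesses, and the probability that a
# random draw for alternate foods beats a random draw for AI.
ratio_of_means = alt_foods.mean() / ai_margin.mean()
p_better = (alt_foods > ai_margin).mean()

print(f"ratio of mean cost-effectiveness: {ratio_of_means:.1f}")
print(f"P(alt foods more cost-effective): {p_better:.0%}")
```

With heavy-tailed distributions like these, the ratio of means and the probability of superiority can tell quite different stories, which is why the post reports both.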

Because the agricultural catastrophes could happen immediately, and because existing expertise relevant to alternate foods could be co-opted by charitable giving, it is likely optimal to spend most of the $100 million in the next few years. I continue to believe that AI is extremely important, and do not advocate a reduction in AI funding. As before, both AI and alternate foods save lives in the present generation orders of magnitude more cheaply than global poverty interventions. So I believe one source of more funding for both should be those people who do not highly value the long-term future. Having alternate foods as a top priority would be a significant realignment of focus in the X risk community, so I invite more feedback and discussion (including playing with the model).1

Disclaimer/Acknowledgements: I would like to acknowledge CEA for funding the EA grant to perform research on solutions to agricultural catastrophes, Ozzie Gooen for developing Guesstimate, the Oxford Prioritisation Project for the AI model, and Joshua Pearce, Anders Sandberg, and Owen Cotton-Barratt for reviewing content. Special thanks go to Finan Adamson, who presented an earlier model at EA Global San Francisco 2018 with a poster. Opinions are my own, and this is not the official position of CEA, the Future of Humanity Institute, the Global Catastrophic Risk Institute, or the Alliance to Feed the Earth in Disasters (ALLFED).


The greatest catastrophic threat to global agriculture is full-scale nuclear war between the US and Russia, with corresponding burning of cities and blocking of the sun for 5-10 years. The purchasing power parity of an economy is a proxy for its combustible material, and this is now greater for China than for the US. Also, China may have a larger economy now than NATO plus the Warsaw Pact did in the 1980s. Therefore, even though China only has approximately 300 nuclear weapons, an exchange with Russia or the US could potentially block the sun: thousands of nuclear weapons could come from the US or Russia, and the hundreds of Chinese nuclear weapons would likely hit the densest areas in the US or Russia. The obvious intervention is prevention of nuclear war, which would be the best outcome. However, it is not neglected, as it has been worked on for many decades and is currently funded at billions of quality-adjusted dollars per year. The next most obvious solution is storing food, which is far too expensive (~tens of trillions of dollars) to have competitive cost-effectiveness (it would also take many years, so it would not protect us right away, and it would exacerbate current malnutrition). I have posted before about getting prepared for alternate foods (roughly those not dependent on sunlight that exploit biomass or fossil fuels). This could save expected lives in the present generation for $0.20 to $400 each, considering only 10% global agricultural shortfalls like the year without a summer in 1816 caused by a volcanic eruption, and would be even more cost-effective if sun-blocking scenarios were considered. Of course alternate foods would not save the lives of those people directly impacted by the nuclear weapons, potentially hundreds of millions. But since about 6 billion people would die with our current ~half a year of food storage if the sun were blocked for 5 years, alternate foods could solve ~90% of the problem.

Current awareness of alternate foods is relatively low: about 700,000 people globally have heard about the concept, based on impression counters for the ~10 articles, podcasts, and presentations for which there were data, including Science (out of more than 100 media mentions). Also, many of the technologies need to be better developed. Planning, research, and development are three interventions that could dramatically increase the probability of success of feeding everyone, each costing in the tens of millions of dollars. This post analyzes the cost-effectiveness of alternate foods from a long-term perspective. It is generally thought to be very unlikely that agricultural catastrophes such as nuclear war with the burning of cities (nuclear winter), a super volcanic eruption, or a large asteroid/comet impact would directly cause human extinction.2 However, there is significant probability that, by blocking the sun for about 5 years, these catastrophes could cause the collapse of civilization. Reasons that civilization might not recover include: easily accessible fossil fuels and minerals are exhausted, we might not have the stable climate of the last 10,000 years, or we might lose trust or IQ permanently because of the trauma and genetic selection of the catastrophe. If the loss of civilization persists long enough, a natural catastrophe such as a super volcanic eruption or an asteroid/comet impact could cause the extinction of humanity. Another route to far future impact is the trauma associated with the catastrophe making future catastrophes more likely, such as global totalitarianism. A further route is that worse values caused by the catastrophe could be locked in by AGI.

AI has been a top priority in the X risk community, and EAs have been an important part of raising awareness and funding for this cause. I seek to compare the cost-effectiveness of alternate foods with AI to see if alternate foods should also be a top priority. The Guesstimate model for AI cost-effectiveness was developed by the Oxford Prioritisation Project (which uses input from Owen Cotton-Barratt's and Daniel Dewey's model). I use a subset of this model, because I do not try to quantify the absolute value of the far future (measuring instead impact on the far future as percentage saved/improved). I do not discuss the assumptions in the AI model here, but I did use better behaved functions to produce more reasonable results (like removing negative probabilities of X risk), and this increased the cost-effectiveness of AI by a factor of two. Another possible AI model to compare to would be Michael Dickens', but this is future work.

Updated model

Table 1 shows the key input parameters. The structure of the model is very similar to before; I will discuss the key updates to the model.

Table 1. Input variables

Barrett 2013 analyzes only inadvertent full-scale nuclear war (attacking when thinking you are being attacked). Many fear that, with the current leaders of Russia and the United States, an intentional strike has a significant probability. There is also the possibility of an accidental nuclear explosion that could escalate, and many other routes to nuclear war. However, others argue that the lack of full-scale nuclear war in the last 72 years should update the probability distribution downward. Technically there has been one nuclear war in the last 73 years (1.4% per year, though not directly comparable). Including Russia-China or US-China exchanges significantly increases the probability of "full-scale" nuclear war. Nine out of 13 times that there has been a switch in which is the most militarily powerful country in the world, there has been war (though we should not take that literally for the current situation). So I think Barrett's original calculation is reasonable.
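The 1.4%-per-year figure is a simple historical frequency; as a quick check, here is that arithmetic, with Laplace's rule of succession added purely as an illustration of how such a base rate shifts under a different estimator (the rule is not part of the post's model):

```python
# Naive frequency estimate: one nuclear war (1945) in 73 years.
wars, years = 1, 73
annual_rate = wars / years
print(f"frequency estimate: {annual_rate:.1%} per year")  # ~1.4% per year

# Laplace's rule of succession, (successes + 1) / (trials + 2),
# gives the same order of magnitude.
laplace = (wars + 1) / (years + 2)
print(f"Laplace estimate:   {laplace:.1%} per year")
```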

A number of catastrophic events could cause a roughly 10% global agricultural shortfall, including a medium-sized asteroid/comet impact, a large but not super volcanic eruption (like the one that caused the year without a summer in 1816), regional nuclear war (for example, India-Pakistan), abrupt regional climate change (10°C in a decade, which has happened multiple times in the past), complete global loss of bees as pollinators, a super crop pest or pathogen, and coincident extreme weather resulting in multiple breadbasket failures. According to a UK government study, the latter scenario has a ~1% chance per year now, increasing throughout the century. Though it would be technically straightforward to reduce food consumption by 10% by making less food go to waste, animals, and biofuels, prices would go so high that those in poverty may not be able to afford food. We found an expected 500 million lives lost in such a catastrophe. There could also be extreme global climate change of >5°C that happens over a century (slow in comparison to "abrupt" climate change). This could make conventional agriculture impossible in the tropics, which could be a larger than 10% agricultural impact (depending on how much agriculture increased at high latitudes), but since it would occur over ~1 century, the impact might be similar to the abrupt 10% shortfalls. Other events would not directly affect food production, but could still have similar impacts on human nutrition; these include a conventional world war or pandemic that disrupts global food trade and causes famine in food-importing countries.

Intuitively, one would expect the probability of 10% shortfalls to be significantly greater than that of full-scale nuclear war; there are many more potential combinations for regional nuclear war than for full-scale. My mean estimate is 3% per year for 10% agricultural shortfalls.

I sent a survey to 31 GCR researchers and got seven responses (including my own). The questions involved the reduction in far future potential due to the catastrophes, the contribution of ALLFED so far, and the additional contribution of spending roughly $100 million to get prepared.

The mean estimate of these GCR researchers was a 17% reduction in the long-term future of humanity due to full-scale nuclear war if there were no ALLFED, which compares to a 30% estimate by 80,000 Hours. The 10% food shortfall catastrophes could result in instability and full-scale nuclear war, or other routes to far future impact. The poll of GCR researchers found a mean 5% reduction in the long-term potential of humanity due to these catastrophes. This is lower than the 80,000 Hours estimate of ~20%.

The survey also indicated that the means of the distributions of percent reduction in far future loss due to ALLFED (and the work done by ALLFED researchers before the organization was officially formed) were 4% and 5% for full-scale nuclear war and 10% agricultural shortfalls, respectively.

Furthermore, the survey indicated that the means of the distributions of percent further reduction in far future loss due to spending $100 million were 17% and 25% for full-scale nuclear war and 10% agricultural shortfalls, respectively.

Moral hazard here would be awareness of a food backup plan making nuclear war more likely or more intense. I think it unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) gives much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, just as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals. I estimate the mean loss in net effectiveness of the interventions for full-scale nuclear war to be 4%. For the 10% agricultural shortfalls, I estimate a mean 2% loss in net effectiveness, because I think the moral hazard would apply less strongly to non-nuclear scenarios, such as coincident extreme weather and volcanic eruptions. I support reducing nuclear stockpiles and have co-authored a paper arguing that the use of more than 100 nuclear weapons on another country, even without retaliation, poses unacceptable environmental blowback.
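The survey means and the moral hazard estimate combine in a simple expected-value chain. A sketch for the full-scale nuclear war branch (the annual war probability is a placeholder, since the post defers to Barrett 2013 for it; the 17% figures and the 4% moral hazard are the means quoted above):

```python
# All parameters for the full-scale nuclear war branch of the model.
p_war = 0.01            # PLACEHOLDER annual probability; see Barrett 2013
far_future_loss = 0.17  # survey mean: far future loss from full-scale war
mitigation = 0.17       # survey mean: further reduction from spending $100M
moral_hazard = 0.04     # estimated mean loss in net effectiveness

# Expected fraction of far-future value preserved per year of preparedness.
annual_benefit = p_war * far_future_loss * mitigation * (1 - moral_hazard)
print(f"expected far-future loss averted: {annual_benefit:.5%} per year")
```

Dividing such a benefit by the ~$100 million cost gives the cost-effectiveness figures that the model then compares against AI.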


As before, in order to convert average cost-effectiveness to marginal, I assume that returns to donations are logarithmic, which results in the marginal cost-effectiveness being just one divided by the cumulative money spent. Ratios of mean cost-effectivenesses are reported in Table 2.3 With the new numbers, comparing to AI at the margin, I find the 100 millionth dollar on alternate food is 3 times more cost-effective, the average $100 million on alternate food is 15 times more cost-effective, and the marginal dollar now on alternate food is 300 times more cost-effective. One way of thinking about the high marginal cost-effectiveness now is spending some money to figure out if more money is justified: value of information. These ratios are about an order of magnitude higher than in the 2017 version. This is largely driven by the greater long-term future impact of the catastrophes (compared to only considering loss of civilization and non-recovery). Given orders-of-magnitude uncertainty, the probabilities that one intervention is more cost-effective than the other are likely more robust than the ratios themselves. With the new numbers, comparing to AI at the margin, I find ~60% probability that the 100 millionth dollar on alternate food is more cost-effective, ~80% probability that the average $100 million on alternate food is more cost-effective, and ~90% probability that the marginal dollar now on alternate food is more cost-effective (see Table 2).
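The logarithmic-returns assumption can be stated compactly: if total benefit is B(x) = k·ln(x) for cumulative spending x, then marginal cost-effectiveness is dB/dx = k/x, i.e. inversely proportional to cumulative spend. A minimal sketch (the constant k and the dollar amounts are illustrative):

```python
def marginal_ce(cumulative_spend, k=1.0):
    """Marginal cost-effectiveness under logarithmic returns:
    if benefit B(x) = k * ln(x), then dB/dx = k / x."""
    return k / cumulative_spend

# Under this assumption, the marginal dollar's cost-effectiveness falls
# 100x between the 1 millionth and the 100 millionth cumulative dollar,
# which is why "marginal dollar now" and "100 millionth dollar" differ
# so much in Table 2.
ratio = marginal_ce(1e6) / marginal_ce(1e8)
print(ratio)
```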

Table 2. Key cost-effectiveness outputs

My personal estimates for these parameters tended to be close to the median of the survey, so using the survey means gives higher cost-effectiveness than my own estimates would. However, I would note that being prepared for agricultural catastrophes might protect against unknown risks, meaning the cost-effectiveness would increase.

The importance, tractability, neglectedness (ITN) framework is useful for screening cause areas. One update from the previous analysis is that, because alternate foods appear relatively more cost-effective now, this would mean they are more tractable than AI, which was my original intuition (versus about the same tractability in my previous analysis).

Steelmanning the opposition to funding alternate foods

These are generally the same as before. One addition is the possibility of a public relations debacle that hurts the field, which could be considered within the moral hazard parameter. I think this indicates that we should be cautious with the mass media, but I doubt it is a reason not to do the work at all.

Anders' model

Anders' model differed from mine in a number of ways. His mean cost-effectiveness was similar to mine (though he did not take into account the possibility of a conflict with China being full-scale nuclear war), but because of the smaller variance in his distributions, there was greater confidence that alternate foods are more cost-effective than AI (~80% at the 100 millionth dollar, and ~100% for the marginal dollar now). Another large difference is that I (and the survey) found that 10% agricultural shortfalls have similar cost-effectiveness for the far future as full-scale nuclear war, because the greater probability of these catastrophes counteracted their smaller far future impact. However, Anders rated the cost-effectiveness of the 10% shortfalls as two orders of magnitude lower than for full-scale nuclear war. I tend to be somewhere in between, with my intuition that the far future impact scales more strongly than linearly with the short-term impact.


Since my last post, there has been significant support from EA in this cause area, most notably Adam Gleave through the EA lottery. A forthcoming post will explain the near-term projects we think are the highest priority. We have not yet submitted this model for publication, so your feedback can still influence the paper. As before, both AI and alternate foods save lives in the present generation orders of magnitude more cheaply than global poverty interventions. One way of quantifying the urgency of alternate foods is the value of accelerating full preparedness. At the bottom of the model, a calculation shows that each day of acceleration of preparedness could increase the value of the far future by 0.000002%-0.002%.
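The per-day acceleration figure follows from dividing an annual expected benefit by 365. A sketch of that conversion (the annual figure here is an illustrative placeholder chosen to land inside the post's quoted 0.000002%-0.002% range, not the model's actual value):

```python
# If preparedness averts an expected fraction of far-future value per
# year of catastrophe exposure, one day of acceleration is worth
# roughly that annual figure divided by 365.
annual_benefit_pct = 0.03  # PLACEHOLDER: % of far-future value per year
per_day = annual_benefit_pct / 365
print(f"~{per_day:.5f}% of far-future value per day of acceleration")
```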


1 You can change numbers while viewing the model to see how outputs change, but they will not save. If you want to save, you can make a copy of the model. Click View, then Visible, to show arrows. Mouse over cells to see comments. Click on a cell to see its equation.

2 Though there were concerns that full-scale nuclear war would kill everyone via radioactivity, it turns out that most of the radioactivity rains out within a few days. One possible mechanism for extinction would be that hunter-gatherers would die out because they do not have food storage, while people in developed countries would have food storage but might not be able to figure out how to go back to being hunter-gatherers.

3 Ratios of means require manual updates in Guesstimate, which I note in all caps in the model.