Should we be spending no less on alternate foods than AI now?

Summary: As part of a Centre for Effective Altruism (CEA) grant, I have estimated the cost effectiveness of preparing for agricultural catastrophes such as nuclear winter. This largely involves planning plus research and development of alternate foods (roughly, foods not dependent on sunlight, such as mushrooms, natural-gas-digesting bacteria, and food extracted from leaves). Sun-blocking catastrophes could cause the collapse of civilization, and there are a number of reasons why humanity might not recover. Not recovering from the collapse of civilization is one form of existential (X) risk, because humanity would not fulfill its potential. I have developed a model that uses Monte Carlo (probabilistic) sampling to estimate uncertain results in open-source software (Guesstimate), and it incorporates an earlier model of artificial general intelligence safety (hereafter AI) cost effectiveness. Even with a number of assumptions unfavorable to alternate foods, spending approximately $100 million on alternate foods has a cost effectiveness similar to AI safety's. Because agricultural catastrophes could happen immediately, and because charitable giving could co-opt existing expertise relevant to alternate foods, it is likely optimal to spend most of this money in the next few years. I continue to believe that AI is extremely important, and I do not advocate a reduction in AI funding. I think this alternate foods funding gap could be filled by large and small EA donors with additional capacity, and possibly by donors who are concerned about X risk but find claims about AI implausible. The bigger picture is that even more funding is justified for both AI and alternate foods, even from the perspective of the present generation, let alone future generations.
Having alternate foods as a top priority would be a significant realignment of focus in the X risk community, so I invite more feedback and discussion (including playing with the model).[1]

Disclaimer/Acknowledgements: I would like to acknowledge CEA for funding the EA grant to perform research on solutions to agricultural catastrophes, Ozzie Gooen for developing Guesstimate, the Oxford Prioritisation Project for the AI model, and Joshua Pearce, Alexey Turchin, Michael Dickens, Owen Cotton-Barratt, Finan Adamson, Anders Sandberg, Allen Hundley, and Anthony Barrett for reviewing content. Opinions are my own, and this is not the official position of CEA, the Global Catastrophic Risk Institute, or the Alliance to Feed the Earth in Disasters (ALLFED).


The greatest catastrophic threat to global agriculture is full-scale nuclear war between the US and Russia, with the corresponding burning of cities blocking the sun for 5-10 years. The obvious intervention is preventing nuclear war, which would be the best outcome. However, it is not neglected: it has been worked on for many decades and is currently funded at billions of dollars per year, quality adjusted. The next most obvious solution is storing food, which is far too expensive (~tens of trillions of dollars) to have competitive cost effectiveness (it would also take many years, so it would not protect us right away, and it would exacerbate current malnutrition). I have posted before about getting prepared with alternate foods (roughly, foods not dependent on sunlight that exploit biomass or fossil fuels). This could save expected lives in the present generation for $0.20 to $400 each, counting only 10% global agricultural shortfalls like the year without a summer in 1816 caused by a volcanic eruption, and it would be even more cost effective if sun-blocking scenarios were considered. Of course, alternate foods would not save the lives of the people directly impacted by the nuclear weapons, potentially hundreds of millions. But since about 6 billion people would die given our current ~half a year of food storage if the sun were blocked for 5 years, alternate foods would solve ~90% of the problem. Current awareness of alternate foods is relatively low: about 700,000 people globally have heard of the concept, based on impression counters for the ~10 articles, podcasts, and presentations for which there were data, including Science (out of more than 100 media mentions). Also, many of the technologies need to be better developed.
Planning, research, and development are three interventions that could dramatically increase the probability of successfully feeding everyone, each costing in the tens of millions of dollars. This post analyzes the cost effectiveness of alternate foods from an X risk perspective. It is generally thought to be very unlikely that agricultural catastrophes such as nuclear war with the burning of cities (nuclear winter), a super volcanic eruption, or a large asteroid/comet impact would directly cause human extinction.[2] However, there is a significant probability that, by blocking the sun for about 5 years, these catastrophes could cause the collapse of civilization. One definition of the collapse of civilization involves short-term focus, collapse of long-distance trade, widespread conflict, and loss of government (Coates, 2009). Not recovering from the collapse of civilization is one form of X risk, because humanity would not fulfill its potential. Reasons that civilization might not recover include: easily accessible fossil fuels and minerals are exhausted; we might not have the stable climate of the last 10,000 years; we might lose trust or IQ permanently because of the trauma and genetic selection of the catastrophe; an endemic disease could prevent high human population density; and a permanent loss of grains (e.g. from an engineered crop disease that affects the entire grass family) could preclude high human population density. If the loss of civilization persists long enough, a natural catastrophe such as a super volcanic eruption or an asteroid/comet impact could cause the extinction of humanity.

AI has been a top priority in the X risk community, and EAs have been very important in raising awareness and funding for this cause. I seek to compare the cost effectiveness of alternate foods with AI to see whether alternate foods should also be a top priority. The Guesstimate model for AI cost effectiveness was developed by the Oxford Prioritisation Project (using input from Owen Cotton-Barratt's and Daniel Dewey's model). I use a subset of this model, because I do not try to quantify the value of the far future, and I do not discuss the assumptions in the AI model here. Another possible AI model to compare against is Michael Dickens', but that is future work.

Model of alternate foods

I implemented the model in Guesstimate here. Be aware that, given the large uncertainties, some values could differ by a factor of two each time you load the model (I have noted this in the model and shown such values in grey in the tables). Ozzie Gooen (the developer of Guesstimate) is concerned about runtime and Chrome compatibility if the sample size is increased, so I may need to move back to Analytica for publication repeatability. But for now, getting the order of magnitude right is all that is needed.

I am interested in further feedback on the assumptions. Most of the numbers are closely analogous to numbers in the published literature, so I will focus here on the numbers with less support. Table 1 has the input variables for the sun-blocking scenarios.[3]

Table 1. Input variables for the sun-blocking scenarios

| Input variable | 5th percentile | 95th percentile | Source |
|---|---|---|---|
| Probability per year of full-scale nuclear war | | | Barrett et al. 2013 |
| Probability of agricultural collapse given full-scale nuclear war | | | Denkenberger 2017 with variance added |
| Probability of loss of civilization given agricultural collapse | | | Informal polling at conferences has yielded >50% of people thinking civilization will collapse without alternate foods |
| Probability of not recovering civilization | | | Estimate of poll at "Existential Risk to Humanity" in Gothenburg, Sweden 2017 |
| Cost of planning and R&D for alternate foods ($ million) | | | Denkenberger 2016 |
| Time horizon of effectiveness of planning and R&D for alternate foods (years) | | | Denkenberger 2017 |
| Probability alternate foods with current preparation will prevent collapse of civilization | | | Denkenberger 2017 |
| Probability alternate foods with planning and R&D will prevent collapse of civilization | | | Denkenberger 2017 |

For the probability of full-scale nuclear war, Barrett et al. 2013 analyzes only accidental nuclear war, using a fault tree and looking at close calls. Many fear that, with the current leaders of Russia and the United States, an intentional strike has a significant probability, so I am being conservative (unfavorable to alternate foods) by ignoring this. This also does not include super volcano or asteroid/comet risk, but those are relatively small. For the probability of agricultural collapse given full-scale nuclear war, many have assumed this is near 100%; some conservative modeling of mine (Denkenberger 2017) produced roughly 20% probability, and I have added some variance around this. The probability of collapse of civilization given the collapse of agriculture is based on ~10 workshop participants at EA Global 2016 in San Francisco and ~10 at the International Disaster Risk Reduction conference in Davos, Switzerland, in 2016: most thought civilization would not survive the loss of agriculture for ~5 years. The probability of not recovering civilization is based on an estimate from an oral poll of ~20 participants at "Existential Risk to Humanity" in Gothenburg, Sweden, in 2017; I have reduced the median somewhat and increased the variation. I used the same source for the probability of loss of civilization given a 10% agricultural shortfall in Table 2. For the probability of saving civilization, I use the probability from the papers of feeding everyone.[4] It is much easier to save civilization than to feed everyone, so this is quite conservative.
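The multiplication chain behind the model can be sketched in a few lines of Python. This is a minimal illustration, not the actual Guesstimate model: the percentile intervals below are placeholders (only the ~1%/yr war probability, the ~20% agricultural-collapse figure, and the ~20-year horizon appear in the text; the rest are invented for the sketch).

```python
import math
import random

random.seed(0)

def lognormal_from_percentiles(p5, p95):
    """Return a sampler for a lognormal defined by its 5th/95th percentiles,
    the same parameterization Guesstimate uses."""
    mu = (math.log(p5) + math.log(p95)) / 2.0
    sigma = (math.log(p95) - math.log(p5)) / (2 * 1.645)  # z(0.95) ~ 1.645
    return lambda: random.lognormvariate(mu, sigma)

# Illustrative 90% intervals -- placeholders, not the model's real inputs.
p_war         = lognormal_from_percentiles(0.002, 0.03)  # full-scale war / yr
p_ag_collapse = lognormal_from_percentiles(0.10, 0.40)   # ag collapse | war
p_lose_civ    = lognormal_from_percentiles(0.30, 0.90)   # lose civ | collapse
p_no_recover  = lognormal_from_percentiles(0.05, 0.50)   # never recover
gain          = lognormal_from_percentiles(0.10, 0.50)   # added P(save civ) from prep
cost          = 100e6                                    # ~$100 million
horizon       = 20                                       # years of effectiveness

samples = []
for _ in range(10000):
    # Probability of averting existential catastrophe per dollar:
    # chain of conditional probabilities, capped at 1, times the horizon.
    p_averted = (min(p_war(), 1) * min(p_ag_collapse(), 1) *
                 min(p_lose_civ(), 1) * min(p_no_recover(), 1) *
                 min(gain(), 1) * horizon)
    samples.append(p_averted / cost)

samples.sort()
mean = sum(samples) / len(samples)
print(f"mean: {mean:.2e} per $, 5th: {samples[500]:.2e}, 95th: {samples[9500]:.2e}")
```

As in the model, the heavy right tails of the lognormals make the mean land well above the median.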

A number of catastrophic events could cause a roughly 10% global agricultural shortfall, including a medium-sized asteroid/comet impact, a large but not super volcanic eruption (like the one that caused the year without a summer in 1816), regional nuclear war (for example, India-Pakistan), abrupt regional climate change (10°C in a decade, which has happened multiple times in the past), complete global loss of bees as pollinators, a super crop pest or pathogen, and coincident extreme weather resulting in multiple breadbasket failures. Though it would be technically straightforward to reduce food consumption by 10% by diverting less food to waste, animals, and biofuels, prices would go so high that the poor might not be able to afford food. We found an expected 500 million lives lost in such a catastrophe. There could also be extreme global climate change of >5°C occurring over a century (slow in comparison to "abrupt" climate change). This could make conventional agriculture impossible in the tropics, which would be a larger than 10% agricultural impact, but since it would occur over ~1 century, the impact might be similar to the abrupt 10% shortfalls. Other events would not directly affect food production but could still have similar impacts on human nutrition; these include a conventional world war or a pandemic that disrupts global food trade and causes famine in food-importing countries. Though significantly less likely, it is possible that these catastrophes could result in instability and full-scale nuclear war, possibly collapsing civilization. The preparation for 10% agricultural shortfalls would be very similar to that for agricultural collapse, especially because the former could lead to the latter. Table 2 lists the variables for this scenario.
The probability per year of a 10% agricultural shortfall leaves many risks unquantified, so it is conservative. The other variables are the same as in the sun-blocking case.

Table 2. Input variables for the 10% global agricultural shortfalls

| Input variable | 5th percentile | 95th percentile | Source |
|---|---|---|---|
| Probability per year of 10% agricultural shortfall | | | Denkenberger 2016 |
| Probability of loss of civilization given 10% agricultural shortfall | | | Estimate of poll at "Existential Risk to Humanity" in Gothenburg, Sweden 2017 |
| Probability of preventing collapse of civilization from 10% agricultural shortfall | | | In the 10% shortfall, alternate foods have some chance of preventing worse outcomes like full-scale nuclear war, but even if full-scale nuclear war occurs, alternate foods reduce the chance of losing civilization |


Table 3 shows the mean and probability bounds of the output variables.

Table 3. Risks and cost effectiveness for the average of spending $100 million (grey values have lower repeatability)

| Output variable | 5th percentile | 95th percentile |
|---|---|---|
| Probability of existential catastrophe per year from full-scale nuclear war | | |
| Probability of averting existential catastrophe per $ for full-scale nuclear war | | |
| Probability of existential catastrophe per year from 10% agricultural shortfall | | |
| Probability of averting existential catastrophe per $ for 10% agricultural shortfall | | |
| Probability of averting existential catastrophe per $ overall | | |


The AI model produced a probability of averting existential catastrophe per dollar of 4E-13 to 4E-11, with a mean (expectation) of 8E-12. This is significantly smaller variance than for alternate foods, which was the opposite of what I expected. It could be that the general population would estimate AI as more uncertain than nuclear war (e.g. by giving significant weight to the impossibility of AGI), or that even with experts, the AI model should have greater uncertainty (I am not modifying the AI model). The expected cost effectivenesses of alternate foods and AI are similar: in expectation, alternate foods from very small funding up to $100 million is about three times as cost effective as AI at the margin (see Table 4).

In order to justify the full $100 million for alternate foods, the marginal cost effectiveness of alternate foods would need to be competitive with the marginal cost effectiveness of AI. When I looked at the marginal cost effectiveness of each of the three interventions of planning, research, and development, I found little decline in cost effectiveness. However, different amounts of money could be spent in each of these categories, so I would expect some declining cost effectiveness. Returns to donations may be logarithmic fairly generally, which means the marginal cost effectiveness is just one divided by the cumulative money spent. In this case, the marginal cost effectiveness of the last dollar for alternate foods would be about 1/6 of the average cost effectiveness (see the bottom of the Guesstimate model). Then the expected cost effectiveness of alternate foods on the last dollar would be about half as cost effective as marginal AI (see Table 4). If we move to funding alternate foods at the margin right now, we need an estimate of the cumulative money spent on alternate foods. Under $1 million equivalent (mostly volunteer time) has been spent so far directly on this effort, nearly all by the Alliance to Feed the Earth in Disasters (ALLFED) (disclaimer: I cofounded it).[5] Assuming logarithmic returns, the cost effectiveness of the marginal dollar now is about 20 times greater than the average over $100 million. The expected cost effectiveness of the marginal dollar now for alternate foods would then be nearly two orders of magnitude greater than AI's (see Table 4). So there is an even stronger case for a small amount of money now.
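The logarithmic-returns logic can be checked with back-of-envelope arithmetic. This sketch takes only the under-$1 million cumulative spend from the text; the computed ratios land in the same ballpark as the ~1/6 and ~20x figures above (the Guesstimate model's exact inputs differ slightly).

```python
import math

def avg_ce(c0, c1):
    """Average cost-effectiveness per $ between cumulative spends c0 and c1,
    assuming benefit ~ k*ln(spend); k cancels in all the ratios below."""
    return (math.log(c1) - math.log(c0)) / (c1 - c0)

def marginal_ce(c):
    """Marginal cost-effectiveness at cumulative spend c: d/dc [k*ln(c)] = k/c."""
    return 1.0 / c

spent_so_far = 1e6    # ~<$1 million spent to date (from the text)
target       = 100e6  # the proposed ~$100 million

avg = avg_ce(spent_so_far, target)
print("marginal at $100M / average:", marginal_ce(target) / avg)        # ~0.2
print("marginal now / average:     ", marginal_ce(spent_so_far) / avg)  # ~21
```

With logarithmic returns, each order of magnitude of spending buys the same total benefit, which is why the first dollars look so much better than the last.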

But alternate foods do not have to be more cost effective than AI in order to be funded on a large scale. X risk funding in the EA community goes to other causes, notably engineered pandemics. A detailed cost effectiveness comparison with engineered pandemic interventions is future work. Here is a paper on biosecurity with significantly lower cost effectiveness than for AI and alternate foods, but the authors were being very conservative.

Table 4. Mean cost effectiveness (probability of averting existential catastrophe per dollar) and ratios (grey values are extrapolations from detailed estimates)

| | Mean cost effectiveness (probability of averting existential catastrophe per dollar) | Ratio (alternate foods mean divided by AI mean cost effectiveness) |
|---|---|---|
| AI marginal at $3 billion [6] | 8E-12 | 1 |
| Alternate foods average over $100 million | | ~3 |
| Alternate foods marginal at $100 million | | ~0.5 |
| Alternate foods marginal now | | ~60 |

There are additional sources of conservatism for alternate foods. Being prepared for agricultural catastrophes might protect against unknown risks, meaning the cost effectiveness would be even higher. Also, 80,000 Hours estimates that global climate change of >5°C would reduce the future potential of humanity by ~20% through a risk of extinction, worse values, international conflict or social breakdown, and a failure to recover. My model currently has the reduction in future human potential at only ~0.13% given these types of "10%" global agricultural shortfalls (losing civilization and not recovering). Using their numbers would increase the overall cost effectiveness of alternate foods by an order of magnitude just from changing the 10% shortfall numbers. Adding 80,000 Hours' estimate of the possibility of worse values to the sun-blocking scenarios, meaning a 30% reduction in the future potential of humanity, would roughly triple overall cost effectiveness again, making alternate foods ~40 times as cost effective as in my model.

Steelmanning the opposition to funding alternate foods:

The Open Philanthropy Project (OPP) is funding a detailed investigation of the impact of nuclear war. Shouldn't we wait for those results before funding alternate foods? The biggest source of uncertainty in the model here is the chance of nuclear war. The OPP investigation will examine which nuclear detonation scenarios are plausible, but they are not planning on assigning quantitative probabilities to these scenarios. Furthermore, every year we wait to get prepared with alternate foods, we expose ourselves to an additional ~0.01% chance of existential catastrophe. In addition, even if the probability of losing civilization from nuclear war turns out to be zero, spending $100 million on alternate foods would still be competitive with AI from a long-term future perspective. Also, this spending is already highly justified for the present generation even without including nuclear winter risk, so it is likely a no-regrets policy (at least given some uncertainty about what to value, or if the flow-through effects to the far future of saving lives now are significant). Of course AI safety would save expected lives in the present generation, but 1-2 orders of magnitude fewer than alternate foods (see below). This is because an AI catastrophe is likely to kill everyone, while agricultural catastrophes can kill many people without causing an existential catastrophe.

Another steelman is that the estimate of cost effectiveness of AI is too low. Some think that the total existential risk associated with developing highly capable AI systems, bearing in mind all of the work on safety that will be done, is higher than the current 95th percentile of a 7% chance.[7] This could reduce the optimal amount of funding for alternate foods if they have to be competitive with AI, but it would be very unlikely to eliminate additional funding for alternate foods, and alternate foods do not necessarily have to be competitive with AI for it to be optimal to fund them.

A further steelman is that cost effectiveness estimates tend to worsen over time, as GiveWell found for global poverty interventions. This could apply both to AI and to alternate foods, though alternate foods are newer, so one might expect it to apply more to alternate foods. However, given my conservatism, I would expect the estimate of the cost effectiveness of alternate foods to rise over time. Indeed, this has been the case for me over the last few years as I have discovered more catastrophes that alternate foods could ameliorate. In my experience, one has to be conservative to pass peer review at a mainstream journal, as with the biosecurity paper (though admittedly, not all of my inputs here have been peer reviewed).

Less detailed view

The importance, tractability, neglectedness (ITN) framework is useful for screening cause areas. There is debate about using the ITN framework for interventions as well as risks. Indeed, alternate foods can only potentially address 90% of the mortality (and perhaps a similar share of the X risk), but this is within the uncertainty of the analysis. The larger concern is that if we allocate money across all the interventions for a risk, less money should be spent on an individual intervention than on the entire cause area. One could look at the categories of interventions for nuclear winter, which might be: prevent war, eliminate/reduce nuclear weapons, prevent nuclear winter given nuclear war, and adapt to nuclear winter. I have argued that adaptation should focus on alternate foods, so perhaps alternate foods should have 1/5 the funding of the cause. But since 80,000 Hours has listed about 10 interventions, let's say alternate foods should have 1/10 the funding a priori. This means spending $100 million on alternate foods should be equivalent in cost effectiveness to spending $1 billion on the nuclear winter cause area.

The less detailed view could look at just importance and neglectedness. According to these models, AI poses a 4% expected risk of existential catastrophe this century (bearing in mind all of the work on safety that will be done), and agricultural catastrophes pose a 1.6% expected risk (of which alternate foods can only address about 90%). With equal tractability and the same level of funding for the cause areas, one would expect AI to be about three times as cost effective as alternate foods because of the greater importance of AI. Table 5 shows different levels of funding and assumes logarithmic returns to investment (see the bottom of the Guesstimate model). For the alternate foods cost effectiveness averaged over $100 million (from $10 million to $1 billion equivalent for the nuclear winter cause area), this less detailed view predicts that alternate foods should be 5 times as cost effective as AI marginal at $3 billion. Since the result in Table 3 was 3 times as cost effective as AI, this implies similar tractability of AI and alternate foods. This surprised me, because I expected alternate foods to be something like an order of magnitude more tractable than AI: there are clear ways to make progress in alternate foods, and alternate foods are not talent constrained. This could indicate that all the ways I am being conservative with alternate foods really add up. A big one is that I assume the interventions are only valuable for about 20 years instead of a century as for AI (though it is true that if AGI comes soon, alternate foods will be moot). If the conservatism is removed, it could be that alternate foods at the $100 million spending level are significantly more cost effective than I am claiming, and therefore significantly more cost effective than AI at the $3 billion spending level.

This less detailed view can also be used to estimate the marginal cost effectiveness of the one hundred millionth dollar and of a dollar right now, assuming logarithmic returns. The $100 million marginal dollar for alternate foods is about 5 times less cost effective than $100 million of funding on average (see the bottom of the Guesstimate model). Since this is $1 billion equivalent for the nuclear winter cause area, it yields the same cost effectiveness as AI (see Table 5). The marginal dollar now ($1 million for alternate foods, or $10 million equivalent for the nuclear winter cause area) is about 20 times as cost effective as $100 million of funding on average, which yields 100 times the cost effectiveness of AI (see Table 5). This is roughly consistent with my detailed estimates of the cost effectiveness of the marginal dollar now.
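The arithmetic of this less detailed view can be reproduced directly from the numbers in the text (4% AI risk funded at $3 billion, 1.6% x 90% addressable for agricultural catastrophes, logarithmic returns between $10 million and $1 billion of cause-area-equivalent spending):

```python
import math

# Expected existential risk this century (from the text); funding in $ millions
# of cause-area-equivalent spending. Units cancel in the ratios.
ai_risk, ai_funding = 0.04, 3000.0   # AI, marginal at $3 billion
af_risk = 0.016 * 0.9                # agricultural catastrophes, ~90% addressable

def marginal(risk, spend):
    # Logarithmic returns: benefit ~ risk * ln(spend), so d/dc = risk / spend.
    return risk / spend

def average(risk, c0, c1):
    return risk * math.log(c1 / c0) / (c1 - c0)

ai = marginal(ai_risk, ai_funding)
print("average $10M->$1B vs AI:", average(af_risk, 10, 1000) / ai)  # roughly 5
print("marginal at $1B vs AI:  ", marginal(af_risk, 1000) / ai)     # roughly 1
print("marginal at $10M vs AI: ", marginal(af_risk, 10) / ai)       # roughly 100
```

These are the three alternate-foods rows of Table 5: ~5x, ~1x, and ~100x the cost effectiveness of AI at the margin.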

Table 5. Less detailed view of cost effectiveness (only importance and neglectedness, assuming equal tractability) (grey because of extrapolations)

| | Relative cost effectiveness with AI = 1 |
|---|---|
| AI marginal at $3 billion | 1 |
| Alternate foods average over $100 million (from $10 million to $1 billion equivalent for nuclear winter cause area) | ~5 |
| Alternate foods marginal at $100 million ($1 billion equivalent for nuclear winter cause area) | ~1 |
| Alternate foods marginal now ($10 million equivalent for nuclear winter cause area) | ~100 |


Interestingly, if we assume that an existential catastrophe from AI means the loss of 9 billion humans, the cost effectiveness of AI now is $5-$900 per expected life saved (very bottom of the Guesstimate model). This is not nearly as cost effective as alternate foods, but it is significantly lower cost than GiveWell's estimates for global health interventions: $900-$7,000 per life saved. Since AI appears to be underfunded from the present generation perspective, it would be extremely underfunded when taking into account future generations. If this were corrected, then in order to have similar cost effectiveness to alternate foods, more funding for alternate foods would be justified. Indeed, to fund alternate foods just from a current generation perspective at a level of cost effectiveness similar to global poverty interventions, billions of dollars of alternate food funding would be justified. Much more funding would be justified if valuing future generations. It is somewhat depressing that, while we in the X risk community are generally motivated by future generations, we cannot even get work on these risks funded at the level that would be justified by the present generation alone, a point made in the book Catastrophe: Risk and Response.

Timing of funding

If one agrees that alternate foods should be in the EA budget for X risks, the next question is how to allocate funding to the different causes over time. For AI, there are arguments both for funding now and for funding later. For alternate foods, since most of the catastrophes could happen right away, there is significantly greater urgency to fund now. Furthermore, it is relatively more effective to scale up the funding quickly because we can, through requests for proposals, co-opt relevant expertise that already exists (e.g. for the different foods, such as biofuel experts who know how to turn fiber into sugar). Since I have not monetized the value of the far future, I cannot use traditional cost effectiveness metrics such as the benefit-to-cost ratio, net present value, payback time, or return on investment. However, in the case of saving expected lives in the present generation, the return on investment was 100% to 5,000,000% per year. This suggests that the $100 million or so for alternate foods should be mostly spent in the next few years to optimally reduce X risk (a smaller amount would maintain preparedness in the future). Since AI safety funding is now about $10 million per year, this would mean more funding for alternate foods than for AI in the near term. I think this alternate foods funding gap could be filled by large and small EA donors with additional capacity. Also, donors who are concerned about X risk (or just present generations) but find claims about AI implausible could contribute (but I don't want to let those concerned about AI off the hook!). The formal optimization including other causes, such as asteroid deflection and terrestrial/space refuges to repopulate the earth, is future work and is related to work at GCRI including value of information and integrated assessment.
Other ways of contributing to the alternate foods effort besides donating will be the subject of a future post.

Sensitivity analysis

The greatest uncertainty is the probability of nuclear war. I performed a sensitivity analysis on this for the present generation here. Basically, you can scale the cost effectiveness numbers up or down by the ratio of the nuclear war probability you think is most accurate to the current expectation of ~1% per year.
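Because the war probability enters the chain as a single multiplicative factor, the rescaling is linear. A concrete sketch (the baseline per-dollar figure here is a placeholder, not a model output):

```python
baseline_ce = 8e-12     # illustrative probability of averting catastrophe per $
baseline_p_war = 0.01   # the ~1%/yr expectation from the text

def rescaled_ce(your_p_war):
    """Scale the cost-effectiveness linearly by your own war probability."""
    return baseline_ce * your_p_war / baseline_p_war

print(rescaled_ce(0.002))  # believing 0.2%/yr scales the result down 5x
```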


[1] You can change numbers in the viewing model to see how the outputs change, but they will not save. If you want to save, you can make a copy of the model. Click View, then Visible, to show the arrows. Mouse over cells to see comments. Click on a cell to see its equation.

[2] Though there were concerns that full-scale nuclear war would kill everyone through radioactivity, it turns out that most of the radioactivity is rained out within a few days. One possible mechanism for extinction would be that hunter-gatherers die out because they do not have food storage, while people in developed countries have food storage but might not be able to figure out how to go back to being hunter-gatherers.

[3] Most distributions are lognormal, but some are beta to avoid probabilities greater than 100%. With a lognormal, the median is the geometric mean of the ends (multiply the 5th and 95th percentiles and take the square root) (as they say in statistics, the means justify the ends :) ). Note that the mean is generally a lot higher than the median.
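For readers who want the formulas, a lognormal pinned down by its 5th and 95th percentiles works out as follows (the 0.001-0.1 interval is just a hypothetical example):

```python
import math

def lognormal_stats(p5, p95):
    """Median and mean of a lognormal specified by its 5th/95th percentiles."""
    mu = (math.log(p5) + math.log(p95)) / 2.0
    sigma = (math.log(p95) - math.log(p5)) / (2 * 1.645)  # z(0.95) ~ 1.645
    median = math.exp(mu)               # = sqrt(p5 * p95), the geometric mean
    mean = math.exp(mu + sigma**2 / 2)  # right skew pushes the mean up
    return median, mean

# Hypothetical 90% interval spanning two orders of magnitude:
median, mean = lognormal_stats(0.001, 0.1)
print(median)  # ~0.01, the geometric mean of the ends
print(mean)    # ~0.027, well above the median
```

The wider the interval, the larger sigma, and the further the mean pulls above the median, which is why the means in the tables sit so far above the medians.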

[4] In the 10% shortfall, alternate foods have some chance of preventing worse outcomes like full-scale nuclear war, but even if full-scale nuclear war occurs, alternate foods reduce the chance of losing civilization.

[5] Of course, a very large amount of money has been spent on trying to prevent nuclear war. More relevantly, money has been spent developing alternate foods for other reasons, such as mushrooms and natural-gas-digesting bacteria. This could easily amount to tens of millions of dollars that would otherwise have needed to be spent on catastrophe preparation, so it is relevant for the marginal $100 million. However, there are very high value interventions we would do first, like figuring out how to exploit mass/social media in a catastrophe so that the right people learn about alternate foods. Though the alternate foods would not work as well as with $100 million of R&D, just having the leaders of countries know about them and implement them in their own countries, even without trade, could still significantly increase the chance of retaining civilization. The cost of these first interventions would be very low, so their cost effectiveness would be very high.

[6] OpenAI already has ~$1 billion, so I estimate that $3 billion will be committed to AI. This is roughly consistent with the expected number of researchers in the AI model. This should also include quality-weighted volunteer time, as I have done in the case of alternate foods.

[7] Broadening these uncertainty bounds from the current 1.3%-7% range would also partially address the surprising result that the overall uncertainty of AI is lower than that of alternate foods.

