Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?
Summary
As part of a Centre for Effective Altruism (CEA) grant, I have updated the cost-effectiveness of preparing for agricultural catastrophes such as nuclear winter (previous analysis here). This largely involves planning and research and development of alternate foods (roughly those not dependent on sunlight, such as mushrooms, natural gas digesting bacteria, and extracting food from leaves). I have refined a model that uses Monte Carlo (probabilistic) sampling to estimate uncertain results using open source software (Guesstimate) and that incorporates an earlier model of artificial general intelligence safety (hereafter AI) cost-effectiveness. A major change is broadening the routes to far future impact from only loss of civilization and non-recovery to include making other catastrophes more likely (e.g. totalitarianism) or worse values ending up in AGI. Additional changes include accounting for moral hazard, performing a survey of global catastrophic risk (GCR) researchers for key parameters, and using better behaved distributions in the AI model (increasing the cost-effectiveness of AI by a factor of two).
Overall, alternate foods performs about an order of magnitude more favorably relative to AI than in the previous analysis, with the ratio of alternate foods cost-effectiveness to AI at the margin varying from ~3x to ~300x for the 100 millionth dollar and the margin now, respectively. This corresponds to ~60% confidence of greater cost-effectiveness than AI for the 100 millionth dollar, and ~95% confidence of greater cost-effectiveness at the margin now than AI. Anders Sandberg's version of the model produced ~80% and ~100% confidence, respectively.
Because the agricultural catastrophes could happen immediately and because existing expertise relevant to alternate foods could be co-opted by charitable giving, it is likely optimal to spend most of the $100 million in the next few years. I continue to believe that AI is extremely important, and do not advocate a reduction in AI funding. As before, both AI and alternate foods save lives in the present generation orders of magnitude more cheaply than global poverty interventions. So I believe one source of more funding for both should be those people who do not highly value the long-term future. Having alternate foods as a top priority would be a significant realignment of focus in the X risk community, so I invite more feedback and discussion (including playing with the model).1
Disclaimer/Acknowledgements: I would like to acknowledge CEA for funding the EA grant to perform research on solutions to agricultural catastrophes, Ozzie Gooen for developing Guesstimate, the Oxford Prioritisation Project for the AI model, and Joshua Pearce, Anders Sandberg and Owen Cotton-Barratt for reviewing content. Special thanks go to Finan Adamson, who presented an earlier model at EA Global San Francisco 2018 with a poster. Opinions are my own and this is not the official position of CEA, the Future of Humanity Institute, the Global Catastrophic Risk Institute, nor the Alliance to Feed the Earth in Disasters (ALLFED).
Introduction
The greatest catastrophic threat to global agriculture is full-scale nuclear war between the US and Russia, with corresponding burning of cities and blocking of the sun for 5-10 years. The purchasing power parity (PPP) size of an economy is a proxy for its combustible material, and this is now greater for China than for the US. Also, China may have a larger economy now than NATO plus the Warsaw Pact had in the 1980s. Therefore, even though China only has approximately 300 nuclear weapons, an exchange with Russia or the US could potentially block the sun. This is because thousands of nuclear weapons could come from the US or Russia, and the hundreds of Chinese nuclear weapons would likely hit the densest areas in the US or Russia. The obvious intervention is prevention of nuclear war, which would be the best outcome. However, it is not neglected, as it has been worked on for many decades and is currently funded at billions of quality-adjusted dollars per year. The next most obvious solution is storing food, which is far too expensive (~tens of trillions of dollars) to have competitive cost-effectiveness (it would also take many years, so it would not protect us right away, and it would exacerbate current malnutrition). I have posted before about getting prepared for alternate foods (roughly those not dependent on sunlight that exploit biomass or fossil fuels). This could save expected lives in the present generation for $0.20 to $400 per life, considering only 10% global agricultural shortfalls like the year without a summer in 1816 caused by a volcanic eruption, and would be even more cost-effective if sun-blocking scenarios were considered. Of course alternate foods would not save the lives of those people directly impacted by the nuclear weapons, which is potentially hundreds of millions. But since about 6 billion people would die with our current ~half a year of food storage if the sun were blocked for 5 years, alternate foods could solve ~90% of the problem.
Current awareness of alternate foods is relatively low: about 700,000 people globally have heard about the concept, based on impression counters for the ~10 articles, podcasts, and presentations for which there were data, including Science (out of more than 100 media mentions). Also, many of the technologies need to be better developed. Planning, research, and development are three interventions that could dramatically increase the probability of success of feeding everyone, each costing in the tens of millions of dollars. This post analyzes the cost-effectiveness of alternate foods from a long-term perspective. It is generally thought to be very unlikely that agricultural catastrophes such as nuclear war with the burning of cities (nuclear winter), a super volcanic eruption, or a large asteroid/comet impact would directly cause human extinction.2 However, there is a significant probability that by blocking the sun for about 5 years, these catastrophes could cause the collapse of civilization. Reasons that civilization might not recover include: easily accessible fossil fuels and minerals would be exhausted, we might not have the stable climate of the last 10,000 years, or we might lose trust or IQ permanently because of the trauma and genetic selection of the catastrophe. If the loss of civilization persists long enough, a natural catastrophe such as a super volcanic eruption or an asteroid/comet impact could cause the extinction of humanity. Another route to far future impact is the trauma associated with the catastrophe making future catastrophes more likely, such as global totalitarianism. A further route is that worse values caused by the catastrophe could be locked in by AGI.
AI has been a top priority in the X risk community. EAs have been an important part of raising awareness and funding for this cause. I seek to compare the cost-effectiveness of alternate foods with AI to see if alternate foods should also be a top priority. The Guesstimate model for AI cost-effectiveness was developed by the Oxford Prioritisation Project (which uses input from Owen Cotton-Barratt's and Daniel Dewey's model). I use a subset of this model, because I do not try to quantify the absolute value of the far future (measuring instead impact on the far future as a percentage saved/improved). I do not discuss the assumptions in the AI model here, but I did use better behaved functions to produce more reasonable results (like removing negative probabilities of X risk), and this increased the cost-effectiveness of AI by a factor of two. Another possible AI model to compare to would be Michael Dickens', but this is future work.
Updated model
Table 1 shows the key input parameters. The structure of the model is very similar to before. I will discuss the key updates to the model.
Table 1. Input variables
Barrett 2013 analyzes only inadvertent full-scale nuclear war (attacking when you think you are being attacked). Many fear that with the current leaders of Russia and the United States, an intentional strike has a significant probability. There is also the possibility of an accidental nuclear explosion that could escalate, and many other routes to nuclear war. However, others argue that the lack of nuclear war in the last 72 years should update the probability distribution. Technically there has been 1 nuclear war in the last 73 years (1.4% per year, though not directly comparable). Including Russia-China or US-China exchanges significantly increases the probability of "full-scale" nuclear war. Nine out of 13 times that there has been a switch in which country is the most militarily powerful in the world, there has been war (though we should not take that literally for the current situation). So I think Barrett's original calculation is reasonable.
A number of catastrophic events could cause a roughly 10% global agricultural shortfall, including a medium-sized asteroid/comet impact, a large but not super volcanic eruption (like the one that caused the year without a summer in 1816), regional nuclear war (for example, India-Pakistan), abrupt regional climate change (10°C in a decade, which has happened multiple times in the past), complete global loss of bees as pollinators, a super crop pest or pathogen, and coincident extreme weather resulting in multiple breadbasket failures. According to a UK government study, the latter scenario has a ~1% per year chance now, increasing throughout the century. Though it would be technically straightforward to reduce food consumption by 10% by making less food go to waste, animals, and biofuels, prices would go so high that those in poverty may not be able to afford food. We found an expected 500 million lives lost in such a catastrophe. There could also be extreme global climate change of >5°C that happens over a century (slow in comparison to "abrupt" climate change). This could make conventional agriculture impossible in the tropics, which could be a larger than 10% agricultural impact (depending on how much agriculture increased at high latitudes), but since it would occur over ~1 century, the impact might be similar to the abrupt 10% shortfalls. Other events would not directly affect food production, but could still have similar impacts on human nutrition. These include a conventional world war or a pandemic that disrupts global food trade and causes famine in food-importing countries.
Intuitively, one would expect the probability of 10% shortfalls to be significantly greater than that of full-scale nuclear war; there are many more potential combinations of countries for regional nuclear war than for full-scale. My mean estimate is 3% per year for 10% agricultural shortfalls.
I sent a survey to 31 GCR researchers, and got seven responses (including myself). The questions involved the reduction in far future potential due to the catastrophes, the contribution of ALLFED so far, and the additional contribution of spending roughly $100 million to get prepared.
The mean estimate of these GCR researchers was a 17% reduction in the long-term future of humanity due to full-scale nuclear war if there were no ALLFED, which compares to a 30% estimate by 80,000 Hours. The 10% food shortfall catastrophes could result in instability and full-scale nuclear war or other routes to far future impact. The poll of GCR researchers found a mean of 5% reduction in the long-term potential of humanity due to these catastrophes. This is lower than 80,000 Hours' estimate of ~20%.
The survey also indicated the means of the distributions of percent reduction in far future loss due to ALLFED (and the work done by ALLFED researchers before the organization was officially formed) were 4% and 5% for full-scale nuclear war and 10% agricultural shortfalls, respectively.
Furthermore, the survey indicated the means of the distributions of percent further reduction in far future loss due to spending $100 million were 17% and 25% for full-scale nuclear war and 10% agricultural shortfalls, respectively.
Moral hazard here refers to the possibility that awareness of a food backup plan makes nuclear war more likely or more intense. I think it unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) gives much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, just as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals. I estimate the mean loss in net effectiveness of the interventions for full-scale nuclear war to be 4%. For the 10% agricultural shortfalls, I estimate a mean 2% loss in net effectiveness, because I think the moral hazard would apply less strongly to non-nuclear scenarios, such as coincident extreme weather and volcanic eruptions. I support reducing nuclear stockpiles and have co-authored a paper arguing that the use of more than 100 nuclear weapons on another country, even without retaliation, poses unacceptable environmental blowback.
Results
As before, in order to convert average cost-effectiveness to marginal, I assume that returns to donations are logarithmic, which results in the marginal cost-effectiveness being proportional to one divided by the cumulative money spent. Ratios of mean cost-effectivenesses are reported in Table 2.3 With the new numbers comparing to AI at the margin, I find the 100 millionth dollar on alternate foods is 3 times more cost-effective, the average $100 million on alternate foods is 15 times more cost-effective, and the marginal dollar now on alternate foods is 300 times more cost-effective. One way of thinking about the high marginal cost-effectiveness now is spending some money to figure out whether more money is justified: value of information. These ratios are about an order of magnitude higher than in the 2017 version. This is largely driven by the greater long-term future impact of the catastrophes (compared to only considering loss of civilization and non-recovery). Given orders-of-magnitude uncertainty, a more robust metric is likely the probability that one is more cost-effective than the other. With the new numbers comparing to AI at the margin, I find ~60% probability that the 100 millionth dollar on alternate foods is more cost-effective, ~80% probability that the average $100 million on alternate foods is more cost-effective, and ~90% probability that the marginal dollar now on alternate foods is more cost-effective (see Table 2).
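The logarithmic-returns conversion and the probability comparison above can be sketched with a toy Monte Carlo in Python. This is only an illustration: the lognormal medians and spreads below are made-up placeholders, not the values in the actual Guesstimate model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Logarithmic returns: total benefit B(x) = k*ln(x), so marginal
# cost-effectiveness dB/dx = k/x, i.e. proportional to one over
# the cumulative money spent.
def marginal_over_average(cumulative, total):
    """Ratio of marginal CE at `cumulative` spend to average CE over `total`."""
    return (1.0 / cumulative) / (np.log(total) / total)

# Toy lognormal samples of cost-effectiveness per dollar
# (placeholder medians and spreads only).
alt_foods = rng.lognormal(np.log(10.0), 2.0, N)
ai_safety = rng.lognormal(np.log(1.0), 2.0, N)

ratio_of_means = alt_foods.mean() / ai_safety.mean()   # ratio of mean CEs
p_alt_better = (alt_foods > ai_safety).mean()          # P(alt foods CE > AI CE)
```

Note that with wide distributions the probability that one intervention beats the other is much less extreme than the ratio of means itself, which is why the post reports both.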
Table 2. Key cost effectiveness outputs
My personal estimates for these parameters tended to be close to the median of the survey, so the survey means imply greater cost-effectiveness than my own estimates would. However, I would note that being prepared for agricultural catastrophes might protect against unknown risks, meaning the cost-effectiveness would increase.
The importance, tractability, neglectedness (ITN) framework is useful for screening cause areas. One update from the previous analysis is that because alternate foods appear to be relatively more cost-effective now, this would mean they are more tractable than AI, which was my original intuition (versus about the same tractability with my previous analysis).
Steelmanning the opposition to funding alternate foods
These are generally the same as before. One addition could be that there could be some public relations debacle that hurts the field. This could be considered within the moral hazard parameter. I think this indicates that we should be cautious with the mass media, but I doubt this is a reason not to do the work at all.
Anders' model
Anders' model differed from mine in a number of ways. The mean cost-effectiveness was similar to mine (though he did not take into account the possibility of a conflict with China being full-scale nuclear war), but because of the smaller variance in his distributions, there was greater confidence that alternate foods are more cost-effective than AI (~80% at the 100 millionth dollar, and ~100% for the marginal dollar now). Another large difference is that I (and the survey) found that 10% agricultural shortfalls have similar cost-effectiveness for the far future as full-scale nuclear war. This was because the greater probability of these catastrophes counteracted their smaller far future impact. However, Anders rated the cost-effectiveness of the 10% shortfalls as two orders of magnitude lower than for full-scale nuclear war. I tend to be somewhere in between, with my intuition being that the far future impact scales more strongly than linearly with the short-term impact.
Conclusions
Since my last post, there has been significant support from EA in this cause area, most notably Adam Gleave through the EA lottery. A forthcoming post will explain the near-term projects we think are the highest priority. We have not yet submitted this model for publication, so your feedback can still influence the paper. As before, both AI and alternate foods save lives in the present generation orders of magnitude more cheaply than global poverty interventions. One way of quantifying the urgency of alternate foods is the value of accelerating full preparedness. At the bottom of the model, a calculation shows that each day acceleration of preparedness could increase the value of the far future by 0.000002%-0.002%.
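The per-day acceleration figure can be sanity-checked with back-of-the-envelope arithmetic. This is a rough point-estimate sketch using survey-mean parameters quoted earlier in the post; the actual model uses full probability distributions, so the numbers here are only indicative.

```python
# Rough point estimate of the far-future value of one day's acceleration
annual_prob_war = 0.01        # ~1%/year full-scale nuclear war (order of magnitude)
far_future_loss = 0.17        # survey mean: 17% far-future reduction if unprepared
further_reduction = 0.17      # survey mean: ~$100M preparedness cuts that loss by 17%

# Expected fraction of far-future value gained per day of earlier preparedness
daily_gain = annual_prob_war * far_future_loss * further_reduction / 365
# ~8e-7, i.e. ~0.00008% per day, inside the 0.000002%-0.002% range quoted above
```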
Notes
1 You can change numbers while viewing the model to see how the outputs change, but they will not save. If you want to save changes, you can make a copy of the model. Click View > Visible to show the arrows. Mouse over cells to see comments. Click on a cell to see its equation.
2 Though there were concerns that full-scale nuclear war would kill everyone with radioactivity, it turns out that most of the radioactivity is rained out within a few days. One possible mechanism for extinction would be that hunter-gatherers would die out because they do not have food storage, while people in developed countries would have food storage but might not be able to figure out how to go back to being hunter-gatherers.
3 Ratios of means require manual updates in Guesstimate, which I note in all caps in the model.
Hi David,
Thanks for putting this together. I have some critical questions which shouldn't be taken as an overall judgement on the merits of this.
1. The upper bound of the Barrett et al estimate seems much too high. I don't think it plausible that there is a 7% annual risk of nuclear war, especially as this is based in part on historical near misses. We'd expect a nuclear war every 14 years, which is so strongly at odds with 75 years of non-war as to not be plausible, even at the 95th percentile.
2. On climate change, 5 degrees of warming would take 100 years to occur, in which time there would be agricultural improvements. Reputable forecasts suggest that gains in productivity would outweigh any losses in productivity due to climate change. IPCC models show 20% effects at 5 degrees (i.e. in 100 years), and experts suggest there will be 50% improvements in yield by 2050 (which doesn't account for unforeseen improvements in technology). Farming would indeed become difficult in the tropics at 5 degrees, but to balance that out, frozen areas, especially in Russia, would become open to farming.
3. In general, does your model try to account for improvements in agricultural productivity over the next 100 years? If not, then your model probably overstates the c-e of alt foods.
4. What do you think about the Reisner et al study which argued that prior work on nuclear winter is wrong? A very high-ranking physicist has also mentioned to me in conversation that they don't think the current nuclear winter models are plausible, so I would be keen to hear what you think.
Thanks for your good questions.
1. Indeed, Bayesian updating the lognormal distribution from Barrett based on no nuclear war for 72 years would cut off the high-probability tail. Barrett notes that 7% per year is unlikely given the data. This is why Anders uses a beta distribution in his model. If you assume a uniform prior and 72 years of no nuclear war with a beta distribution, you get an annual probability of approximately 1.4%. He adjusted this downward to 0.7%, but as I noted this does not take into account the possibility of China conflicts. Hellman's model indicated roughly 1% per year. So I think this is the right order of magnitude, but feel free to put your own number in. The overall conclusions are likely not to change very much. To give you some idea, the cost-effectiveness of alternate foods at the margin now is two orders of magnitude higher than at the 100 millionth dollar. So even if one were two orders of magnitude less optimistic about alternate foods, there would still be ~60% confidence that funding alternate foods now is better than AI.
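The beta-distribution update described above can be checked in a couple of lines (a minimal sketch: the uniform prior is Beta(1, 1), and each war-free year is treated as an independent Bernoulli trial):

```python
# Uniform prior Beta(a=1, b=1); observing 0 nuclear wars in 72 years
# gives a posterior Beta(a + 0, b + 72) = Beta(1, 73).
a, b = 1, 1
wars, years_without_war = 0, 72
a_post = a + wars
b_post = b + years_without_war

# Posterior mean annual probability (Laplace's rule of succession):
annual_prob = a_post / (a_post + b_post)   # 1/74, approximately 1.4% per year
```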
2. I agree that slow climate changes are much less problematic. One way of quantifying this is the agricultural loss velocity: percent loss in productivity per year. For nuclear winter, this is roughly 100% per year. For the sudden 10% agricultural losses (like regional nuclear war or a volcanic eruption like the year without a summer), this is about 10% per year. For the abrupt regional climate change, this is about 10% over 10 years, or 1% per year. But if the 5°C over 100 years is a 20% agricultural effect, this is only 0.2% per year. And yet there seems to be much more concern in the EA community (e.g. CSER) about extreme climate change than about the abrupt 10% shortfalls. And as I noted, the 80,000 Hours estimate of the long-term impact of extreme climate change was 20%. I guess one possible mechanism by which slow extreme climate change could be bad is mass migration causing political tensions and potentially nuclear war. But in general, these risks seem to be significantly less serious than nuclear war directly.
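The loss velocities above reduce to simple division (figures taken directly from the text):

```python
# Agricultural loss velocity: percent of productivity lost per year
loss_velocity = {
    "nuclear winter":            100 / 1,    # ~100% loss within ~1 year
    "sudden 10% shortfall":       10 / 1,    # ~10% loss within ~1 year
    "abrupt regional climate":    10 / 10,   # ~10% loss over a decade
    "slow extreme climate (5C)":  20 / 100,  # ~20% loss over a century
}

# Spread between the fastest and slowest scenarios: a factor of 500
spread = loss_velocity["nuclear winter"] / loss_velocity["slow extreme climate (5C)"]
```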
3. My time horizon is only about 20 years. For the blocking of the sun, improvements in agricultural productivity are not really relevant. They would be relevant for the 10% shortfalls. Another thing that would be relevant is general economic development, so the poorest of the world could handle price shocks better. Anders' time horizon is significantly longer, but he is less concerned about the 10% shortfalls. So overall I do not think it would be too large of an adjustment.
4. For the probability of nuclear winter given full-scale nuclear war, there appear to be two camps: people who think it is near 100% and people who think it is near 0%. I did a Monte Carlo analysis on it and found that if you define nuclear winter as near-complete agricultural collapse of crops where they are currently planted, this was around a 20% probability. This was so low because I considered significant probabilities of counter-industrial and counterforce (trying to destroy the other side's nuclear weapons) attacks. With the usual assumption of targeting for maximum casualties, I would say the probability is more like half. I would also note that it is possible to have far future impacts of nuclear war even without nuclear winter (e.g. worse values ending up in AGI).
David,
You write:
I don't know whether you "agree" with the survey's distribution, or believe these means to be realistic. But if you do think that a 5% reduction is at least plausible, what would you say are the main drivers of ALLFED's having been able to produce this reduction through its work so far (not work it plans to conduct later)?
I'm trying to figure out what a world would look like where ALLFED's work so far wound up helping to save a lot of lives. (This work seems to be mostly papers and workshops, but could also include work that isn't as easy to display, like conversations with influential people.)
It could be that ALLFED indirectly inspired other organizations, leading to those orgs beginning to conduct research or make plans; it could be that ALLFED's own research was directly useful to governments as they shaped their response plans; it could be that ALLFED's workshops taught key lessons to important people, who reacted better than they otherwise would have...
...but I'm not sure which of these stories you think sounds most likely/important.
This definitely isn't a well-formed question (my apologies!), but do you have any thoughts on the most important way(s) that ALLFED's prior work has begun to make the world safer?
Thanks for your question, aarongertler (I can't seem to switch this to a reply). I think those mean numbers are a little high. But I think it is plausible that the work we have done so far could result in the saving of many lives (or even civilization) if the catastrophe happens soon. One possibility is that governments search the web and find our materials. Then the governments might realize that if they cooperated, we could feed everyone, and hopefully they would not resort to military action in a "lifeboat" situation. Another possibility is that the mass media contacts we have so far call us and run hopeful stories. These could get picked up by other media and influence leaders. A further possibility is that the people we have talked to who have some influence in the US, UK, and Indian governments could pass the message up. A fourth possibility is that our message could go viral on social media and eventually influence leaders. In many of these scenarios, even if governments don't change their actions, since some food sources would work at the household scale, some lives could be saved this way. Also, governments might not choose to cooperate with other governments, but could still learn how to feed more of their own people. It is possible that our work has prompted governments to make response plans, but they haven't told us (it could even be classified).