Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4000 citations, >50,000 downloads, h-index = 32, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 200 articles in over 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Denkenberger
As one who donates 50%, it doesn’t seem like it should be that uncommon. One way I think about it is earning like upper-middle-class, living like middle-class, and donating like upper-class. Tens of percent of people work for tens of percent less money in sectors like nonprofits and governments. And I’ve heard of quite a few non-EAs who have taken jobs for half the money. And yet most people think about donating that large of a percent very differently than taking a job that pays less. I’m still not sure why—other than that it is uncommon or “weird.”
I second weakening the definition. As someone who cares deeply about future generations, I think it is infeasible to value them equally to people today in terms of actual actions. I sketched out an optimal mitigation path for asteroid/comet impact. Just valuing the present generation in one country, we should do alternate foods. Valuing the present world, we should do asteroid detection/deflection. Once you value hundreds of future generations, we should add in food storage and comet detection/deflection, costing many trillions of dollars. But if you value even further in the future, we should take even more extreme measures, like many redundancies. And this is for a very small risk compared to things like nuclear winter and AGI. Furthermore, even if one does discount future generations, if you think we could have many computer consciousnesses in only a century or so, again we should be donating huge amounts of resources to reduce even small risks. I guess one way of valuing future generations equally to the present generation is to value each generation an infinitesimal amount, but that doesn’t seem right.
I applaud the explanations of the decisions for the grants and also the responses to the questions. Now that things have calmed down, since the EA Long Term Future Fund team suggested that requests for feedback on unsuccessful grants be made publicly, I am doing that.
My proposal was to further investigate a new cause area, namely resilience to catastrophes that could disable electricity regionally or globally, including extreme solar storm, high-altitude electromagnetic pulses (caused by nuclear detonations), or a narrow AI computer virus. Since nearly everything is dependent on electricity, including pulling fossil fuels out of the ground, industrial civilization could grind to a halt. Many people have suggested hardening the grid to these catastrophes, but this would cost tens of billions of dollars. However, getting prepared for quickly providing food, energy, and communications needs in a catastrophe would cost much less money and provide much of the present generation (lifesaving) and far future (preservation of anthropological civilization) benefits. I have made a Guesstimate model assessing the cost-effectiveness of work to improve long-term future outcomes given one of these catastrophes. Both my inputs and Anders Sandberg’s inputs yield >95% confidence that work now on losing electricity/industry is more cost-effective than marginal work on AI safety (Oxford Prioritisation Project/ Owen Cotton-Barratt and Daniel Dewey did the AI section, except I truncated distributions and made AI more cost effective). There is also a blank (to avoid anchoring) Guesstimate model.
The specific proposal was to buy out of my teaching and/or fund a graduate student to research particularly high value of information relevant projects and submit papers. I think that feedback would be particularly helpful because it is not just about the particular proposal, but also whether the new cause area is worth investigating further.
For more background, see the three papers involving losing electricity/industry: feeding everyone with the loss of industry, providing nonfood needs with the loss of industry, and feeding everyone losing industry and half of sun. We are still working on the paper for the cost-effectiveness from the long-term future perspective of preparing for these catastrophes funded by an EA grant, so input can influence that paper.
ALLFED has nearly completed our prioritization, and given the amount of commercialization that has already been done on resilient foods, we think we are ready to partner with other companies to do piloting of the most promising solutions in a way that is valuable for global catastrophes (e.g. very fast construction). Repurposing a paper mill for sugar (and protein if the feedstock is agricultural residues) is a good large project. But there is also fast construction of pilot scale of natural gas (methane) single cell protein and the fast construction of pilot scale hydrogen single cell protein (splitting water or gasifying a solid fuel such as biomass). Furthermore, there is the backup global radio communication system that would be extremely useful for loss of electricity scenarios.
I think there still is quite a bit of research to be done, especially analyzing cooperation scenarios and the potential of resilient food production by country. This could help inform country-level response plans. This could be facilitated by setting up a research institute on resilient foods. Another possibility is running an X prize for radical new solutions.
I am skeptical and would like to see the math on standard deviations. For the US, according to this, about one third of Nobel prizes were awarded to people who did their undergraduate at a non top 100 global university (and I’m pretty sure it would be the majority outside the global top 20 that are in the US). And you don’t have to win a Nobel Prize in order to become an EA! So I think there is lots of potential talent for EA outside the global top 100, at least at the undergraduate level. A key factor here is size—many of the most elite schools are not very big. For instance, the honors college at Penn State has similar SAT scores to Princeton, and it has about half as many undergrads as Princeton. At the graduate level, I think the talent tends to concentrate more, but I still think there is significant talent outside the global top 100.
(Edit: Penn State honors college is larger than Swarthmore.)
Thanks for considering ALLFED. We try to respond to inquiries quickly. We have looked back and have not been able to locate any such inquiries. We will be finalizing our 2020 report with financial details soon.
Thanks a lot for the engagement in the cost-effectiveness model. To clarify, the cost of preparation does not include the scale up in a catastrophe. The idea is that the resilient foods (we are rebranding away from “alternative foods”) could be scaled up without large-scale preparation (e.g. countries would repurpose the paper factories to produce food after the catastrophe, rather than spending billions of dollars ahead of time). Most of the promising resilient foods have already been commercialized. In this paper, we found that if there were no resilient foods, expenditure on stored foods in a catastrophe would be approximately $90 trillion and about 10% of people would survive. However, if resilient foods could be produced at $2.5 per dry kilogram retail, 97% of people would survive but the total expenditure would only be ~$20 trillion. So one could argue that resilient foods would actually save money in a catastrophe. But we did not include that effect in the cost-effectiveness model.
I expect that affecting a large amount of the Earth’s future impact (i.e., 3 to 50% of the future impact of humanity) would be very hard even in extreme circumstances.
Just to make sure we are on the same page, if there were a 10% probability of full-scale nuclear war in the next 30 years and there were a 10% reduction in the long-term future potential of humanity given nuclear war, and if planning and R&D for resilient foods mitigated the far future impact of nuclear war by 50%, then that would improve the long-term potential of humanity by 0.5 percentage points (the product of the three percentages).
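In code, the expected-value product reads as follows (using the illustrative figures above, not independent estimates):

```python
# Illustrative expected-value product from the figures above.
p_war = 0.10           # probability of full-scale nuclear war in the next 30 years
loss_given_war = 0.10  # reduction in long-term potential given nuclear war
mitigation = 0.50      # fraction of that far-future impact mitigated by resilient foods

improvement = p_war * loss_given_war * mitigation
print(f"{improvement:.2%}")  # 0.50% improvement in long-term potential
```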
Nice piece! Though this does not work for all longtermist interventions, some find it motivating that AGI safety, alternative foods, and interventions for losing electricity/industry (and probably other interventions) likely save lives in the present generation more cost-effectively than GiveWell top charities. This book argues that doing more to mitigate catastrophes can be justified by concerns of the present generation.
EA is overwhelmingly white, male, upper-middle-class, and of a narrow range of (typically quantitative) academic backgrounds.
Though these characteristics are overrepresented in EA, I think one should be careful about claiming overall majorities. According to the 2020 EA survey, EA is 71% male and 76% white. I couldn’t quickly find the actual distribution of EA income, but eyeballing some graphs here and using $100,000 household income as a threshold (say $60,000 individual income) and a $600k household upper bound (upper class being roughly the top 1% of earners), I would estimate around one third of EAs would be upper middle class now. But I think your point was that they came from an upper-middle-class background, which I have not seen data on. I would still doubt it would be more than half of EAs, so let’s be generous and use that. Using your list above of analytic philosophy, mathematics, computer science, or economics, that is about 53% of EAs (2017 data, so probably lower now). If these characteristics were all independent, the product would indicate that about 14% of EAs have all of them. There is likely positive correlation between these characteristics, but I believe that, by definition, with the numbers above, the share can’t exceed the 50% upper-middle-class figure, even if all upper-middle-class EAs happen to be male, white, and in those majors.
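As a sanity check on the ~14% figure, here is the independence calculation (survey figures as quoted above; independence is the assumption being tested):

```python
# Share of EAs with all four characteristics, assuming independence.
male, white, upper_middle, quant_major = 0.71, 0.76, 0.50, 0.53
independent_product = male * white * upper_middle * quant_major
print(f"{independent_product:.1%}")  # 14.3%

# Positive correlation raises the true share, but it can never exceed
# the smallest single share (50% here).
assert independent_product <= min(male, white, upper_middle, quant_major)
```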
US suburbs may have a lot of building mass in aggregate, but it’s also really spread out and generally doesn’t contain that much which is likely to draw nuclear attack.
There are only 55 metropolitan areas in the US with greater than 1 million population. Furthermore, the mostly steel/concrete city centers are generally not very large, so even with a nuclear weapon targeted at the city center, it would burn a significant amount of suburbs. So with 1500 nuclear weapons countervalue even spread across NATO, a lot of the area hit would be suburbs.
Yeah, sorry, I’ve heard enough crying wolf on this (Sagan on Kuwait being the most prominent) that I don’t buy it, at least not until I see good validation of the models in question on real-world events. Which is notably lacking from all of these papers. So I’ll take the best analog, and go from there. Also, note that your cite there is from 1990, when computers were bad and Kuwait hadn’t happened yet.
“As Toon, Turco, et al. (2007) explained, for fires with a diameter exceeding the atmospheric scale height (about 8 km), pyro-convection would directly inject soot into the lower stratosphere.” Another way of getting at this is looking at the maximum height of buoyant plumes. It scales with the thermal power raised to the one quarter exponent. The Kuwait oil fires were between 90 MW and 2 GW. Whereas firestorms could be ~three orders of magnitude more powerful than the biggest Kuwait oil fire. So that implies much higher lofting. Furthermore, volcanoes are very high thermal power, and they regularly reach the stratosphere directly.
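A rough sketch of the quarter-power scaling (the power figures are the ones quoted above; the proportionality constant is omitted, so only height ratios are meaningful):

```python
# Buoyant plume height scales as thermal power to the 1/4 power,
# so only the ratio of two plume heights is computed here.
def plume_height_ratio(power_w, reference_power_w):
    return (power_w / reference_power_w) ** 0.25

# A firestorm ~1000x the power of the biggest Kuwait oil fire (~2 GW)
# would loft its plume roughly 5.6x higher.
print(round(plume_height_ratio(2e12, 2e9), 2))  # 5.62
```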
Also note that the doommonger’s best attempt to puzzle stratospheric soot out of atmospheric data from WWII didn’t really show more than a brief gap at most.
I don’t see this as a significant update, because the expected signal was small compared to the noise.
This is helpful, but there is a key difference between the EA job market and the general one: there are a limited number of positions in EA. I think a valuable metric that perhaps could be explored on the next EA survey is the level of EA “unemployment.” This could mean the number of EAs who would prefer to have a job at an EA aligned organization, but have not gotten one. I suspect this will be far higher than the general level of unemployment. As an example, say there are 50 EAs with a particular skill, and five EA jobs requiring that skill. Then if they all apply to those five jobs, 2% of the applicants will get a job in each case, but that is only 10% of the EAs getting a job, so there would be 90% “unemployment.” Whereas outside of EA, they could all apply to 50 jobs and all get jobs. This could be analogous to underemployment, such as PhDs who want a job such as academia that requires a PhD, but have not gotten one.
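The toy numbers above work out as follows (50 EAs and 5 jobs are the hypothetical from the comment):

```python
# Toy model of EA "unemployment" with more qualified EAs than EA jobs.
applicants, jobs = 50, 5

per_job_acceptance = 1 / applicants    # each opening hires 1 of 50: 2%
employed_fraction = jobs / applicants  # 5 of 50 EAs get an EA job: 10%
ea_unemployment = 1 - employed_fraction

print(per_job_acceptance, employed_fraction, ea_unemployment)  # 0.02 0.1 0.9
```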
Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being ‘civilization collapse’ meaning near-total failure of industrial food, water, and power systems. Indeed, collapse so defined from that stimulus seems nonsensical to me for rich quantitative reasons.
If there were a pandemic heading toward 50% population fatality, I think that it is likely that workers would not show up to critical industries and there would be a collapse of industrial civilization. I looked into whether the military could replace those workers, and it did not look feasible. Whether there would be further collapse of large-scale cooperation is less certain. If that cooperation is maintained, I agree it would be possible to have agricultural productivity similar to preindustrial Europe. However, it would mean a very rapid scale up of hand/animal farming equipment, and hand powered wells, carts that could be drawn by animals, etc (which ALLFED is planning on investigating). Some people say that modern crop varieties would actually do worse than traditional crop varieties if there were no artificial fertilizers and pesticides. If that were true or if scaling of tools were difficult, then we could have much worse agricultural productivity than preindustrial Europe.
Loss of rapid communication would likely imply fragmentation of large countries, if it is true that empires can only be maintained within (I think) a ~14-day communication radius. Furthermore, it is possible that cooperation outside of 100-person groups is lost, particularly because of fear of the disease. In this case, I think it is likely, with current preparation, that we would only be able to do hunting and gathering. In addition, the hunting and gathering population density could be much less than historic, because the overshoot in population density could mean that plants and animals that are good to eat could be driven to extinction by desperate humans.
Though it is possible that current food storage could be protected well, it is not clear to me that there would be a strong defense advantage. The desperate attackers would have weapons as well. If we go significantly above the carrying capacity and food is distributed fairly equally, then everyone would starve.
Thanks for mentioning ALLFED! We touched on this in one of our papers:
The importance, tractability, neglectedness (ITN) framework [45] is useful for prioritizing cause areas. The importance is the expected impact on the long-term future of the risk. Tractability measures the ease of making progress. Neglectedness quantifies how much effort is being directed toward reducing the risk. Unfortunately, this framework cannot be applied to interventions straightforwardly. This is because addressing a risk could have many potential interventions. Nevertheless, some semi-quantitative insights can be gleaned. The importance of AGI is larger than industry loss catastrophes, but industry loss interventions are far more neglected.
Or another way of saying this is that different solutions could solve different parts or percentages of the problem. So really I think we should be doing more actual cost-effectiveness analyses, and only using ITN for initial screening.
Thanks to everyone who helps make these events possible. I assume UC Berkeley, which accommodated ~1000 people in the summer of 2016, was not more expensive, so would you describe it as less suited to the event? Why is that? It had the large advantage of very inexpensive housing in the dorms. It is understandable if CEA only wants to subsidize a certain number of tickets, but I would think there is a significant number of additional people who would pay the full cost. I’m interested in the estimate of the percent reduction in value to the first ~600 participants associated with a larger conference, and how that was weighed against the value that additional participants could get. With fewer EAGx events, I expect the value of the latter would be larger this year than in other years.
Nuclear winter would be approximately an 8°C change in only one year, and this is unlikely to cause extinction. 10°C climate warming over a century would have much lower impact, because there is time to relocate infrastructure and people (and nuclear winter also reduces solar radiation). So I have put it in the intensity category of an abrupt 10% agricultural shortfall. Based on a survey of GCR researchers, this has a mean long-term reduction in far future potential of approximately 5%. This combined with a probability of about 2% gives about a 0.1% reduction in the far future potential. Full-scale nuclear war is estimated to have a 17% reduction in long-term future potential. There is great uncertainty in the probability of full-scale nuclear war, but I think 0.1% per year or 10% in the next 100 years is reasonably conservative.* Therefore, full-scale nuclear war is more likely than extreme climate change and would also have significantly greater consequences if it were to happen. But then the question is how much it would cost to significantly mitigate the problems. Since solar radiation management is risky, the present value of the cost of largely solving the climate change problem by reducing emissions is around $10 trillion (there was an EA forum post on value of information of this, but I can’t seem to find it). I have researched both energy efficiency and renewable energy for years, and I do think there is still some low-hanging fruit of energy efficiency that pays for itself. However, actually solving the problem will cost a lot of money. On the other hand, reducing the far future impact of nuclear winter by about 17% would cost around $100 million by investing in response plans and research and development of alternative foods.
Therefore, since alternative foods address a roughly 15 times bigger problem, at 100,000 times lower cost, and with 1/5 the threat reduction (if we assume the $10 trillion on emissions reductions completely solves the problem), this works out to approximately 300,000 times higher cost-effectiveness for alternative foods versus emissions reductions.
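Multiplying out the three factors above (all inputs are the comment's own round figures):

```python
problem_ratio = 15           # nuclear winter ~15x the far-future impact of extreme climate change
cost_ratio = 10e12 / 100e6   # $10 trillion emissions cost vs. $100 million resilient-food cost
threat_reduction = 1 / 5     # foods mitigate ~17% of their threat vs. ~100% for emissions cuts

relative_cost_effectiveness = problem_ratio * cost_ratio * threat_reduction
print(f"{relative_cost_effectiveness:,.0f}x")  # 300,000x
```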
Fortunately, alternative foods also mitigate climate-related catastrophes such as abrupt regional climate change, coincident extreme weather on multiple continents, and slow 10°C change (which makes the cost-effectiveness of alternative foods even higher than the numbers calculated above). There may be other low-hanging fruit that addresses climate change, such as Cool Earth (though see this criticism) and energy efficiency (though even if energy efficiency pays for itself, it still costs donor money to advocate for it). But even at a cost of $0.38 per ton CO2, it is still a few orders of magnitude lower cost-effectiveness than alternative foods or artificial general intelligence safety from the perspective of the long-term future. Of course it is better to do this probabilistically, which is why I have encouraged you to add climate change to an existing cost-effectiveness model of alternative foods and artificial intelligence.
Hopefully we can direct tens of billions of dollars more to EA, and then we can work our way further down the marginal cost effectiveness curves of existential risk mitigation, but I don’t think that reducing greenhouse gas emissions should be a priority for EA at this point.
*For the alternative food analysis, we only used a few-decades effective time horizon, but a higher probability of nuclear war, from here.
I did not look at the details, but it appears that neither of these estimates take into account opportunity costs. Typical farming profit is around $200 per hectare per year, so if instead you sequester 5 tCO2e per hectare per year, that would cost ~$40 per tCO2e, ~2 orders of magnitude more expensive. By the way, I believe $300 billion divided by 205 billion tons carbon = 750 billion tons CO2 would be $0.40 per ton CO2.
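The two back-of-envelope numbers can be checked directly (figures as given above; the 44/12 factor is the molecular-to-atomic mass ratio converting tons of carbon to tons of CO2):

```python
# Opportunity cost of using farmland for sequestration.
profit_per_ha = 200  # USD per hectare per year, typical farming profit
tco2e_per_ha = 5     # tCO2e sequestered per hectare per year
opportunity_cost = profit_per_ha / tco2e_per_ha  # $40 per tCO2e

# The study's headline cost, converted from tons of carbon to tons of CO2.
gt_carbon = 205
gt_co2 = gt_carbon * 44 / 12             # ~752 Gt CO2
cost_per_t_co2 = 300e9 / (gt_co2 * 1e9)  # ~$0.40 per ton CO2

print(opportunity_cost, round(gt_co2), round(cost_per_t_co2, 2))  # 40.0 752 0.4
```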
I think it is good to keep vigilant and make sure we are not missing good cause areas. However, I think that your examples are not actually neglected. Using this scale, ~$10 billion per year or more is not neglected.
The point is that there is a difference between working in a general area and working on the specific subset of that area that is highest impact and most neglected. In much the same way as AI safety research is neglected even if AI research more generally is not, likewise in the parallel cases I present, I argue that serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not.
Below I try to look at actual effort toward resolving each problem effectively, so it is not analogous to the total amount of money put into artificial intelligence. For resource depletion, the total amount of money put in could include all the money spent on extracting resources. So I’m just looking at efforts toward resource sustainability, which would be analogous to artificial intelligence safety.
One flaw with the 80,000 hours scale is that it does not take into account historic work. Historically, there has been a tremendous amount of effort devising and implementing practical alternatives to capitalism. So averaged over the last century, it would be far more than $10 billion per year.
Before climate change reached prominence in the environmental movement, much effort was directed at recycling and renewable energy to address the resource depletion issue. Even now some effort in these areas is directed at resource depletion. In previous decades, there was a large amount of effort on family planning, partly because of resource depletion issues, and even now billions of dollars are being spent per year on this. So I am quite sure more than $10 billion per year is being spent on addressing resource depletion.
I’m not easily finding the number of philosophers of religion. However, I know that many people who are disillusioned with the religion they grew up with do not just jump to atheism, and instead try to find the true religion (at least for a certain period of time). So if you add up all the effort hours with some reasonable wage, I’m pretty sure it would average more than $10 billion per year over the last century.
So the neglectedness of these cause areas just cannot compare to that of artificial intelligence safety, which is only tens of millions of dollars per year (and very small more than a decade ago). Of course it is still possible that there are good opportunities within these cause areas, but it is just much less likely than in more neglected cause areas.
Of course we need to prioritize. The Nobel example we have data for, but I think that is too high a bar. My point is that there are probably a similar number of potential EAs at the big relatively high ranking state schools like University of Illinois at Urbana Champaign or University of Texas at Austin as there are at Princeton. The state school students may have lower wealth and political connections, but I think the capability is there (and perhaps less entitlement). (Disclosure: I went to Penn State, Princeton, and University of Colorado at Boulder.)
I am also excited to see work on such an important, neglected topic.
While I haven’t looked into this much, I feel fairly convinced that hundreds of thousands or millions of people could survive using traditional approaches to agriculture in parts of the world with more moderate climate effects (and basic mitigation strategies, like switching to crop types that are more resilient to temperature and precipitation fluctuations).
ALLFED has indeed found a number of cool-tolerant crops that could likely grow in nuclear winter conditions in the tropics. However, they are generally planted far away from the tropics, so if there were not long-distance cooperation, the situation would be bad. Even without long-distance cooperation, artifacts have moved thousands of kilometers, but I think it takes thousands of years. One possibility would be relocating crops from nearby mountains, but that would only work in specific circumstances.
On the other hand, there could be long distance movement of people, perhaps with remaining above ground fossil fuel and current ships. But then places where agriculture is easier in nuclear winter such as Oceania could be overwhelmed with migrants.
The carrying capacity of the Earth for hunter-gatherers is thought to be around 10 million if the survivors regress to pre-paleolithic levels of technology (if they lose, for example, flakes, handaxes, controlled use of fire, and wooden spears) (Taiz, 2013).
It appears that this is not the correct reference for that quote. Taiz says that the global population was 10 million in 8,000 BC and another one of your references said that by then the hunter gatherers had covered the globe and had 10 million population (some say only 1 million) and they would generally have had those pre-paleolithic technologies. Ellis says 100 million hunter gatherers would be possible with prehistoric technology, which is much higher than the actual population in 8,000 BC (though it would be consistent with your statement).
Several experts, including ALLFED director David Denkenberger, have affirmed this conclusion — they do not expect humanity to dip below the minimum viable population even in relatively extreme sun-blocking scenarios.
To be clear, I don’t expect it, but I think extinction is a non-negligible probability.
Before getting into the likelihood that society would recover from civilizational collapse under these starting conditions, I’ll briefly discuss whether we should expect human civilization to actually collapse in my sense in this scenario.
Doesn’t appear to be public?
I agree that most academic research is a bad ROI but I find that a lot of this sort of ‘nobody reads research’ commentary is equating reads with citations which seems completely wrong. By that metric most forum posts would also not be read by anyone.
I agree; for one, the studies I’ve seen saying that the median publication is not cited are including conference papers, so if one is talking about the peer-reviewed literature, citations are significantly greater. I’ve estimated the average number of citations per paper is around 30 for the peer-reviewed literature. Furthermore, from what I’ve seen, the number of reads on places like ResearchGate and Academia.edu tends to be one to two orders of magnitude greater than the number of citations. So I think a reasonable expectation for a peer-reviewed paper is hundreds or thousands of reads.
I think a geometric mean would be more appropriate, so (48*468)^0.5 = 150. But I disagree with a number of the inputs.
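For reference, the geometric mean calculation (48 and 468 are the two soot estimates being combined above):

```python
from math import prod

def geometric_mean(values):
    # n-th root of the product of n values
    return prod(values) ** (1 / len(values))

print(round(geometric_mean([48, 468])))  # 150
```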
Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuclear weapons that are not currently deployed could become so. Also, in an all-out nuclear war, currently nondeployed nuclear weapons could be used (with some delay). Furthermore, even if only two thirds as many nuclear weapons are used, the amount of soot would not scale down linearly because of hitting higher average combustible loading areas.
I agree that targeting would likely not maximize burned material, and I consider that in my Monte Carlo analysis.
While it is true that most city centers have a higher percentage of steel and concrete than Hiroshima, at least in the US, suburbs are still built of wood, and that is the majority of overall building mass. So I don’t think the overall flammability is that much different. There has also been the counteracting factor of much more building area per person, and taller average buildings in cities. Of course steel buildings can still burn, as shown by 9/11.
The linear burn area scaling is a good point. Do you have data for the 400 kT average? I think if you have multiple detonations in the vicinity, then you could have burn area outside the burn area that one would calculate for independent detonations. This could be due to combined thermal radiation from multiple fireballs, but also the thermal radiation from multiple surrounding firestorms so it creates one big firestorm. Also, because assuming a linear burn area means small/less dense cities would be targeted, correcting the linear burn area downward by a factor of 2-3 would not decrease the soot production by a factor of 2-3.
There is a fundamental difference between a moving-front fire (conflagration) like a bushfire and a firestorm, where it all burns at once. If you have a moving front, the plume is relatively narrow, so it gets heavily diluted and does not rise very high (also true for an oil well fire). Whereas if you have a large area burning at once, it gets much less diluted and will likely go into the upper troposphere. Then solar lofting typically takes it to the stratosphere. Nagasaki was a moving-front fire, and I do give significant probability mass to moving-front fires instead of firestorms in my analysis.
So overall I got a median of about 30 Tg to the stratosphere (Fig. 6) for a full-scale nuclear war, similar to Luísa’s. I could see some small downward adjustment based on the linear burn area assumption, but significantly smaller than Bean’s adjustment for that factor.
Added 20 September: though the blasted area goes with the 2/3 exponent of the yield because energy is dissipated in the shock wave, the area above the threshold thermal radiation for starting fires would scale linearly with yield if the atmosphere were transparent. In reality, there is some atmospheric absorption, but it would be close to linear. So I no longer think there should be a significant downward adjustment from my model.
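A sketch of the two scaling laws described above (the yields are illustrative: 15 kT is roughly Hiroshima-scale, and 400 kT is the average modern yield discussed earlier in the thread):

```python
# Blast area scales as yield**(2/3); thermal-ignition area scales
# roughly linearly with yield in a transparent atmosphere.
def blast_area_ratio(yield_kt, reference_kt):
    return (yield_kt / reference_kt) ** (2 / 3)

def thermal_area_ratio(yield_kt, reference_kt):
    return yield_kt / reference_kt  # ignores atmospheric absorption

print(round(blast_area_ratio(400, 15), 1))    # 8.9
print(round(thermal_area_ratio(400, 15), 1))  # 26.7
```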