Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 134 publications (>4000 citations, >50,000 downloads, h-index = 32, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 200 articles, including in Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
As one who donates 50%, it doesn’t seem like it should be that uncommon. One way I think about it is earning like upper-middle-class, living like middle-class, and donating like upper-class. Tens of percent of people work for tens of percent less money in sectors like nonprofits and governments. And I’ve heard of quite a few non-EAs who have taken jobs for half the money. And yet most people think about donating that large of a percent very differently than taking a job that pays less. I’m still not sure why—other than that it is uncommon or “weird.”
Should we be spending no less on alternate foods than AI now?
Americans give ~4%, not 2%
I second weakening the definition. As someone who cares deeply about future generations, I think it is infeasible to value them equally to people today in terms of actual actions. I sketched out an optimal mitigation path for asteroid/comet impact. Just valuing the present generation in one country, we should do alternate foods. Valuing the present world, we should do asteroid detection/deflection. Once we value hundreds of future generations, we should add food storage and comet detection/deflection, costing many trillions of dollars. But if we value even further into the future, we should take even more extreme measures, like many redundancies. And this is for a very small risk compared to things like nuclear winter and AGI. Furthermore, even if one does discount future generations, if you think we could have many computer consciousnesses in only a century or so, again we should be donating huge amounts of resources to reducing even small risks. I guess one way of valuing future generations equally to the present generation is to value each generation an infinitesimal amount, but that doesn't seem right.
I applaud the explanations of the decisions for the grants and also the responses to the questions. Now that things have calmed down, since the EA Long Term Future Fund team suggested that requests for feedback on unsuccessful grants be made publicly, I am doing that.
My proposal was to further investigate a new cause area: resilience to catastrophes that could disable electricity regionally or globally, such as an extreme solar storm, high-altitude electromagnetic pulses (caused by nuclear detonations), or a narrow-AI computer virus. Since nearly everything depends on electricity, including pulling fossil fuels out of the ground, industrial civilization could grind to a halt. Many people have suggested hardening the grid against these catastrophes, but this would cost tens of billions of dollars. However, preparing to quickly provide food, energy, and communications in a catastrophe would cost much less money and provide much of the present-generation (lifesaving) and far-future (preservation of anthropological civilization) benefits. I have made a Guesstimate model assessing the cost-effectiveness of work to improve long-term future outcomes given one of these catastrophes. Both my inputs and Anders Sandberg's inputs yield >95% confidence that work now on losing electricity/industry is more cost-effective than marginal work on AI safety (the Oxford Prioritisation Project / Owen Cotton-Barratt and Daniel Dewey did the AI section, except that I truncated distributions and made AI more cost-effective). There is also a blank (to avoid anchoring) Guesstimate model.
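As a sketch of what such a Guesstimate-style comparison does, here is a minimal Monte Carlo in Python. The 90% intervals and the `prob_a_beats_b` helper are placeholders I made up for illustration; they are not the actual inputs of either model.

```python
import math
import random

def sample_lognormal(low, high, rng):
    """Sample a lognormal whose 90% interval is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return math.exp(rng.gauss(mu, sigma))

def prob_a_beats_b(ci_a, ci_b, n=100_000, seed=0):
    """Monte Carlo estimate of P(cost-effectiveness of A > B)."""
    rng = random.Random(seed)
    wins = sum(sample_lognormal(*ci_a, rng) > sample_lognormal(*ci_b, rng)
               for _ in range(n))
    return wins / n

# Hypothetical 90% credible intervals for far-future benefit per dollar;
# placeholder numbers, not the Guesstimate model's inputs.
p = prob_a_beats_b((1, 1000), (0.1, 100))
print(f"P(resilience work more cost-effective) ≈ {p:.2f}")
```

The point of the sketch is that the comparison is a probability over the full distributions, not a comparison of point estimates.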
The specific proposal was to buy out of my teaching and/or fund a graduate student to research particularly high value of information relevant projects and submit papers. I think that feedback would be particularly helpful because it is not just about the particular proposal, but also whether the new cause area is worth investigating further.
For more background, see the three papers involving losing electricity/industry: feeding everyone with the loss of industry, providing nonfood needs with the loss of industry, and feeding everyone losing industry and half of sunlight. We are still working on the paper, funded by an EA grant, on the cost-effectiveness from the long-term future perspective of preparing for these catastrophes, so input can influence that paper.
Possible way of reducing great power war probability?
Multiple high-impact PhD student positions
ALLFED has nearly completed our prioritization, and given the amount of commercialization that has already been done on resilient foods, we think we are ready to partner with other companies to pilot the most promising solutions in a way that is valuable for global catastrophes (e.g. very fast construction). Repurposing a paper mill to produce sugar (and protein, if the feedstock is agricultural residues) is a good large project. There is also fast construction of a pilot-scale natural gas (methane) single cell protein plant, and fast construction of a pilot-scale hydrogen single cell protein plant (splitting water or gasifying a solid fuel such as biomass). Furthermore, there is the backup global radio communication system, which would be extremely useful in loss-of-electricity scenarios.
I think there still is quite a bit of research to be done, especially analyzing cooperation scenarios and the potential of resilient food production by country. This could help inform country-level response plans. This could be facilitated by setting up a research institute on resilient foods. Another possibility is running an X prize for radical new solutions.
Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?
I am skeptical and would like to see the math on standard deviations. For the US, according to this, about one third of Nobel Prizes were awarded to people who did their undergraduate degree at a non-top-100 global university (and I'm pretty sure a majority did theirs outside the global top 20 universities located in the US). And you don't have to win a Nobel Prize to become an EA! So I think there is a lot of potential talent for EA outside the global top 100, at least at the undergraduate level. A key factor here is size: many of the most elite schools are not very big. For instance, the honors college at Penn State has similar SAT scores to Princeton, and about half as many undergrads as Princeton. At the graduate level, I think talent tends to concentrate more, but I still think there is significant talent outside the global top 100.
(Edit: Penn State honors college is larger than Swarthmore.)
Thanks for considering ALLFED. We try to respond to inquiries quickly. We have looked back and have not been able to locate any such inquiries. We will be finalizing our 2020 report with financial details soon.
Thanks a lot for the engagement in the cost-effectiveness model. To clarify, the cost of preparation does not include the scale up in a catastrophe. The idea is that the resilient foods (we are rebranding away from “alternative foods”) could be scaled up without large-scale preparation (e.g. countries would repurpose the paper factories to produce food after the catastrophe, rather than spending billions of dollars ahead of time). Most of the promising resilient foods have already been commercialized. In this paper, we found that if there were no resilient foods, expenditure on stored foods in a catastrophe would be approximately $90 trillion and about 10% of people would survive. However, if resilient foods could be produced at $2.5 per dry kilogram retail, 97% of people would survive but the total expenditure would only be ~$20 trillion. So one could argue that resilient foods would actually save money in a catastrophe. But we did not include that effect in the cost-effectiveness model.
I expect that affecting a large amount of the Earth’s future impact (i.e., 3 to 50% of the future impact of humanity) would be very hard even in extreme circumstances.
Just to make sure we are on the same page, if there were a 10% probability of full-scale nuclear war in the next 30 years and there were a 10% reduction in the long-term future potential of humanity given nuclear war, and if planning and R&D for resilient foods mitigated the far future impact of nuclear war by 50%, then that would improve the long-term potential of humanity by 0.5 percentage points (the product of the three percentages).
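The arithmetic in that framing is just the product of the three percentages:

```python
p_war = 0.10           # probability of full-scale nuclear war in the next 30 years
loss_given_war = 0.10  # reduction in long-term potential given nuclear war
mitigation = 0.50      # fraction of that far-future impact averted by resilient foods

improvement = p_war * loss_given_war * mitigation
print(f"{improvement:.3%}")  # 0.500% of humanity's long-term potential
```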
Nice piece! Though this does not work for all longtermist interventions, some find it motivating that AGI safety, alternative foods, and interventions for losing electricity/industry (and probably other interventions) likely save lives in the present generation more cost-effectively than GiveWell top charities. This book argues that doing more to mitigate catastrophes can be justified by concerns of the present generation.
David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop)
[Question] Remote local group leaders?
EA is overwhelmingly white, male, upper-middle-class, and of a narrow range of (typically quantitative) academic backgrounds.
Though these characteristics are overrepresented in EA, I think one should be careful about claiming overall majorities. According to the 2020 EA survey, EA is 71% male and 76% white. I couldn't quickly find the actual distribution of EA income, but eyeballing some graphs here and using a $100,000 household income threshold (say $60,000 individual income) and a $600k household upper bound (upper class is roughly the top 1% of earners), I would estimate around one third of EAs would be upper middle class now. But I think your point was that they came from an upper-middle-class background, which I have not seen data on. I would still doubt it would be more than half of EAs, so let's be generous and use that. Using your list above of analytic philosophy, mathematics, computer science, or economics, that is about 53% of EAs (2017 data, so probably lower now). If these characteristics were all independent, their product would indicate that about 14% of EAs have all of them. There is likely positive correlation between these characteristics, but by definition the joint share can't exceed the 50% upper-middle-class figure, even if everyone in that group were also male, white, and from those majors.
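The independence calculation and the upper bound can be checked in a few lines, using the survey figures quoted above:

```python
# Shares of EAs with each characteristic (2020 survey for sex/race,
# a generous 50% assumption for upper-middle-class background,
# 2017 survey data for the listed majors).
male, white, upper_middle, majors = 0.71, 0.76, 0.50, 0.53

# Joint share if the characteristics were independent.
independent_product = male * white * upper_middle * majors
print(f"{independent_product:.0%}")  # ≈ 14%

# Positive correlation raises the joint share, but it can never exceed
# the smallest single share (here the 50% upper-middle-class assumption).
upper_bound = min(male, white, upper_middle, majors)
print(f"upper bound: {upper_bound:.0%}")
```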
How you can save expected lives for $0.20-$400 each and reduce X risk
[Paper] Interventions that May Prevent or Mollify Supervolcanic Eruptions
Alliance to Feed the Earth in Disasters (ALLFED) Progress Report & Giving Tuesday Appeal
US suburbs may have a lot of building mass in aggregate, but it’s also really spread out and generally doesn’t contain that much which is likely to draw nuclear attack.
There are only 55 metropolitan areas in the US with a population greater than 1 million. Furthermore, the mostly steel/concrete city centers are generally not very large, so even a nuclear weapon targeted at the city center would burn a significant amount of suburbs. So with 1500 nuclear weapons used countervalue, even spread across NATO, a lot of the area hit would be suburbs.
Yeah, sorry, I’ve heard enough crying wolf on this (Sagan on Kuwait being the most prominent) that I don’t buy it, at least not until I see good validation of the models in question on real-world events. Which is notably lacking from all of these papers. So I’ll take the best analog, and go from there. Also, note that your cite there is from 1990, when computers were bad and Kuwait hadn’t happened yet.
“As Toon, Turco, et al. (2007) explained, for fires with a diameter exceeding the atmospheric scale height (about 8 km), pyro-convection would directly inject soot into the lower stratosphere.” Another way of getting at this is to look at the maximum height of buoyant plumes, which scales with thermal power raised to the one-quarter power. The Kuwait oil fires were between 90 MW and 2 GW, whereas firestorms could be ~three orders of magnitude more powerful than the biggest Kuwait oil fire, which implies much higher lofting. Furthermore, volcanoes have very high thermal power, and they regularly reach the stratosphere directly.
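A rough sketch of the relative plume rise under the one-quarter-power scaling, using the ~three-orders-of-magnitude power ratio mentioned above (the exact powers are illustrative):

```python
# Buoyant plume rise scales roughly with thermal power to the 1/4 power,
# so relative heights can be compared without knowing the prefactor.
kuwait_fire_w = 2e9   # largest Kuwait oil fire, ~2 GW
firestorm_w = 2e12    # ~three orders of magnitude larger (illustrative)

relative_height = (firestorm_w / kuwait_fire_w) ** 0.25
print(f"plume rise ~{relative_height:.1f}x higher")  # ≈ 5.6x
```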
Also note that the doommonger’s best attempt to puzzle stratospheric soot out of atmospheric data from WWII didn’t really show more than a brief gap at most.
I don’t see this as a significant update, because the expected signal was small compared to the noise.
I think a geometric mean would be more appropriate, so (48*468)^0.5 = 150. But I disagree with a number of the inputs.
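For reference, the geometric mean works out as:

```python
import math

# Geometric mean of the two estimates (48 and 468 from the thread above).
geo_mean = math.sqrt(48 * 468)
print(round(geo_mean))  # ≈ 150
```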
Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuclear weapons that are not currently deployed could become so. Also, in an all-out nuclear war, currently nondeployed nuclear weapons could be used (with some delay). Furthermore, even if only two-thirds as many nuclear weapons are used, the amount of soot would not scale down linearly, because the weapons would hit areas with higher average combustible loading.
I agree that targeting would likely not maximize burned material, and I consider that in my Monte Carlo analysis.
While it is true that most city centers have a higher percentage of steel and concrete than Hiroshima did, at least in the US, suburbs are still built of wood, and that is the majority of overall building mass. So I don't think the overall flammability is that much different. There has also been the counteracting factor of much more building area per person, and taller average buildings in cities. Of course steel buildings can still burn, as shown by 9/11.
The linear burn area scaling is a good point. Do you have data for the 400 kT average? I think if you have multiple detonations in the vicinity, then you could have burning outside the area one would calculate for independent detonations. This could be due to combined thermal radiation from multiple fireballs, but also to thermal radiation from multiple surrounding firestorms merging into one big firestorm. Also, because assuming a linear burn area means smaller/less dense cities would be targeted, correcting the linear burn area downward by a factor of 2-3 would not decrease the soot production by a factor of 2-3.
There is a fundamental difference between a moving-front fire (conflagration), like a bushfire, and a firestorm, where it all burns at once. If you have a moving front, the plume is relatively narrow, so it gets heavily diluted and does not rise very high (also true for an oil well fire). Whereas if you have a large area burning at once, the plume gets much less diluted and will likely reach the upper troposphere. Then solar lofting typically takes it to the stratosphere. Nagasaki was a moving-front fire, and I do give significant probability mass to moving-front fires instead of firestorms in my analysis.
So overall I got a median of about 30 Tg to the stratosphere (Fig. 6) for a full-scale nuclear war, similar to Luísa’s. I could see some small downward adjustment based on the linear burn area assumption, but significantly smaller than Bean’s adjustment for that factor.
Added 20 September: though the blasted area scales with the 2/3 power of yield because energy is dissipated in the shock wave, the area above the threshold thermal radiation for starting fires would scale linearly with yield if the atmosphere were transparent. In reality, there is some atmospheric absorption, but the scaling would be close to linear. So I no longer think there should be a significant downward adjustment from my model.
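A quick comparison of the two scalings, using illustrative yields (15 kt Hiroshima-scale vs. a 400 kt modern warhead; the helper names are mine):

```python
# Blast area scales as yield^(2/3) because energy is dissipated in the
# shock wave. Thermal-ignition area scales ~linearly with yield: for a
# transparent atmosphere the ignition range goes as sqrt(yield), so the
# area goes as yield (atmospheric absorption makes this only approximate).
def blast_area_ratio(y1_kt, y2_kt):
    return (y2_kt / y1_kt) ** (2 / 3)

def thermal_area_ratio(y1_kt, y2_kt):
    return y2_kt / y1_kt

print(f"blast area:   {blast_area_ratio(15, 400):.1f}x")
print(f"thermal area: {thermal_area_ratio(15, 400):.1f}x")
```

The gap between the two ratios is why assuming blast-area scaling instead of (near-)linear thermal scaling substantially understates the burned area of higher-yield weapons.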