Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 143 publications (>4,800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Denkenberger
Thanks for the thoughtful response!
As soon as I have money to save, I am going to hedge against job automation by investing in AGI stock. This will offer better financial protection compared to holding cash.
(Not investment advice) sounds reasonable, though some diversification may be prudent. There was an interesting discussion on LessWrong here.
How much money should we be saving for retirement?
A few of us at ALLFED (myself, @jamesmulhall, and others) have been thinking about response planning for essential (vital) workers in extreme pandemics. Our impression is that there's a reasonable chance we will not be prepared for an extreme pandemic if it happens, so we should have back-up plans in place to keep basic services functioning and prevent collapse. We think this is probably a neglected area that more people should be working on, and we're interested in whether others think this is likely to be a high-impact topic. We decided to compare it to a standard, evidence-backed intervention to protect the vital workforce that is already receiving funding from EA: stockpiling of pandemic-proof PPE (P4E).
We asked Squiggle AI to create two cost-effectiveness analyses comparing stockpiling P4E vs. research and planning to rapidly scale up transmission-reducing interventions (e.g., UV) after an outbreak to keep essential workers safe. Given that the additional costs of both interventions could be significantly lowered by influencing funding governments have already allocated to stockpiling/response planning, we ran the model with (linked here) and without (linked here) a message instructing it to only consider the costs of philanthropic funding.
Summary results:
- Considering all spending, research and planning is estimated to be 34 (8.5–140) times as cost-effective as stockpiling.
- Considering only philanthropic spending, research and planning is estimated to be 47 (23–100) times as cost-effective as stockpiling.
We did not feed any numbers into the model, but the ones it self-generated seemed reasonably sensible (e.g., Kevin Esvelt's quote of $20 billion for stockpiling adequate PPE for the US falls within the model's $4–20 billion estimate).
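To make the structure of the comparison concrete without opening the Squiggle models, here is a minimal Monte Carlo sketch in Python. This is not the actual Squiggle AI model: the lognormal_from_90ci helper, the beta-distributed effectiveness terms, and all numbers other than the $4–20 billion PPE stockpile range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal distribution given a 90% credible interval (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Illustrative placeholder inputs (NOT the figures from the linked Squiggle models).
# Costs are in billions of dollars of pre-pandemic spending; "effect" is the
# probability the intervention prevents collapse of civilization in an extreme pandemic.
stockpile_cost   = lognormal_from_90ci(4, 20, N)      # $4-20B PPE stockpile range
planning_cost    = lognormal_from_90ci(0.05, 0.5, N)  # assumed research/planning cost
stockpile_effect = 0.2 * rng.beta(2, 8, N)            # assumed effectiveness
planning_effect  = 0.3 * rng.beta(2, 8, N)            # assumed effectiveness

# Cost-effectiveness: probability of preventing collapse per billion dollars spent.
ce_stockpile = stockpile_effect / stockpile_cost
ce_planning  = planning_effect / planning_cost
ratio = ce_planning / ce_stockpile

print(f"mean ratio:   {ratio.mean():.1f}")
print(f"median ratio: {np.median(ratio):.1f}")
print(f"90% interval: {np.percentile(ratio, 5):.1f} to {np.percentile(ratio, 95):.1f}")
```

The design mirrors the prompt below: cost-effectiveness is defined as the probability of preventing civilizational collapse per (pre-pandemic) dollar, and the headline output is the distribution of the ratio of the two interventions' cost-effectiveness.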
Prompt:
Create a cost-effectiveness analysis comparing two interventions to keep US essential workers safe in a pandemic with extremely high transmissibility and fatality rates. Assess the interventions on the probability they are successful at preventing the collapse of civilization. Only include money spent before the pandemic happens as there will be plenty of money available for implementation after it starts.
1: Stockpiling elastomeric half mask respirators and PAPRs before the extreme pandemic.
2: Researching and planning to scale up transmission reduction interventions rapidly after the pandemic starts, including workplace adaptations, indoor air quality interventions (germicidal UV, in-room filtration, ventilation), isolation of workers in on-site housing, and contingency measures for providing basic needs if infrastructure fails.
Outputs:
- narrative and explanations of the logic behind all of the numbers used
- ranges of costs for the two options
- ranges of effectiveness for the two options
- cost-effectiveness for the two options
- mean and median ratios of cost effectiveness of planning vs stockpiling
- distribution plots of the cost effectiveness of planning vs stockpiling
Optional message:
Important: only account for philanthropic funding costs to make these interventions happen. Assume that governments already have pandemic preparedness funding allocated for stockpiles and response planning. This may reduce philanthropic costs if stockpiling interventions can redirect government purchases from disposable masks to more protective elastomeric respirators/PAPRs or if research and planning interventions can add their recommendations to existing government frameworks to prepare essential industries for disasters.
I found these visualizations very helpful! I think of AGI as the top of your HLAI section: human-level in all tasks. Life 3.0 claimed that just being superhuman at AI coding would already be very risky (via recursive self-improvement, RSI). But it seems to me it would need to be roughly human-level at some other tasks as well, like planning and deception, to be that risky. Still, that could be relatively narrow overall.
Instead of framing priorities this way, I believe it would be valuable for more people to adopt a mindset that assumes transformative AI is likely coming and asks: What should we work on in light of that?
For climate change, I think it means focusing on the catastrophes that could plausibly happen in the next couple of decades, such as coincident extreme weather on multiple continents or the collapse of the subpolar gyre. So adaptation becomes relatively more important than emissions reductions. Since AI probably makes nuclear conflict and engineered pandemics more likely, the importance of these fields may still be similar, but you would likely move away from long-payoff actions like field building, and maybe from things that are likely to take a long time, such as major arsenal reductions or wide adoption of germicidal UV or enhanced air filtration. Instead, one might focus on reducing the chance of nuclear war, especially given AI-enabled systems or AI-induced tensions, or on increasing resilience to nuclear war. On the pandemic side, this of course means reducing the risk that AI enables pandemics, but also near-term actions that could have a significant impact, like early warning for pandemics or rapid scale-up of resilience measures post-outbreak.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are even more worried about, say, infohazardous-bioengineering-technique-#5 than they are about mirror life. Just being able to use mirror life as an example seems much healthier than having zero concrete examples and everything shrouded in secrecy.
I think it's true that a lot of topics are not discussed because of concerns about info hazards. But I do think we already had some concrete examples, such as some hotly debated gain-of-function cases, considering the possibility of something as infectious as measles but as fatal as rabies, or Myxomatosis killing 99% of rabbits.
Thanks for all you two do! If you don't mind me asking, how does the return on your investments factor in? E.g., is the negative savings offset by returns such that your net worth is not falling?
ALLFED's 2024 Highlights
I like how comprehensive this is.
Note that a one-in-a-millennium eruption together with a once-in-a-century plague, like the Plague of Justinian, still wasn't enough to cause existential risk (humans aren't extinct yet), though the ensuing little ice age could arguably be categorized as a catastrophic risk.
Minor, but existential risk includes more than extinction. So it could be "humans haven't undergone an unrecoverable collapse yet (or some other way of losing future potential)."
I guess as long as there is another category (like "other"), it's OK. But I believe one EAG exit survey didn't have another category, so one person I heard from felt excluded.
I donāt know of any good data source on how many people are currently earning to give, but our internal data at GWWC suggests it could be at least 100s (also depending on which definition you use)
According to the 2022 EA survey, 335 of the 3,270 people who answered were earning to give, about 10%. Since there are a lot more EAs than 3,270, I think the true number is more like a thousand people earning to give. But it's true they might not be using the 80k definition:
80k's current definition: "We say someone is earning to give when they:
Work a job thatās higher earning than they would have otherwise but that they believe is morally neutral or positive
Donate a large fraction of the extra earnings, typically 20-50% of their total salary
Donate to organisations they think are highly effective (i.e. funding-constrained organisations working on big, neglected global problems)"
I agree with you that it should not have to be a different job, but I disagree that 20% is too low. There are many (most?) EAs who do not have a direct high-impact career or do a lot of high-impact volunteering. So roughly the other way of having impact is earning to give, and if people can give 10%, I think that should qualify.
That's interesting to think about the transition from early agriculture/pastoralism to pre-industrial society. The analyses I've seen focus on just recovering agriculture and/or industry. Do you think the stages in between could be a significant bottleneck, or would they just take time? Not a peer-reviewed study, but there were some estimates of future recovery times here.
That's very helpful. Do you have a rough idea of the proportions within creating a better future, e.g. climate, nuclear, bio, and AI?
33% of participants have transitioned into high-impact careers, including joining Charity Entrepreneurship and roles in AI Safety;
That is amazingly high!
These are very exciting! Could you say more about the foci of these orgs within EA cause areas?
Existential catastrophe, annual 0.30% 20.04% David Denkenberger, 2018
Existential catastrophe, annual 0.10% 3.85% Anders Sandberg, 2018

You mentioned how some of the risks in the table were for extinction rather than existential risk. However, the above two were for the reduction in long-term future potential, which could include trajectory changes that do not qualify as existential risk, such as slightly worse values getting locked in by AI. Also, another source using this definition was the 30% reduction in long-term potential from 80,000 Hours' earlier version of this profile. By the way, the source attributed to me was based on a poll of GCR researchers; my own estimate is lower.
The conventional wisdom is that a crisis like this leads to a panic-neglect cycle, where we oversupply caution for a while, but can't keep it up. This was the expectation of many people in biosecurity, with the main strategy being about making sure the response wasn't too narrowly focused on a re-run of Covid, instead covering a wide range of possible pandemics, and that the funding was ring-fenced so that it couldn't be funnelled away to other issues when the memory of this tragedy began to fade. But we didn't even see a panic stage: spending on biodefense for future pandemics was disappointingly weak in the UK and even worse in the US.
Have you seen data on spending for future pandemics before COVID and after?
We do not claim to be an x-risk cause area.
I think it's reasonable that biodiversity loss is unlikely to be an existential risk. However, existential risks could significantly impact biodiversity. Abrupt sunlight reduction scenarios such as nuclear winter could cause extinction of species in the wild, which could potentially be mitigated by keeping the species alive in zoos if there were sufficient food. These catastrophes, plus others that disrupt infrastructure, such as an extreme pandemic making people too fearful to show up to work in critical industries, could lead to desperate people hunting species to extinction. But I think the biggest threat is AGI, which could wipe out all biodiversity. Then again, if AGI goes well, it may be able to resurrect extinct species. So it could be that the most cost-effective way of preserving biodiversity is working on AGI safety.
Minor point, but I've seen "big tent EA" as referring to applying effectiveness techniques to any charity. Then maybe broad current EA causes could be called the middle-sized tent, just GCR/longtermism could be called the small tent (which 80k already largely pivoted to years ago, at least considering their impact multipliers), and just AI could be the very small tent.