Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; is a Penn State distinguished alumnus; and is a registered professional engineer. He has authored or co-authored 143 publications (>4,800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Commenting on d/acc:
Revenue-Generating Microgrids: Solar and battery systems that profit from energy arbitrage, grid stabilization services, and demand response markets while providing automatic backup power during grid failures. They remain grid-connected for economic optimization but can "island" instantly during disruptions.
While it's possible that batteries get much cheaper, right now they are prohibitively expensive for days' worth of storage. There are low-cost options at large scale, including compressed air energy storage and pumped hydropower, and there may be reasonable-cost versions involving air at smaller scale, such as systems that liquefy air.
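As a rough illustration of that cost point, here is a back-of-the-envelope sketch; both input numbers are my own ballpark assumptions, not figures from the comment.

```python
# Back-of-the-envelope sketch of why multi-day battery storage is expensive.
# Both inputs are assumed ballpark figures, not data from the comment above.
battery_cost_per_kwh = 150        # USD per kWh of lithium-ion pack capacity (assumed)
household_load_kwh_per_day = 30   # rough average household consumption (assumed)

for days in (1, 3, 7):
    capacity_kwh = days * household_load_kwh_per_day
    cost = capacity_kwh * battery_cost_per_kwh
    print(f"{days} day(s) of storage: {capacity_kwh} kWh ≈ ${cost:,.0f}")
# Roughly $4,500 for one day and over $30,000 for a week, per household,
# before inverters, installation, and replacement over the battery's life.
```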
Resilience-Focused Food Systems: Local container farms and seed co-ops that supplement, rather than replace, large-scale agriculture, ensuring a baseline of food security
If by container farms you mean using artificial light, that's very inefficient and expensive.
Multi-Track Career Systems: Scientists maintain portfolios across traditional academia, prediction market validation, open-source contributions, and commercial applications, reducing career risk and institutional dependency.
I think some scientists would, but most would prefer to specialize.
AI-Driven Load Balancing: Sophisticated software manages the complex energy market, predicting demand and seamlessly shifting loads between the central grid, community batteries, and even electric vehicle fleets.
Profitable Energy Storage: Beyond national oil reserves, communities maintain local "energy buffers" like green hydrogen storage or charged battery banks, providing a distributed backup for critical infrastructure while earning revenue from grid services (frequency regulation, peak shaving, voltage support) during normal operations.
I'm a fan of vehicle-to-grid, where vehicles with some form of electric drive can provide grid services, including backup power.
Economic Incentives for Distributed Energy: Technology cost reductions and new revenue streams (grid services, peer-to-peer energy trading, carbon credits, micro-reactors) made distributed systems profitable rather than just environmentally beneficial.
I've done some research on nuclear microreactors, and I think there is potential for isolated areas, like in Alaska where they have to ship diesel in. But I think it's going to be difficult to compete with bulk power.
Pragmatic Integration over Ideological Purity: The most successful projects focused not on going "off-grid" but on creating valuable services for the grid (e.g., selling battery capacity for stabilization), which funded their development.
I agree with this.
Resilience Inequality: Well-resourced communities can afford robust, multi-day backup systems, while poorer regions remain vulnerable to grid failures, creating a stark divide between the "resilient" and the "brittle."
That sounds reasonable.
First, preventing post-AGI autocracy. Superintelligence structurally leads to concentration of power: post-AGI, human labour soon becomes worthless; those who can spend the most on inference-time compute have access to greater cognitive abilities than anyone else; and the military (and whole economy) can in principle be aligned to a single person.
Even if labour becomes worthless, many people own investments, and Foresight Institute has this interesting idea of "Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital."
I think it makes a lot of sense to examine alternate scenarios. Commenting on tool AI:
Nearly every expert interviewed for this project preferred this kind of "Tool AI" future, at least for the near term
This is very interesting, because banning AI agents had little support on my LessWrong survey, and there was only one vote for it out of 39 in the EA Forum survey I ran. To be fair, this implies banning forever, so if it were temporary, there might be more support.
Capital dividend funds: National and regional funds holding equity in AI infrastructure, robotics fleets, and automated production facilities, distributing dividends to citizens as universal basic capital.
I think this is very important because people often point out that humans will not have influence/income if they don't have a labor wage, but they could still have influence/income through ownership of capital.
You mention how poverty would still be a problem. However, I think if AI starts to automate knowledge work, the increased demand for physical jobs should lift most people out of poverty (at least until robots fill nearly all those jobs).
As I learned from someone on this forum, "EAs are not cold and calculating; they are warm and calculating."
I think that in the build-up to ASI, nuclear and pandemic risks would increase, but afterwards they would likely be solved. So let's assume someone is trying to minimize existential risk overall. If one eventually wants ASI (or thinks it is inevitable), the question is when is optimal. If one thinks that the background existential risk not caused by AI is 0.1% per year, and the existential risk from AI is 10% if developed now, then the question is, "How much does existential risk from AI decrease by delaying it?" If one thinks that we can get the existential risk from AI to less than 9% in a decade, then it would make sense to delay. Otherwise it would not make sense to delay.
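A minimal sketch of that arithmetic, using the illustrative numbers above; the way the two risks are combined is my own simplification, not a forecast.

```python
# Sketch of the delay trade-off using the illustrative numbers above.
# How the two risks are combined here is a simplifying assumption.
background_risk_per_year = 0.001   # 0.1%/yr existential risk not caused by AI
ai_risk_if_developed_now = 0.10    # 10% existential risk from AI developed today
delay_years = 10

def total_risk(ai_risk_at_development, years_of_delay):
    # Background risk accumulated while waiting, then AI risk if we survive the wait.
    background = 1 - (1 - background_risk_per_year) ** years_of_delay
    return background + (1 - background) * ai_risk_at_development

develop_now = total_risk(ai_risk_if_developed_now, 0)   # ~10.0%
delay_decade = total_risk(0.09, delay_years)            # ~9.9% if AI risk falls to 9%
print(f"now: {develop_now:.2%}, delayed: {delay_decade:.2%}")
# A decade of delay adds roughly 1% of background risk, so delaying only helps
# if it cuts the AI risk by more than about one percentage point (10% -> <9%).
```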
Individuals should not accelerate AI
Though there is risk of corruption of values, I think the counterfactual impact of a safety-oriented person joining a less safe lab to do safety work is net positive.
We should slow AI down
I think we should weigh reducing AI risk by slowing it down against other continuing sources of X-risk. I'm also concerned about a pause becoming permanent, or increasing risk when unpaused, or only getting one chance to pause. However, if AI progress is much faster than now, I think a pause could increase the expected value of the long-run future.
If there is nuclear war without nuclear winter, there would be a dramatic loss of industrial capability, which would cascade through the global system. However, being prepared to scale up alternatives such as wood-gas-powered vehicles producing electricity would significantly speed recovery time and reduce mortality. I think if there are fewer people killing each other over scarce resources, values would be better, so global totalitarianism would be less likely and bad values locked into AI would be less likely. Similarly, if there is nuclear winter, I think the default is countries banning trade and fighting over limited food. But if countries realized they could feed everyone if they cooperated, I think cooperation is more likely, and that would result in better values for the future.
For a pandemic, I think being ready to scale up disease transmission interventions very quickly, including UV, in-room air filtration, ventilation, glycol, and temporary housing at workplaces, would make the outcome of the pandemic far better. Even if those don't work and there is a collapse of electricity/industry due to the pandemic, again being able to provide backup ways of meeting basic needs like heating, food, and water[1] would likely result in better values for the future.
Then there is the factor that resilience makes collapse of civilization less likely. There's a lot of uncertainty about whether values would be better or worse the second time around, but I think values are pretty good now compared to what we could have, so it seems like not losing civilization would be a net benefit for the long term (and obviously a net benefit for the short term).
[1] Paper about to be submitted.
I agree that flourishing is very important. I have thought since around 2018 that the largest advantage of resilience to global catastrophes for the long-term future is not preventing extinction, but instead increasing flourishing, such as by reducing the chance of other existential catastrophes like global totalitarianism, or by making it more likely that better values end up in AI.
At some point, I had to face the fact that I'd wasted years of my life. EA and rationality, at their core (at least from a predictive perspective), were about getting money and living forever. Other values were always secondary. There are exceptions (Yudkowsky seems to have passed the Ring Temptation test), but they're rare. I tried to salvage something. I gave it one last shot and went to LessOnline/Manifest. If you pressed people even a little, they mostly admitted that their motivations were money and power.
I'm sorry you feel this way. Though I would still disagree with you, I think you mean to say that the part of EA focused on AI has a primary motivation of getting money and living forever. The majority of EAs are not focused on AI, and are instead focused on nuclear, bio risk, global health and development, animal welfare, etc., and they generally are not motivated by living forever. Those who are doing direct work in these areas nearly all do so on low salaries.
I think you make a lot of good points as to why other causes should not have their funding reduced that much. But I didn't see you making the point that nuclear and pandemic risks in particular could increase because of AI, so the case for funding them remains relatively strong. So maybe a compromise is reducing funding for global poverty/animal welfare/climate projects that have long timelines for impact, increasing funding for AI, and maintaining it for nuclear and pandemic? My understanding of what is happening now is that global poverty/animal welfare funding is being maintained, but non-AI X-risk funding has fallen dramatically.
My jobs are associate professor of mechanical engineering at the University of Canterbury, energy efficiency research and policy consultant, and volunteer director at ALLFED. From a selfish perspective, I really like that I do a variety of tasks and get to use my strengths of interdisciplinary understanding, creative problem-solving, and reality checking. Researching a new field means that papers are much less incremental (and therefore more interesting to me), though it is challenging finding journals and funding for a lot of it. New Zealand has a mild climate, so it's great for outdoor activities year-round, and it's relatively safe in relation to mundane risks and global catastrophes. I'm also fortunate to live in a place with a thriving EA presence. From an altruistic perspective, I can focus my research on resilience to global catastrophes, have access to student support, and can afford to donate half my income. Though I don't think it's my primary altruistic impact, it has been rewarding to contribute to small projects that have saved over a power plant's worth of energy. I'd estimate that there are only a few thousand jobs that I could plausibly get that I would overall prefer more, giving a percentile for job preference of ~99.9999%. I'm incredibly grateful for this.
:) though I was talking about whether there could be funding for a high impact project at all, rather than the amount.
In the US, most funding for PhDs is external to the universities. In my experience, internal funding is more open to projects outside the mainstream.
Herring might also be similar to anchovies and sardines in this respect. These small fish are also potentially resilient food sources in catastrophes.
Yes, and one other reason why I can have higher impact in New Zealand is that universities fund PhD students, unlike in the US generally.
I thought the sea level changes were really interesting. Check out this animation of current currents.
To get a pause at any time you have to start asking now. It's totally academic to ask about when exactly to pause and it's not robust to try to wait until the last possible minute. Anyone taking pause advocacy seriously realizes this pretty quickly.
As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it's also possible to advocate for pausing when some threshold or trigger is hit, and not now. It's also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
What you're looking for is permission to stay on this corrupt be-the-problem strategy and it shows.
I personally still have significant uncertainty as to the best course of action, and I understand you are under a lot of stress, but I don't think this characterization is an effective way of shifting people towards your point of view.
Though Carl said that a unilateral pause would be riskier, I'm pretty sure he is not supporting a universal pause now. He said "To the extent you have a willingness to do a pause, it's going to be much more impactful later on. And even worse, it's possible that a pause, especially a voluntary pause, then is disproportionately giving up the opportunity to do pauses at that later stage when things are more important....Now, I might have a different view if we were talking about a binding international agreement that all the great powers were behind. That seems much more suitable. And I'm enthusiastic about measures like the recent US executive order, which requires reporting of information about the training of new powerful models to the government, and provides the opportunity to see what's happening and then intervene with regulation as evidence of more imminent dangers appear. Those seem like things that are not giving up the pace of AI progress in a significant way, or compromising the ability to do things later, including a later pause...Why didn't I sign the pause AI letter for a six-month pause around now?
But in terms of expending political capital or what asks would I have of policymakers, indeed, this is going to be quite far down the list, because its political costs and downsides are relatively large for the amount of benefit, or harm. At the object level, when I think it's probably bad on the merits, it doesn't arise. But if it were beneficial, I think that the benefit would be smaller than other moves that are possible, like intense work on alignment, like getting the ability of governments to supervise and at least limit disastrous corner-cutting in a race between private companies: that's something that is much more clearly in the interest of governments that want to be able to steer where this thing is going. And yeah, the space of overlap of things that help to avoid risks of things like AI coups, AI misinformation, or use in bioterrorism, there are just any number of things that we are not currently doing that are helpful on multiple perspectives, and that are, I think, more helpful to pursue at the margin than an early pause."
So he says he might be supportive of a universal pause, but it sounds like he would rather have it later than now.
They aren't the ones terrified of not having the vision to go for the Singularity, of being seen as "Luddites" for opposing a dangerous and recklessly pursued technology. Frankly they aren't the influential ones.
I see where you are coming from, but I think it would be more accurate to say that you are disappointed in (or potentially even betrayed by[1]) the minority of EAs who are accelerationists, rather than characterizing it as being betrayed by the community as a whole (which is not accelerationist).
[1] Though I think this is too harsh, as early thinking in AI Safety included Bostrom's differential technological development and MIRI's seed (safe) AI; the former is similar to people trying to shape Anthropic's work and the latter could be characterized as accelerationist.
I agree there should be more cost-effectiveness analyses for AI. These two papers give a high-level cost-effectiveness of AI safety for improving the long-run future, rather than evaluating individual orgs (based on Oxford Prioritisation Project work). More recently, there is Rethink's work with a DALYs/$1,000 calculation.