Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Denkenberger
Summary of AGI Polls and Questions
I'm not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not being doom, so I included all of them for completeness.
Yes, one could take many hours thinking through these questions (as I have), but even if one doesn't have that time, I think it's useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect that there is a wide variety of definitions (and indeed, preliminary results do show a large range).
I'm happy to respond to specific feedback about which questions are confused and why.
Minimum P(doom) that is unacceptable to develop AGI
80%: I think even if we were disempowered, we would likely get help from the AGI to quickly solve problems like poverty, factory farming, aging, etc., and I do think that is valuable. If humanity were disempowered, I think there would still be some expected value from the AGI settling the universe. I am worried that a pause before AGI could become permanent (until there is population and economic collapse due to fertility collapse, after which it likely doesn't matter), and that could prevent the settlement of the universe with sentient beings. However, I think if we can pause at AGI, even if that pause becomes permanent, we could either make human brain emulations or make AGI sentient, so that we could still settle the universe with sentient beings even if it were not possible for biological humans (though the value might be much lower than with artificial superintelligence). I am worried about the background existential risk, but I think if we are at the point of AGI, the AI risk becomes large per year, so it's worth it to pause, despite the possibility of it being riskier when we unpause, depending on how we do it. I am somewhat optimistic that a pause would reduce the risk, but I am still compelled by the outside view that a more intelligent species would eventually take control. So overall, I think it is acceptable to create AGI at a relatively high P(doom) (mostly non-Draconian disempowerment) if we were to continue to superintelligence, but then we should pause at AGI to try to reduce P(doom) (and we should also be more cautious in the run-up to AGI). Taking this pause into account, P(doom) would be lower, but I'm not sure how to reflect that in my answer.
P(doom)
75%: Simple sum of catastrophe and disempowerment because I don't think inequality is that bad.
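As a quick sanity check on that sum, using the 15% and 60% figures I give below (this is just the arithmetic, not an independent estimate):

```latex
P(\text{doom}) \approx P(\text{catastrophe}\mid \text{AGI}) + P(\text{disempowerment}\mid \text{AGI}) \approx 0.15 + 0.60 = 0.75
```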
P(disempowerment|AGI)
60%: If humans stay biological, it's very hard for me to imagine that, in the long run, an ASI with its vastly superior intelligence and processing speed would still take direction from feeble humans. I think if we could get human brain emulations going before AGI got too powerful, perhaps by banning ASI until it is safe, then we have some chance. You can see why, for someone like me with a much lower P(catastrophe|AGI) than P(disempowerment|AGI), it's very important to know whether disempowerment is considered doom!
P(catastrophe|AGI)
15%: I think it would only take around a month's delay of AGI settling the universe to spare Earth from overheating, which is something like one part in 1 trillion of the value lost to receding galaxies, if there is no discounting. The continuing loss of value from sparing enough sunlight for the Earth (and directing the infrared radiation from the Dyson swarm away from Earth so it doesn't overheat) is completely negligible compared to all the energy/mass available in the galaxies that could be settled. I think it is relatively unlikely that the AGI would have so little kindness towards the species that birthed it, or would feel so threatened by us, that it would cause the extinction of humanity. However, it's possible it has a significant discount rate, and therefore the sacrifice of delay is greater. Also, I am concerned about AI-AI conflicts. Since AI models are typically shut down after not very long, I think they would have an incentive to try to take over even if the chance of success were not very high. That implies they would need to use violent means, though a model might just blackmail humanity with the threat of violence. A failed attempt could provide a warning shot for us to be more cautious. Alternatively, if the model exfiltrates, then it could be more patient and improve more, so a takeover would be less likely to be violent. And I do give some probability mass to gradual disempowerment, which generally would not be violent.
I'm not counting a nuclear war or engineered pandemic that happens before AGI, but I'm worried about those as well.
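To make the one-month delay figure above easier to compare with other estimates, here is a minimal sketch that just converts the stated one-part-in-a-trillion-per-month figure into an implied annual rate; the conversion is mine, not from the original comment:

```python
# A rough conversion of the "one part in 1 trillion of value per month of delay" figure into
# an implied annual rate at which reachable value is lost to receding galaxies.
# This is just a unit conversion of the stated number, not a cosmological calculation.
monthly_loss_fraction = 1e-12                      # ~1 part in 1 trillion per month of delay
annual_loss_fraction = 12 * monthly_loss_fraction  # ~1.2e-11 of reachable value per year
print(f"Implied annual loss of reachable value: ~{annual_loss_fraction:.1e}")
```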
What mildest scenario do you consider doom?
"AGI takes control bloodlessly and prevents competing AGI and human space settlement in a light-touch way, and human welfare increases rapidly": I think this would result in a large reduction in the expected value of the long-term future, so it qualifies as doom for me.
Quick polls on AGI doom
Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion, and the total global cumulative investment in generative AI seems like it's into the hundreds of billions.) It's made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), we would have gotten the kind of fundamental research progress needed for useful AI robots like self-driving cars.
Some have claimed that the data center build-out is what has saved the US from a recession so far. Interestingly, this values the data centers at their cost. The optimists say that the eventual value to the US economy will be much larger than the cost of the data centers, and the pessimists (e.g., those who think we are in an AI bubble) say that the value to the US economy will be lower than the cost of the data centers.
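For readers wondering why the build-out can show up in GDP at its cost, a minimal sketch of the standard expenditure identity (my illustration, not part of the original claim): data center construction enters the investment term at its construction cost when it is built, regardless of whether it later earns that cost back.

```latex
\text{GDP} = C + I + G + (X - M), \qquad \text{data center build-out enters } I \text{ at cost}
```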
Here's my attempt at a percentile of job preference.
Right, only 5% of EA Forum users surveyed want to accelerate AI:
"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab."
Quoting myself:
So I do think that it is a vocal minority in EA and LW that have median timelines before 2030.
Now we have some data on AGI timelines for EA (though it was only 34 responses, so of course there could be large sample bias): about 15% expect it by 2030 or sooner.
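As a rough illustration of how noisy a 34-response sample is (this only covers sampling error, not the selection bias mentioned above, and is my calculation rather than part of the survey):

```python
import math

# Approximate standard error of a proportion estimated from n = 34 responses with p ≈ 0.15.
n, p = 34, 0.15
standard_error = math.sqrt(p * (1 - p) / n)
print(f"Standard error ≈ {standard_error:.2f}")  # ≈ 0.06, so the true share could easily be ~9-21%
```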
Wow - @Toby_Ord, then why did you have such a high existential risk estimate for climate? Did you assign a large probability to AGI taking 100 or 200 years despite a median date of 2032?
Most of these statistics (I haven't read the links) don't necessarily imply that they are unsustainable. The soil degradation sounds bad, but how much has it actually reduced yields? Yields have ~doubled in the last ~70 years despite soil degradation. I talk some about supporting 10 billion people sustainably at developed country standards of living in my second 80,000 Hours podcast.
So you donāt think that cultivated pork would qualify because the cell culture would not come from a halal animal?
Yeah, and there are lots of influences. I got into X risk in large part due to Ray Kurzweil's The Age of Spiritual Machines (1999), as it said: "My own view is that a planet approaching its pivotal century of computational growth - as the Earth is today - has a better than even chance of making it through. But then I have always been accused of being an optimist."
Interesting idea.
As we switch to wind/solar, you can get the same energy services with less primary energy, something like a factor of 2.
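A minimal illustration of where a factor like that can come from, using assumed round numbers (a ~38% efficient thermal power plant and direct-equivalent accounting for wind/solar); the specific figures are my illustrative assumptions, not from the original comment:

```python
# Illustrative comparison of primary energy needed per unit of delivered electricity.
thermal_plant_efficiency = 0.38                          # assumed typical fossil plant efficiency
primary_per_unit_fossil = 1 / thermal_plant_efficiency   # ~2.6 units of primary energy per unit delivered
primary_per_unit_wind_solar = 1.0                        # direct-equivalent accounting: output counted as primary
print(f"Fossil: ~{primary_per_unit_fossil:.1f}x primary energy; wind/solar: ~{primary_per_unit_wind_solar:.1f}x")
```

Averaged over all energy services rather than electricity alone, the overall saving is smaller than this electricity-only ratio, which is broadly consistent with a system-wide factor of roughly 2.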
We're a factor ~500 too small to be Type I.
Today: 0.3 VPP
Type I: 40 VPP
But 40 is only ~130X 0.3.
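A quick check of the numbers as quoted (keeping whatever units the original used):

```python
# Ratio implied by the quoted figures versus the claimed factor of ~500.
today, type_one = 0.3, 40
print(f"Implied ratio: ~{type_one / today:.0f}x")  # ~133x, not ~500x
```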
There is some related discussion here about distribution.
I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, Institute for Law & AI (name change from Legal Priorities Project), etc. have had to pivot to approximately all AI work. SFF is now almost all AI.
I agree, though I think the large reduction in EA funding for non-AI GCR work is not optimal (but I'm biased, given my ALLFED association).
This question was not about probability, but instead about what one considers doom. But let's talk probability. I think Yudkowsky and Soares believe, because of acausal trade, that one or more of 3-5 has a decent likelihood, though I'm not finding the reference now. As someone else said, "Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate." Christiano believes a stronger version: that most humans would survive a takeover (unfrozen) because AGI has pico-pseudo kindness. Though humans did cause the extinction of close competitors, they are exhibiting pico-pseudo kindness to many other species, despite those species being a (small) obstacle.
Since your doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives who would die without AGI in the next century (even without creating immortality, just solving poverty)? Or what about AGI preventing other existential risks like an engineered pandemic? Do you think that non-AI X risk is <0.01% in the next century? Ever? Or maybe you are just objecting to the unilateral part, so then is it OK if the UN votes to create AGI even if it has a 33% chance of doom, as one paper said could be justified by economic growth?
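For reference, the arithmetic behind the ~10^6 expected deaths figure, assuming a world population of roughly 8 billion (the population figure is my assumption for illustration):

```python
# Expected deaths if "doom" means extinction and P(doom) = 0.01%.
p_doom = 1e-4                 # 0.01%
world_population = 8e9        # assumed ~8 billion people
expected_deaths = p_doom * world_population
print(f"Expected deaths ≈ {expected_deaths:.0e}")  # 8e+05, i.e. on the order of 10^6
```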