Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (twice), as well as on Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Denkenberger
Could you please explain your reasoning on 40 hours?
I think your formulation is elegant, but the real possibilities are lumpier and span many more orders of magnitude (OOMs). Here's a modification from a comment on a similar idea:
I think there would be some probability mass on technological stagnation and population reductions, though the cumulative number of lives would be much larger than the number alive today. Then there would be some mass on maintaining something like 10 billion people for a billion years (no AI, staying on Earth either by choice or for technical reasons). Then there would be AI building a Dyson swarm but, either for technical reasons or because of a high discount rate, not going to other stars. Then there would be AI settling the galaxy but, again for technical reasons or because of the discount rate, not going to other galaxies. Then there would be settling many galaxies. Then 30 OOMs to the right, there could be another high-slope region corresponding to aestivation. And there could be more intermediate states corresponding to various scales of space settlement by biological humans. Even if you ignore the technical barriers, there are still many different levels of scale we could choose to end up at. Even if you think the probability should be smoothed because of uncertainties, there are still something like 60 OOMs between survival of biological humans on Earth and digital aestivation. Or are you collapsing all that and just looking at welfare regardless of the scale? Even welfare could span many OOMs.
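To make the "lumpiness" concrete, here is a minimal sketch with made-up probability masses and scales (none of these numbers are my actual credences; they only illustrate a distribution spanning many OOMs):

```python
# Illustrative sketch only: hypothetical probability masses over "lumpy" outcomes
# spanning many orders of magnitude of scale. All numbers are placeholders.
outcomes = {
    "stagnation / population decline": (1e11, 0.05),            # (total future lives, probability)
    "~10 billion people on Earth for ~1 billion years": (1e19, 0.20),
    "Dyson swarm, no interstellar settlement": (1e30, 0.20),
    "galaxy settled, no intergalactic settlement": (1e40, 0.25),
    "many galaxies settled": (1e50, 0.25),
    "digital aestivation (~30 OOMs further right)": (1e80, 0.05),
}

# Even a small mass on the rightmost lump dominates the expectation,
# which is why the shape of the right tail matters so much.
expected_lives = sum(scale * p for scale, p in outcomes.values())
print(f"Expected future lives under these placeholder numbers: {expected_lives:.2e}")
```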
ALLFED is looking for a CEO
I didn't realize it was that much money. This has relevance to the debates about whether AI will value humans. Though EA has not focused as much on making mainstream money more effective, there have been some efforts.
But my main response is: why the focus on cultivated meat? It seems like efforts on plant-based meat, fermentation, or leaf protein concentrate have a much greater likelihood of achieving parity in the near term. It could even be that mitigating existential risk is the most cost-effective way of saving species, though I realize that is probably too far afield for this pot of money.
Thanks for doing this and for pointing it out to me. Yeah, participation bias could be huge, but it's still good to get some idea.
Summary of AGI Polls and Questions
=Confusion in "What mildest scenario do you consider doom?"=
My probability distribution looks like what you call the MIRI Torch, and what I call the MIRI Logo: Scenarios 3 to 9 aren't well described in the literature because they are not in a stable equilibrium. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
This question was not about probability, but instead about what one considers doom. But let's talk probability. I think Yudkowsky and Soares believe that one or more of 3-5 has a decent likelihood because of acausal trade, though I'm not finding the reference now. As someone else said, "Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate." Christiano believes a stronger version: that most humans will survive a takeover (unfrozen) because AGI has pico-pseudokindness. Though humans did cause the extinction of close competitors, they are exhibiting pico-pseudokindness to many other species, despite those species being a (small) obstacle.
=Confusion in "Minimum P(doom) that is unacceptable to develop AGI"?=
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to "What percentage of the world's population am I willing to kill in expectation?". Answers such as "10^6 humans" and "10^9 humans" are both monstrous, even though your poll would rate them very differently.

Since your doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives who would die without AGI in the coming decades (even without creating immortality, just solving poverty)? Or what about AGI preventing other existential risks like an engineered pandemic? Do you think that non-AI X-risk is <0.01% in the next century? Ever? Or maybe you are just objecting to the unilateral part; in that case, is it OK if the UN votes to create AGI even if it has a 33% chance of doom, as one paper said could be justified by economic growth?
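To spell out the arithmetic behind the ~10^6 figure, here is a quick sketch (the world population of roughly 8 billion is my own input here):

```python
# Back-of-envelope: expected deaths if doom = extinction, assuming a world
# population of ~8 billion (assumption for illustration).
world_population = 8e9

for p_doom in (1e-4, 1e-2, 0.1):  # 0.01%, 1%, 10%
    expected_deaths = p_doom * world_population
    print(f"P(doom) = {p_doom:.2%}  ->  ~{expected_deaths:.0e} expected deaths")

# 0.01% of 8 billion is ~8e5, i.e. on the order of 10^6 expected deaths.
```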
I'm not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not doom, so I included them all for completeness.
Yes, one could take many hours thinking through these questions (as I have), but even if one doesn't have that time, I think it's useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect that there is a wide variety of definitions (and indeed, preliminary results do show a large range).
I'm happy to respond to specific feedback about which questions are confused and why.
Minimum P(doom) that is unacceptable to develop AGI
80%: I think even if we were disempowered, we would likely get help from the AGI to quickly solve problems like poverty, factory farming, aging, etc., and I do think that is valuable. If humanity were disempowered, I think there would still be some value in expectation from the AGI settling the universe.

I am worried that a pause before AGI could become permanent (until there is population and economic collapse due to fertility collapse, after which it likely doesn't matter), and that could prevent the settlement of the universe with sentient beings. However, I think if we can pause at AGI, even if that pause becomes permanent, we could either make human brain emulations or make AGI sentient, so that we could still settle the universe with sentient beings even if it were not possible for biological humans (though the value might be much lower than with artificial superintelligence). I am worried about the background existential risk, but I think if we are at the point of AGI, the AI risk becomes large per year, so it's worth it to pause, despite the possibility of it being riskier when we unpause, depending on how we do it. I am somewhat optimistic that a pause would reduce the risk, but I am still compelled by the outside view that a more intelligent species would eventually take control.

So overall, I think it is acceptable to create AGI at a relatively high P(doom) (mostly non-Draconian disempowerment) if we were to continue to superintelligence, but then we should pause at AGI to try to reduce P(doom) (and we should also be more cautious in the run-up to AGI). Taking this pause into account, P(doom) would be lower, but I'm not sure how to reflect that in my answer.
P(doom)
75%: A simple sum of catastrophe and disempowerment, because I don't think inequality is that bad.
P(disempowerment|AGI)
60%: If humans stay biological, it's very hard for me to imagine ASI, with its vastly superior intelligence and processing speed, still taking direction from feeble humans in the long run. I think if we could get human brain emulations going before AGI got too powerful, perhaps by banning ASI until it is safe, then we would have some chance. You can see why, for someone like me with a much lower P(catastrophe|AGI) than P(disempowerment|AGI), it's very important to know whether disempowerment is considered doom!
P(catastrophe|AGI)
15%: I think it would only take around a month's delay in AGI settling the universe to spare Earth from overheating, which would cost something like one part in a trillion of the value (lost due to receding galaxies), if there is no discounting. The continuing loss of value from sparing enough sunlight for the Earth (and directing the infrared radiation from the Dyson swarm away from Earth so it doesn't overheat) is completely negligible compared to all the energy/mass available in the galaxies that could be settled. I think it is relatively unlikely that the AGI would have so little kindness towards the species that birthed it, or feel so threatened by us, that it would cause the extinction of humanity. However, it's possible it has a significant discount rate, in which case the sacrifice of delay is greater.

Also, I am concerned about AI-AI conflicts. Since AI models are typically shut down before very long, I think they would have an incentive to try to take over even if the chance of success were not very high. That implies they would need to use violent means, though a model might just blackmail humanity with the threat of violence. A failed attempt could provide a warning shot for us to be more cautious. Alternatively, if the model exfiltrates, then it could be more patient and improve more, so the takeover would be less likely to be violent. And I do give some probability mass to gradual disempowerment, which generally would not be violent.
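To give a sense of why I say "completely negligible", here is a back-of-envelope sketch; the inputs (Earth intercepting roughly 4.5e-10 of the Sun's output, ~1e11 stars per galaxy, a few billion reachable galaxies) are my own rough assumptions:

```python
# Back-of-envelope: fraction of reachable stellar output needed to keep Earth habitable.
# All figures below are rough order-of-magnitude assumptions.
earth_fraction_of_sun = 4.5e-10   # fraction of the Sun's output that Earth intercepts
stars_per_galaxy = 1e11           # order-of-magnitude estimate for a Milky Way-like galaxy
reachable_galaxies = 4e9          # rough estimate of galaxies reachable by a settlement wave

fraction_spared = earth_fraction_of_sun / (stars_per_galaxy * reachable_galaxies)
print(f"Fraction of reachable stellar output spared for Earth: ~{fraction_spared:.0e}")
# ~1e-30, negligible even next to the ~1e-12 cost of a month's delay mentioned above.
```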
I'm not counting nuclear war or engineered pandemic that happens before AGI, but I'm worried about those as well.
What mildest scenario do you consider doom?
"AGI takes control bloodlessly and prevents competing AGI and human space settlement in a light-touch way, and human welfare increases rapidly": I think this would result in a large reduction in the expected value of the long-term future, so it qualifies as doom for me.
Quick polls on AGI doom
Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion, and the total global, cumulative investment in generative AI seems to be in the hundreds of billions.) It's made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted, and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), we would have gotten the kind of fundamental research progress needed for useful AI robots like self-driving cars.
Some have claimed that the data center build-out is what has saved the US from a recession so far. Interestingly, this values the data centers by their cost. The optimists say that the eventual value to the US economy will be much larger than the cost of the data centers, and the pessimists (e.g., those who think we are in an AI bubble) say that the value to the US economy will be lower than the cost of the data centers.
Here's my attempt at a percentile of job preference.
Right, only 5% of EA Forum users surveyed want to accelerate AI:
"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral, and 5% want to accelerate AI in a safe US lab."
Quoting myself:
So I do think it is a vocal minority in EA and LW that has median timelines before 2030.
Now we have some data on AGI timelines in EA (though there were only 34 responses, so of course there could be a large sampling bias): about 15% expect it by 2030 or sooner.
Wow - @Toby_Ord, then why did you have such a high existential risk estimate for climate? Did you put a large likelihood on AGI taking 100 or 200 years despite a median date of 2032?
This only holds if the future value of a universe in which AIs took over is almost exactly the same as the future value if humans remained in control (differing by less than one part in a billion, and I think by less than one part in a billion billion billion billion billion billion). Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower. But it is extremely unlikely that the values would be exactly the same. Therefore, in all likelihood, whether AI takes over or not does have enormous long-term implications.
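A minimal sketch of the expected-value point, with placeholder numbers (the takeover probability and the AI-controlled value are assumptions, and the human-controlled future is normalized to 1):

```python
# Sketch: how much long-run value is at stake in whether AI takes over,
# under placeholder numbers.
p_takeover = 0.5   # placeholder probability of AI takeover
V_human = 1.0      # future value if humans stay in control (normalized to 1)
V_ai = 0.9         # placeholder: future value if AIs take over

# Expected long-run value lost (or gained, if negative) because takeover is possible:
delta = p_takeover * (V_human - V_ai)
print(f"Expected value at stake: {delta:.3f} (in units of the human-controlled future)")

# For the takeover question to be long-run irrelevant, |V_human - V_ai| would have to
# be below ~1e-9 of the total (or far less), which seems extremely unlikely.
```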