Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; is a Penn State distinguished alumnus; and is a registered professional engineer. He has authored or co-authored 156 publications (>5,600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Denkenberger
Some people are concerned about AI x-risk, and they have P(doom)s in the 5–25% range. I don't get that. I can't pass an Ideological Turing Test for someone who sees all these problems but still expects us to avert extinction with >75% probability. I don't understand what would lead one to believe that this is what things look like when we're on track to solving a problem.
P(doom) does not necessarily equal extinction. Paul Christiano had (in 2023) P(AI takeover) at 22% and P(most humans die from takeover) at 11% (plus other routes to most people dying). But he has much lower probabilities of extinction, due to pico-pseudo kindness, acausal trade, etc.
Outside of EA, when people get rich I doubt there are a bunch of charity lobbyists breathing down their necks?
Yes, there are. This is the high net worth individual strategy that so many charities use (one of my universities even had a mini course on how to do it).
Do we need a scared reaction option on the EA Forum?
Microalgae is fairly expensive, so I think macroalgae is more promising; most of it is low protein, but there are high-protein varieties. Leaf protein concentrate (e.g. Leaft) seems promising as well.
Plant-based meat prices per pound are based on frozen and refrigerated plant-based meat subcategories from SPINS year ending 12/1/24. Animal-based meat prices per pound are based on data for fresh meat subcategories from the Circana year ending Dec. 2024.
Fresh meat typically costs more, and it seems like this includes whole muscle meat, so I think if you do a fair comparison, PBM is more like double the cost of ground beef.
Nice! Did you consider seaweed or leaf protein concentrate? The numbers I've seen indicate that PBM is still twice the price of ground beef; did that source compare to all beef?
Quick searching indicates that it is generally allowed in small orgs. Also, in general, ~10% of people meet their spouse at work.
I just have info from AI:
| Era | % Plant Calories | Notes |
|---|---|---|
| 1920s (early kibble) | 10–30% | Mostly horse meat + grains; Purina starts ~1926 |
| 1950s–1970s | 40–60% | Corn/soy fillers rise for cost, extrusion tech |
| 1980s–2000s | 50–70% | Grain-heavy economy formulas dominant |
| 2010s–2026 | 60–80% | 52% of US pet foods use plant proteins by 2024; "grain-free" niche lowers some to 40% |
I haven't dug into the surveys that Knight cites, but I'm super skeptical. I know vegans who don't have vegan pets, and I know how hard it is to make people go vegan. There are big barriers to getting humans to transition to alternative proteins at scale, and that's only more true for companion animals.
I'm skeptical as well, but in some ways the barriers for pets going vegan are lower:
Taste is less of an issue for pets.
Time cost is much lower for pets because you can just pick out one food and buy it every time.
For people concerned about social interactions involving veganism, you don't have to tell anyone that your pet is vegan.
It may be easier to mitigate the health issues of being vegan for pets: for methane single cell protein (SCP) fed to salmon, just a little SCP, compared to a fully vegan (soy) diet, showed a big improvement in gut health. I'd be most confident that this would port to other obligate carnivores like cats, but I could see it being beneficial for dogs as well. Methane SCP is not yet approved for human food, but producers are targeting pet food.
In the last few decades, dog food has become more plant-based because plants are cheaper (and manufacturers figured out how to make it appealing to dogs and not offensive to people). If methane SCP can become cheaper than animal byproducts, you could have a healthy, cheaper product with lower environmental impact that probably wouldn't taste as good, but that I think many non-vegans would go for.
I personally do think the probability of eventual disempowerment is high. However, you are implying that it is 100%. If it is 99%, or indeed even 99.9999999%, and one thinks the value of the future is significantly higher with humanity (not necessarily biological humans) in control vs AI, then there are still astronomical stakes of humanity remaining in control.
Let be the number of parameters in the model, be the number of data tokens it is trained on, be the number of times the model is deployed (e.g. the number of questions it is asked) and be the number of inference steps each time it is deployed (e.g. the number of tokens per answer). Then this approximately works out to:[9]
Note that scaling up the number of parameters, , increases both pre-training compute and inference compute, because you need to use those parameters each time you run a forward pass in your model.
Several variables are not showing up in the text.
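For what it's worth, the intended bookkeeping is probably the standard one. A sketch assuming training FLOPs ≈ 6·N·D and inference FLOPs ≈ 2·N per generated token (those constants are my assumption, not necessarily the post's formula, and the example numbers are made up):

```python
# Standard compute bookkeeping (assumed, not the post's exact formula):
# training FLOPs ~ 6*N*D, inference FLOPs ~ 2*N per generated token.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate pre-training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def inference_flops(n_params: float, n_deployments: float, n_steps: float) -> float:
    """Approximate lifetime inference compute: ~2 FLOPs per parameter per
    generated token, over all deployments."""
    return 2 * n_params * n_deployments * n_steps

# Illustrative numbers: a 70B-parameter model trained on 1.4T tokens,
# answering 1e9 questions at 500 tokens per answer.
N, D, Q, S = 7e10, 1.4e12, 1e9, 500
print(f"training:  {training_flops(N, D):.2e} FLOPs")
print(f"inference: {inference_flops(N, Q, S):.2e} FLOPs")
```

Note that N appears in both expressions, which is the point the quoted passage makes: scaling up parameters raises pre-training and inference compute together.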
If AI systems replace humanity, that outcome would undoubtedly be an absolute disaster for the eight billion human beings currently alive on Earth. However, it would be a localized, short-term disaster rather than an astronomical one. Bostrom's argument, strictly interpreted, no longer applies to this situation. The reason is that the risk is confined to the present generation of humans: the question at stake is simply whether the eight billion people alive today will be killed or allowed to continue living. Even if you accept that killing eight billion people would be an extraordinarily terrible outcome, it does not automatically follow that this harm carries the same moral weight as a catastrophe that permanently eliminates the possibility of 10^23 future lives.
This only holds if the future value of a universe in which AIs took over is almost exactly the same as the future value if humans remained in control (varying by less than one part in a billion, and I think by less than one part in a billion billion billion billion billion billion). Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower. But it is extremely unlikely to be exactly the same. Therefore, in all likelihood, whether AI takes over or not does have long-term and enormous implications.
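A back-of-the-envelope version of this point, with all numbers illustrative (the 10^23 figure echoes the Bostrom-style estimate quoted earlier):

```python
# Toy expected-value comparison; all numbers are illustrative. Even if an
# AI-controlled future captured all but one part in a billion of the value
# of a human-controlled future, the value at stake would still dwarf the
# ~8e9 people alive today.

future_value_human = 1e23       # e.g. potential future lives (Bostrom-style figure)
fraction_difference = 1e-9      # "one part in a billion" value gap

value_at_stake = future_value_human * fraction_difference
print(f"{value_at_stake:.0e}")  # 1e+14, vs 8e9 people alive today
```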
Could you please explain your reasoning on 40 hours?
I think your formulation is elegant, but I think the real possibilities are lumpier and span many more orders of magnitude (OOMs). Here's a modification from a comment on a similar idea:
I think there would be some probability mass on technological stagnation and population reductions, though the cumulative number of lives would still be much larger than the number alive today. Then there would be some mass on maintaining something like 10 billion people for a billion years (no AI, staying on Earth either by choice or for technical reasons). Then there would be AI building a Dyson swarm but, because of technical reasons or a high discount rate, not going to other stars. Then there would be AI settling the galaxy but, again for technical reasons or discount rate, not going to other galaxies. Then there would be settling many galaxies. Then, 30 OOMs to the right, there could be another high-slope region corresponding to aestivation. And there could be more intermediate states corresponding to various scales of space settlement by biological humans. Even if you ignore the technical barriers, there are still many different levels of scale we could choose to end up at. Even if you think the probability should be smoothed because of uncertainties, there are still something like 60 OOMs between survival of biological humans on Earth and digital aestivation. Or are you collapsing all that and just looking at welfare regardless of the scale? Even welfare could span many OOMs.
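To make the spread concrete, here is a sketch of the orders of magnitude involved. Every scale figure below is a placeholder chosen only to illustrate the lumpiness, not an estimate; only the Earth scenario (10 billion people × 10^9 years ≈ 10^19 life-years) and the ~60-OOM gap to aestivation come from the text above.

```python
import math

# Rough orders of magnitude (log10 of cumulative life-years) for the
# scenarios sketched above. Intermediate values are placeholders.
scenarios = {
    "10 billion people on Earth for 1e9 years": 1e10 * 1e9,  # ~19 OOMs (from the text)
    "AI Dyson swarm, no interstellar settlement": 1e33,      # placeholder
    "AI settles the galaxy": 1e45,                           # placeholder
    "settling many galaxies": 1e55,                          # placeholder
    "digital aestivation": 1e79,                             # ~60 OOMs past Earth
}

for name, scale in scenarios.items():
    print(f"~{math.log10(scale):.0f} OOMs of life-years: {name}")
```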
ALLFED is looking for a CEO
I didnāt realize it was that much money. This has relevance to the debates about whether AI will value humans. Though EA has not focused as much on making mainstream money more effective, there have been some efforts.
But my major response is: why the focus on cultivated meat? It seems like efforts on plant-based meat, fermentation, or leaf protein concentrate have a much greater likelihood of achieving parity in the near term. It could even be that mitigating existential risk is the most cost-effective way of saving species, though I realize that is probably too far afield for this pot of money.
Thanks for doing this and for pointing it out to me. Yeah, participation bias could be huge, but it's still good to get some idea.
Summary of AGI Polls and Questions
Confusion in "What mildest scenario do you consider doom?"
My probability distribution looks like what you call the MIRI Torch, and what I call the MIRI Logo: Scenarios 3 to 9 aren't well described in the literature because they are not in a stable equilibrium. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
This question was not about probability, but instead what one considers doom. But let's talk probability. I think Yudkowsky and Soares believe that one or more of 3-5 has decent likelihood, though I'm not finding it now, because of acausal trade. As someone else said, "Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate." Christiano believes a stronger version, that most humans will survive (unfrozen) a takeover because AGI has pico-pseudo kindness. Though humans did cause the extinction of close competitors, they are exhibiting pico-pseudo kindness to many other species, despite them being a (small) obstacle.
Confusion in "Minimum P(doom) that is unacceptable to develop AGI?"
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to "What percentage of the world's population am I willing to kill in expectation?" Answers such as "10^6 humans" and "10^9 humans" are both monstrous, even though your poll would rate them very differently.

Since your doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives that would be lost without AGI in the coming decades (even without creating immortality, just solving poverty)? Or what about AGI preventing other existential risks, like an engineered pandemic? Do you think that non-AI x-risk is <0.01% in the next century? Ever? Or maybe you are just objecting to the unilateral part; so is it OK if the UN votes to create AGI even if it has a 33% chance of doom, as one paper said could be justified by economic growth?
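The arithmetic behind the "0.01% gives ~10^6 expected deaths" step, treating doom as the extinction of the roughly eight billion people alive today (a quick sketch, not a model):

```python
# Expected deaths implied by a given P(doom), treating doom as the
# extinction of the ~8 billion people alive today.
POPULATION = 8e9

def expected_deaths(p_doom: float) -> float:
    return p_doom * POPULATION

# 0.01% of 8 billion is ~10^6 people in expectation; 33% is ~2.6e9.
for p in (1e-4, 1e-3, 0.33):
    print(f"P(doom) = {p:g} -> ~{expected_deaths(p):.1e} expected deaths")
```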
I'm not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not doom, so I included all of them for completeness.
Yes, one could spend many hours thinking through these questions (as I have), but even if one doesn't have that time, I think it's useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect that there is a wide variety of definitions (and indeed, preliminary results do show a large range).
I'm happy to respond to specific feedback about which questions are confused and why.
Let's break this into two questions:
1. After a few years of ASI, will the ASI be able to stop or reverse aging?
2. After a few years of ASI, will hardly anyone die of aging-related diseases?
Let's tackle number one first. It's true the ASI would not be able to do long-term human trials the regular way. However, I think it could learn a lot from the data from running trillions of lab-on-a-chip experiments. I think it could develop nanobots that could remove cancer cells and repair aging-related damage. And it could get quick feedback by making C. elegans, etc., immortal. It might also be able to simulate biology from first principles in order to run the equivalent of decades-long human trials.
I also think it could develop noninvasive scanning techniques that would allow someone's consciousness to be simulated. And even if that doesn't count, it might even be able to build up a new biological human that has equivalent consciousness to the original (which still may not count, depending on one's values). There are likely many other routes to quick longevity that I can't think of but an ASI could.
As for the second question: would people allow, e.g., the repair nanobots into their bodies? One subquestion is whether countries would allow it. Based on current laws, probably not, though it's possible the laws would change quickly due to ASI (and people could go into international waters). Another subquestion is: if it is legal, would people do it? Obviously some people would not, but if the alternative is imminent death, I think many people would.