Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State Distinguished Alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 300 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German public radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Denkenberger
For which event? I'm not seeing you on the poll above.
Interesting. The claim I heard was that some rationalists anticipated that there would be a lockdown in the US and figured out who they wanted to be locked down with, especially to keep their work going. That might not have been put on LW when it was happening. I was skeptical that the US would lock down.
50% of my income for the 11th year to ALLFED.
Yes, one or more of the "crazy" things happening by 2029. Good suggestion: I have edited the post and my comments to include the year.
Slow things down 10 to how many years?
Good question. For personal planning purposes, I think all causes would make sense. But the title is AI, so maybe just significantly associated with AI? I think these polls are about how the future is different because of AI.
Year of Crazy (2029)
I'm using a combination of scenarios in the post: one or more of these happen significantly before AGI.
Year of Singularity (2040)
Though I think we could get explosive economic growth with AGI or even before, I'm going to interpret this as explosive physical growth, i.e., that we could double physical resources every year or faster. I think it will take years after AGI to, e.g., crack robotics/molecular manufacturing.
Year of AGI (2035)
Extrapolating the METR graph here <https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains> implies a superhuman coder soon, but I think it's going to take years after that for the tasks that are slower on that graph, and many tasks are not even on that graph (despite the speedup from having a superhuman coder).
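To make the extrapolation concrete, here is a minimal back-of-the-envelope sketch. The current time horizon, the doubling time, and the "one month of work" threshold used as a stand-in for a superhuman coder are all illustrative assumptions, not fitted values from METR or the linked post.

```python
# Back-of-the-envelope extrapolation of a time-horizon trend.
# All numbers are illustrative assumptions, not METR's fitted values.
import math

current_horizon_hours = 1.0    # assumed current 50%-success time horizon for coding tasks
doubling_time_months = 7.0     # assumed doubling time of the horizon
target_horizon_hours = 167.0   # ~1 month of full-time work, a rough "superhuman coder" bar

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_to_target = doublings_needed * doubling_time_months
print(f"{doublings_needed:.1f} doublings ≈ {months_to_target / 12:.1f} years")
```

With these assumed numbers the coding trend crosses the threshold within a few years; domains with slower doubling times on the graph would take correspondingly longer, which is the point of the comment above.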
Quick Polls on AI Timelines
Here's another example of someone in the LessWrong community thinking that LLMs won't scale to AGI.
Could this apply to cultivated meat based on non-halal animals such as pigs?
Welcome, Denise! You may be interested in ALLFED, as one of the things we investigate is resilience to tail-end climate catastrophes.
"I don't want to encourage people to donate (even to the same places as I did) unless you already have a few million dollars in assets"
I do see advantages of the abundance mindset, but your threshold is extremely high: it excludes nearly everyone in developed countries, let alone the world. Plenty of people without millions of dollars of assets have an abundance mindset (including myself).
Some say (slight hyperbole), "Teaching a child to not step on bugs is as valuable to the child as it is to the bug." So I think there is some mainstream caring about bugs.
ALLFED's 2025 Highlights
Shameless plug for ALLFED: Four of our former volunteers moved into paid work in biosecurity, and they were volunteers before we did much direct work in biosecurity. Now we are doing more direct biosecurity work. Since ALLFED has had to shrink, the contribution from volunteers has become relatively more important. So I think ALLFED is a good place for young people to skill up in biosecurity and have an impact.
Here are some probability distributions from a couple of them.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
Maybe aware is not the right word now. But I do think that EAs updated more quickly that the replication crisis was a big problem. I think this is somewhat understandable, as the academics have strong incentives to get a statistically significant result to publish papers, and they also have more faith in the peer review process. Even now, I would guess that EAs have more appropriate skepticism of social science results than the average social science academic.
I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of: someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices,
I think his epistemics have gone downhill in the last few years as he has been stressed out that the end of the world is nigh. However, I do think he is much more aware of biases than the average academic, and has, at least historically, updated his opinions a lot, such as early on realizing that AI might not be all positive (and recognizing he was wrong).
Was your model informed by @Arepo's similar models? I believe he was considering rerunning the time of perils because of a catastrophe before AGI. Either way, catastrophic risk becomes much more important to the long-run future than with a simple analysis.