Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship, is a Penn State distinguished alumnus, and is a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Denkenberger🔸
I agree, though I think the large reduction in EA funding for non-AI GCR work is not optimal (but I'm biased with my ALLFED association).
Ah… now I see you above, and I realized I could mouse over: it is Year of Crazy. So you think the world will get crazy two years after AGI.
Thanks!
For which event? I'm not seeing you on the poll above.
Interesting. The claim I heard was that some rationalists anticipated that there would be a lockdown in the US and figured out who they wanted to be locked down with, especially to keep their work going. That might not have been put on LW when it was happening. I was skeptical that the US would lock down.
50% of my income for the 11th year to ALLFED.
Yes, one or more of the "crazy" things happening by 2029. Good suggestion: I have edited the post and my comments to include the year.
Slow things down 10 to how many years?
Good question. For personal planning purposes, I think all causes would make sense. But the title is about AI, so maybe just things significantly associated with AI? I think these polls are about how the future will be different because of AI.
Year of Crazy (2029)
I'm using a combination of scenarios in the post: one or more of these happen significantly before AGI.
Year of Singularity (2040)
Though I think we could get explosive economic growth with AGI or even before, I'm going to interpret this as explosive physical growth: that we could double physical resources every year or less. I think it will take years after AGI to, e.g., crack robotics/molecular manufacturing.
Year of AGI (2035)
Extrapolating the METR graph here <https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains> suggests a superhuman coder soon, but I think it's going to take years after that for the tasks that are slower on that graph, and many tasks are not even on that graph (despite the speedup from having a superhuman coder).
Quick Polls on AI Timelines
Here's another example of someone in the LessWrong community thinking that LLMs won't scale to AGI.
Could this apply to cultivated meat based on non-halal animals such as pigs?
Welcome, Denise! You may be interested in ALLFED, as one of the things we investigate is resilience to tail-end climate catastrophes.
"I don't want to encourage people to donate (even to the same places as I did) unless you already have a few million dollars in assets"
I do see advantages of the abundance mindset, but your threshold is extremely high; it excludes nearly everyone in developed countries, let alone the rest of the world. Plenty of people without millions of dollars in assets have an abundance mindset (including myself).
Some say (slight hyperbole), "Teaching a child to not step on bugs is as valuable to the child as it is to the bug." So I think there is some mainstream caring about bugs.
I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, Institute for Law & AI (name change from Legal Priorities Project), etc. have had to pivot to approximately all AI work. SFF is now almost all AI.