What are the odds of extinction from nuclear, AI, bio, climate change, etc.?
What are your thoughts on the threat of “population collapse”?
How does work on existential risk compare to work on animal welfare and global poverty in expected value? Is it 50% better? 100x better?
How does work on animal welfare and global poverty affect existential risk and the quality of the long-term future?
Where do Nick Bostrom, Toby Ord, Eliezer Yudkowsky, etc. go wrong that leads them to believe in substantially higher levels of AI risk than you?
What new E.A. projects would you like to see that haven’t been recommended by OpenPhil, FTX Future Fund, etc.?
Do you believe in the perennialist philosophy (the perspective in philosophy and spirituality that views all of the world’s religious traditions as sharing a single metaphysical truth or origin, from which all esoteric and exoteric knowledge and doctrine has grown)? What would the discovery of absolute truth mean for the long-term future?
What problems need to be solved before we’ve created the “best possible world”? Or can we just rely on AGI to solve our problems?
Which values (besides moral circle expansion) are important for making sure the future goes well?
How can we “improve institutions to promote development,” as 80,000 Hours recommends as a potentially pressing longtermist issue?
What bad, non-extinction risks does AI pose?
Does E.A. underestimate the importance of becoming a space-faring species for ensuring the survival of humanity?
How can we prevent totalitarianism?
Where do you differ from SBF on E.A. priorities? How would you spend $1 billion?