If we successfully avoid existential catastrophe in the next century, what are the best pathways to reaching existential security, and how likely is each of them? How optimistic should we be about the trajectory of the long-term future? What are the worst-case scenarios, and how do we avoid them? How can we make sure the future is robustly positive and build a world where as many people as possible are flourishing?
To elaborate on what I have in mind with this proposal: it seems important to conduct research that goes beyond reducing existential risk over the next century – we should also make sure that the future we have afterwards is good. I’d be interested in research following up on topics like those of these posts:
Foundational research on the value of the long-term future
Research That Can Help Us Improve
“Disappointing Futures” Might Be As Important As Existential Risk—Michael Dickens
Why I prioritize moral circle expansion over artificial intelligence alignment—Jacy Reese
Should We Prioritize Long-Term Existential Risk?—Michael Dickens
Cooperation, Conflict, and Transformative Artificial Intelligence—Center on Long-Term Risk
This sounds great! I particularly liked that you brought up S-risks and moral circle expansion (MCE); I think these are important considerations.