I would replace “avoiding x-risk” with “avoiding stuff like extinction” in this question. SBF’s usage is nonstandard: an existential catastrophe is typically defined as something that causes us to be able to achieve at most a small fraction of our potential, so even an event that leaves us able to achieve only 10^-30 of our potential is an existential catastrophe. If we avoid existential catastrophe, the future is great by definition (note that this only follows if you assume that humanity has the potential for greatness).
Regardless, I’m not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).
Most work on s-risks, such as the work done by the Center for Reducing Suffering and the Center on Long-Term Risk, is an example of this type of research, although restricted to a subset of ways to improve the future conditional on non-extinction.