Regardless, I'm not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).
Most work on s-risks, such as that done by the Center for Reducing Suffering and the Center on Long-Term Risk, is research of this type, although it covers only a subset of the ways to improve the future conditional on non-extinction.
If we avoid existential catastrophe, the future is great by definition, since an existential catastrophe is standardly defined as the permanent destruction of humanity's long-term potential.
(Note that this only follows if you assume that humanity has the potential for greatness.)