In this question, I am assuming ethical longtermism: that our objective is to maximize total well-being over the long term. Many longtermist EAs seem to believe that the highest-impact way to improve the far future is to reduce existential risks to humanity. However, there are other ways to improve the far future: speeding up technological progress, speeding up moral progress, improving institutions, settling space, and so on. (I think of these as improving the quality of the far future, conditional on no existential catastrophe occurring.) What are some arguments for why existential risk reduction is more pressing than these other levers, or vice versa?
[Edit: I'm especially interested in which lever is most pressing when we take the welfare of non-human animals into account.]
Pedro Oliboni wrote a paper that addresses one aspect of my question (the tradeoff between existential risk reduction and economic growth): On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction.