This part of Sam Bankman-Fried’s interview on the 80K Podcast stood out to me. He’s asked about some of his key uncertainties, and one that he offers is:
Maybe a bigger core thing is, as long as we don’t screw things up, [if] we’re going to have a great outcome in the end versus how much you have to actively try as a world to end up in a great place. The difference between a really good future and the expected future — given that we make it to the future — are those effectively the same, or are those a factor of 10 to the 30 away from each other? I think that’s a big, big factor, because if they’re basically the same, then it’s all just about pure x-risk prevention: nothing else matters but making sure that we get there. If they’re a factor of 10 to the 30 apart, x-risk prevention is good, but it seems like maybe it’s even more important to try to see what we can do to have a great future.
What are the best available resources on comparing “improving the future conditional on avoiding x-risk” vs. “avoiding x-risk”?
I asked a similar question before: Is existential risk more pressing than other ways to improve the long-term future?
As your question suggests, there are two basic types of trajectory change:
increasing our chance of having control over the long-term future (reducing x-risks); and
making the future go better conditional on us having control over it.
You might think reducing x-risks is more valuable if you think that:
reducing x-risk will greatly increase the expected lifespan of humanity (for example, halving x-risk at every point in time doubles humanity’s expected lifespan; see the sketch after this list); and
conditional on there being a future, the future is likely to be good without explicit interventions by us, or such interventions are unlikely to improve the future.
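To spell out the parenthetical in the first point, here is a minimal sketch. It assumes (my simplification, not something the question specifies) a constant extinction risk $r$ per period, say per century, so that humanity’s lifespan $T$ in periods is geometrically distributed:

```latex
% Constant per-period extinction risk r  =>  T ~ Geometric(r)
\[
  \operatorname{E}[T] \;=\; \sum_{t=1}^{\infty} t \, r (1 - r)^{t-1} \;=\; \frac{1}{r},
  \qquad
  \operatorname{E}[T_{\text{halved}}] \;=\; \frac{1}{r/2} \;=\; \frac{2}{r} \;=\; 2\operatorname{E}[T].
\]
```

So halving the per-period risk exactly doubles expected lifespan under constant risk; with time-varying risk the factor is not exactly 2, but proportional reductions in ongoing risk still translate into large gains in expected lifespan.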
On the other hand, if you think that the future is unlikely to go well without intervention, then you might want to focus on the second type of trajectory change.
For example, I think there is a substantial risk that our decisions today will perpetuate astronomical suffering over the long-term future (e.g. factory farming in space, artificial minds being mistreated), so I prioritize s-risks over extinction risks.
By contrast, I think speeding up economic growth is less valuable than x-risk reduction, because there is only room for a few more millennia of sustained growth before we hit physical limits, whereas humanity could last millions of years if we avoid x-risks.
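For a rough sense of why growth can only continue for millennia rather than millions of years, here is a back-of-the-envelope sketch; the 2% annual growth rate and the roughly $10^{80}$ atoms in the observable universe are standard round numbers I am assuming, not figures from the question:

```latex
% At 2% annual growth, output multiplies by 1.02 each year.
% Number of years n for output to grow by a factor of 10^80
% (roughly the number of atoms in the observable universe):
\[
  1.02^{\,n} = 10^{80}
  \;\Longrightarrow\;
  n \;=\; \frac{80}{\log_{10} 1.02} \;\approx\; \frac{80}{0.0086} \;\approx\; 9{,}300 \text{ years.}
\]
```

Within roughly ten millennia, output would have to grow by a factor comparable to the number of atoms in the observable universe, which is why sustained growth looks bounded in a way that humanity’s potential lifespan is not.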