As your question states, there are two basic types of trajectory changes:
1. increasing our chance of having control over the long-term future (reducing x-risks); and
2. making the future go better conditional on us having control over it.
You might think reducing x-risks is more valuable if you think that:
1. reducing x-risk will greatly increase the expected lifespan of humanity (for example, halving x-risk at every point in time doubles humanity’s expected lifespan; see the sketch below); and
2. conditional on there being a future, the future is likely to be good without explicit interventions by us, or such interventions are unlikely to improve the future.
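A minimal sketch of that first point, under the simplifying assumption (mine, not stated above) that extinction risk takes the form of a constant hazard rate r per unit time:

$$
\Pr[\text{humanity survives past } t] = e^{-rt}, \qquad \mathbb{E}[\text{lifespan}] = \int_0^\infty e^{-rt}\,dt = \frac{1}{r},
$$

so halving the hazard rate, $r \mapsto r/2$, doubles the expected lifespan to $2/r$. With a time-varying hazard rate the effect is only approximately a doubling.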
On the other hand, if you think that the future is unlikely to go well without intervention, then you might want to focus on the second type of trajectory change.
For example, I think there is a substantial risk that our decisions today will perpetuate astronomical suffering over the long-term future (e.g. factory farming in space, artificial minds being mistreated), so I prioritize s-risks over extinction risks.
By contrast, I think speeding up economic growth is less valuable than x-risk reduction, because there’s only room for a few more millennia of sustained growth, whereas humanity could last millions of years if we avoid x-risks.
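A rough back-of-the-envelope version of that claim, where the 2% growth rate and the atom count are illustrative assumptions of mine rather than figures from the original: at 2% annual growth, over 10,000 years output would grow by a factor of

$$
1.02^{10\,000} = e^{10\,000 \ln 1.02} \approx e^{198} \approx 10^{86},
$$

which dwarfs the roughly $10^{80}$ atoms in the observable universe, so growth anywhere near current rates can only continue for a few more millennia.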
I asked a similar question before: “Is existential risk more pressing than other ways to improve the long-term future?”