I also agree with the comment above that it’s important to distinguish between what we call “the long-term value thesis” and the idea that reducing extinction risks is the key priority. You can believe in the long-term value thesis but think there are better ways to help the future than reducing extinction risks, and you can reject the long-term value thesis but still think extinction risk is a top priority.
Agreed. Calling x-risk reduction a non-near-term-future cause strikes me as bad terminology.
I plan on posting the standalone post later today. This is one of the issues I will do a better job of addressing (as well as stating when an argument applies only to a subset of long-term future/existential risk causes).
As a further illustration of the difference with your first point: the idea that the future might be net negative is only a reason against reducing extinction risk; it may be all the more reason to focus on improving the long-term future in general. This is what the s-risk people often think.
Agreed. As someone who prioritises s-risk reduction, I find it odd that long-termism is sometimes considered equivalent to x-risk reduction. It is legitimate if people think that x-risk reduction is the best way to improve the long-term, but it should be made clear that this is based on additional beliefs about ethics (rejecting suffering-focused views and not being very concerned about value drift), about how likely x-risks in this century are, and about how tractable it is to reduce them, relative to other ways of improving the long-term. I for one think that none of these points is obvious.
So I feel there is a representativeness problem: x-risk reduction is overrepresented relative to other ways of improving the long-term future (not necessarily only s-risk reduction), in addition to the underrepresentation of near-term causes.
I’m aware of this and am also planning to address it. One of the reasons people associate the long-term future with x-risk reduction is that the major EA organizations that have embraced the long-term future thesis (80,000 Hours, Open Phil, etc.) all consider biosecurity to be important. If your primary focus is on s-risks, you would not put much effort into biorisk reduction. (See here and here.)
I agree the long-term value thesis and the aim of reducing extinction risk often go together, but I think it would be better if we separated them conceptually.
At 80k we’re also concerned that there might be better ways to help the future, which is one reason why we highly prioritise global priorities research.