All work is future-oriented (aside from a few exceptions involving the manipulation of time). Maybe “long-term future” would be better than “far future”, since the latter suggests to me that the benefits won’t be observed until far into the future, whereas “long-term” doesn’t necessarily exclude the short term (although most of the impact is typically thought to come from benefits in the far future). Simply referring to it as “extinction risk reduction” or “existential risk reduction” doesn’t necessarily carry longtermist or far-future connotations.
Similarly, Bostrom, in his astronomical waste essay, argues that even on a ‘person-affecting utilitarian’ view, reducing existential risk is a priority.
I think the case for it under a symmetric person-affecting view (like presentism or necessitarianism) is much weaker compared to, say, global health and poverty work, for which we have far more robustly cost-effective interventions.
Also, under an asymmetric person-affecting view, reducing extinction risk would probably not be a priority (it could even be bad), but reducing s-risks (risks of astronomical suffering, another type of existential risk) could be. There is overlap between extinction risk work and s-risk work through cooperation/conflict work and AI safety; see the priorities of the Effective Altruism Foundation/Foundational Research Institute. I think asymmetric views would normally prioritize global health and poverty, animal welfare, or s-risks, depending on your priors and the weight you give empirical evidence.
Two analyses here indicate that the expected cost per life saved in the present generation, from both AGI safety and alternative foods for nuclear winter, abrupt climate change, etc., is lower than for global health interventions. There are orders of magnitude of uncertainty in the x-risk estimates, but still little overlap with the global health cost-effectiveness distributions, so I think the conclusion is fairly robust.
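The overlap point can be illustrated with a toy Monte Carlo comparison. The distributions and every parameter below are purely hypothetical stand-ins chosen for illustration, not the figures from the analyses referenced above:

```python
import random
import math

random.seed(0)

def lognormal_samples(median, sigma_log10, n=100_000):
    """Sample costs per life saved from a lognormal distribution,
    parameterized by its median and by its spread in orders of
    magnitude (standard deviation in base-10 log units)."""
    mu = math.log(median)
    sigma = sigma_log10 * math.log(10)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

# Hypothetical medians (USD per life saved) and uncertainties:
# the global health estimate is narrow, while the x-risk estimate
# spans orders of magnitude.
global_health = lognormal_samples(median=5_000, sigma_log10=0.2)
xrisk = lognormal_samples(median=500, sigma_log10=1.5)

# Fraction of paired draws in which the x-risk intervention is
# still cheaper per life saved, despite its much wider uncertainty.
frac_cheaper = sum(x < g for x, g in zip(xrisk, global_health)) / len(xrisk)
print(f"x-risk cheaper in {frac_cheaper:.0%} of draws")
```

The sketch shows how a wide (orders-of-magnitude) uncertainty band can still leave most of the probability mass on one side of a narrow comparison distribution, which is what “little overlap” amounts to here.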
This previous post by Gregory Lewis also seems relevant both to this point in particular, and to this post in general. E.g., Lewis writes:
there is a common pattern of thought along the lines of, “X-risk reduction only matters if the total view is true, and if one holds a different view one should basically discount it”. Although rough, this cost-effectiveness guestimate suggests this is mistaken. Although it seems unlikely x-risk reduction is the best buy from the lights of a person-affecting view (we should be suspicious if it were), given ~$10000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.
Second, although it seems unlikely that x-risk reduction would be the best buy by the lights of a person affecting view, this would not be wildly outlandish. Those with a person-affecting view who think x-risk is particularly likely, or that the cause area has easier wins available than implied in the model, might find the best opportunities to make a difference. It may therefore supply reason for those with such views to investigate the factual matters in greater depth, rather than ruling it out based on their moral commitments.
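Lewis’s comparison is simple arithmetic. As a sketch, using benchmark figures that are rough assumptions on my part (only the ~$10,000 per life-year figure comes from the quote):

```python
# All figures in USD per life-year (or QALY). The two benchmarks are
# rough, illustrative assumptions, not figures from Lewis's post.
xrisk_person_affecting = 10_000   # Lewis's guesstimate, from the quote
best_global_health = 100          # assumed: order of the best global health buys
rich_country_healthcare = 50_000  # assumed: a typical rich-country threshold

# x-risk reduction looks ~100x worse than the best global health buys...
print(f"vs global health: {xrisk_person_affecting / best_global_health:.0f}x more expensive")
# ...but ~5x better than marginal rich-country healthcare spending.
print(f"vs rich-country healthcare: {rich_country_healthcare / xrisk_person_affecting:.0f}x cheaper")
```

Under these assumed benchmarks, the estimate lands between “best buy” and “still clearly worthwhile”, which is the pattern Lewis describes.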
By robust, I mean relying less on subjective judgements (including priors). Could someone assign a much lower probability to such catastrophic risks? Could they be much more skeptical about how much extra work in the area reduces or mitigates these risks (i.e., the marginal progress)?
On the other hand, how much more skeptical could they be of GiveWell-recommended charities, whose recommendations are based on RCTs? Of course, generalizability is always an issue.
All work is future-oriented
Indeed. You don’t tend to employ or emphasize the word ‘future’ for most work, though.
One alternative could be ‘full future’, signifying that it encompasses both the near and long term.
I think there should be space for new and more specific terms.
‘Long term’ has strengths, but it’s overloaded with many meanings. ‘Existential risk reduction’ is specific but quite a mouthful; something shorter would be great. I’m working on another article where I will offer one new alternative.
Isn’t just “x-risk” okay? Or is too much lost in the abbreviation? I suppose people might confuse it for extinction risks specifically, instead of existential risks generally, but you could write it out as “existential risks (x-risks)” or “x-risks (existential risks)” the first time in an article.
Also, “reduction” seems kind of implicit due to the negative connotations of the word “risk” (you could reframe as “existential opportunities” if you wanted to flip the connotation). No one working on global health and poverty wants to make people less healthy or poorer, and no one working on animal welfare wants to make animals suffer more.
Good point: ‘x-risk’ is short, and ‘reduction’ is, or should become, implicit after a few short steps of thinking. It will work well in many circumstances, e.g. in “I work with x-risk”, just as “I work with/in global poverty” works. A few reservations do occur to me, though: “the cause of x-risk” feels clumsy, the “letter, dash, then a word” construction feels odd, and it’s a bit negatively oriented.
Thank you for your thoughtful comment!