All work is future-oriented (other than a few exceptions involving the manipulation of time). Maybe “long-term future” instead of “far future” would be better, since the latter seems to suggest to me that the benefits won’t be observed until far into the future, whereas “long-term” doesn’t necessarily exclude the “short term”, although most of the impact is typically thought to come from benefits in the far future. Just referring to it as “extinction risk reduction” or “existential risk reduction” doesn’t necessarily have longtermist or far-future connotations.
Similarly, Bostrom, in his astronomical waste essay, argues that even with a “person-affecting utilitarian” view, reducing existential risk is a priority.
I think the case for it under a symmetric person-affecting view (like presentism or necessitarianism) is much weaker compared to, say, global health and poverty work, for which we have far more robustly cost-effective interventions.
Also, under an asymmetric person-affecting view, reducing extinction risk would probably not be a priority (it could even be bad), but reducing s-risks, i.e. risks of astronomical suffering, another type of existential risk, could be. There is overlap between extinction risk work and s-risk work through cooperation/conflict work and AI safety; see the priorities of the Effective Altruism Foundation/Foundational Research Institute. I think asymmetric views would normally prioritize global health and poverty, animal welfare, or s-risks, depending on your priors and the weight you give to empirical evidence.
Two analyses here indicate that the expected cost per life saved in the present generation, from both AGI safety and alternative foods for nuclear winter, abrupt climate change, etc., is lower than for global health interventions. There are orders of magnitude of uncertainty in the x-risk estimates, but still little overlap with the global health cost-effectiveness distributions, so I think the result is fairly robust.
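As a rough sketch of the kind of comparison being made here, one can model each intervention’s cost per life saved as a wide lognormal distribution and check how often the draws overlap. All of the numbers below are purely illustrative assumptions, not figures taken from the analyses under discussion.

```python
import math
import random

random.seed(0)  # reproducible illustration

def lognormal_samples(median, sigma_oom, n=100_000):
    """Draw samples with the given median, whose spread is
    `sigma_oom` orders of magnitude (one standard deviation
    in log10 space). Parameters are purely illustrative."""
    mu = math.log(median)
    sigma = sigma_oom * math.log(10)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

# Hypothetical numbers, NOT from the analyses discussed above:
# global health: ~$5,000 per life saved, fairly tight uncertainty;
# x-risk work: ~$100 per life saved in expectation, but with
# orders-of-magnitude uncertainty in either direction.
global_health = lognormal_samples(5_000, 0.3)
xrisk = lognormal_samples(100, 1.5)

# Fraction of paired draws where the x-risk intervention is still
# cheaper per life saved, despite its much wider distribution.
frac_cheaper = sum(x < g for x, g in zip(xrisk, global_health)) / len(xrisk)
print(f"x-risk cheaper per life saved in {frac_cheaper:.0%} of draws")
```

Even with the much wider uncertainty on the x-risk estimate, most paired draws still favor it in this toy setup, which is the sense in which a conclusion can be “fairly robust” despite orders-of-magnitude error bars.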
This previous post by Gregory Lewis also seems relevant both to this point in particular, and to this post in general. E.g., Lewis writes:
there is a common pattern of thought along the lines of, “X-risk reduction only matters if the total view is true, and if one holds a different view one should basically discount it”. Although rough, this cost-effectiveness guestimate suggests this is mistaken. Although it seems unlikely x-risk reduction is the best buy from the lights of a person-affecting view (we should be suspicious if it were), given ~$10000 per life year compares unfavourably to best global health interventions, it is still a good buy: it compares favourably to marginal cost effectiveness for rich country healthcare spending, for example.
Second, although it seems unlikely that x-risk reduction would be the best buy by the lights of a person affecting view, this would not be wildly outlandish. Those with a person-affecting view who think x-risk is particularly likely, or that the cause area has easier wins available than implied in the model, might find the best opportunities to make a difference. It may therefore supply reason for those with such views to investigate the factual matters in greater depth, rather than ruling it out based on their moral commitments.
By robust, I mean relying less on subjective judgements (including priors). Could someone assign a much lower probability to such catastrophic risks? Could they be much more skeptical about how much extra work in the area reduces/mitigates these risks (i.e. the progress made)?
On the other hand, how much more skeptical could they be of GiveWell-recommended charities, which are based on RCTs? Of course, generalization is always an issue.
All work is future-oriented
Indeed. You don’t tend to employ the word “future” or emphasize it for most work, though.
One alternative could be “full future”, signifying that it encompasses both the near and long term.
I think there should be space for new and more specific terms.
“Long term” has strengths, but it’s overloaded with many meanings. “Existential risk reduction” is specific but quite a mouthful; something shorter would be great. I’m working on another article where I will offer one new alternative.
Isn’t just “x-risk” okay? Or is too much lost in the abbreviation? I suppose people might confuse it with extinction risks specifically, instead of existential risks generally, but you could write it out as “existential risks (x-risks)” or “x-risks (existential risks)” the first time in an article.
Also, “reduction” seems kind of implicit due to the negative connotations of the word “risk” (you could reframe as “existential opportunities” if you wanted to flip the connotation). No one working on global health and poverty wants to make people less healthy or poorer, and no one working on animal welfare wants to make animals suffer more.
Good point, “x-risk” is short, and “reduction” should be, or should become, implicit after some short steps of thinking. It will work well in many circumstances, for example in “I work with x-risk”, just as “I work with/in global poverty” works. Though some objections that occur to me in the moment are: “the cause of x-risk” feels clumsy, “letter, dash, and then a word” feels like an odd construct, and it’s a bit negatively oriented.
Thank you for your thoughtful comment!