Thanks for this post! I think I have a different intuition: there are important practical ways in which longtermism and x-risk views can come apart. I'm not really thinking about this from an outreach perspective, but from an internal prioritisation one. (Some of these points have been made in other comments too, and the cases I present are probably not as thoroughly argued as they could be.)
Extinction versus Global Catastrophic Risks (GCRs)
It seems likely that a short-termist with the high risk estimates Scott describes would focus on GCRs rather than extinction risks, and these can come apart.
To the extent that a short-termist framing treats going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges for surviving pandemics. Other approaches like ALLFED and civilisational resilience work might also look less effective under the short-termist framing. Even if you place some intrinsic weight on preventing extinction, that might not be enough to make these approaches look cost-effective.
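To make this concrete, here is a minimal sketch using made-up symbols: let $V_{\text{now}}$ stand for the value of the present generation's welfare and $V_{\text{future}}$ for the value of all future generations. Roughly,

$$\Delta_{\text{short}}(99\% \to 100\%) \;\approx\; \Delta_{\text{short}}(80\% \to 81\%) \;\approx\; 0.01\,V_{\text{now}},$$

whereas

$$\Delta_{\text{long}}(99\% \to 100\%) \;\approx\; 0.01\,V_{\text{now}} + V_{\text{future}} \;\gg\; 0.01\,V_{\text{now}} \;\approx\; \Delta_{\text{long}}(80\% \to 81\%),$$

since the final step to full extinction also forfeits everything future generations would have had. That extra $V_{\text{future}}$ term is what makes refuges and resilience work look comparatively better to the longtermist than to the short-termist.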
Sensitivity to views of risk
Some people may be more sceptical of x-risk estimates for this century, but might still reach the same prioritisation under the longtermist framing, because the cost of extinction is so much higher there.
This maybe depends on how hard you think the “x-risk is really high” pill is to swallow compared to the “future lives matter equally” pill.
Suspicious Convergence
Going from not valuing future generations to valuing them seems, initially, like a huge change in values: you’re adding an enormous group to your moral circle. It seems suspicious that this wouldn’t change our priorities.
It’s maybe not quite as bad as it sounds, since it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However, especially if you’re optimising for maximum impact, you would expect these to come apart.
The world could be plausibly net negative
To the extent you think farmed animals suffer and wild animals live net-negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This maybe seems less true for a pandemic that kills all humans (although that would presumably substantially reduce the number of animals in factory farms). But, for example, a failed-alignment scenario where everything becomes paperclips doesn’t seem as bad if the animals were suffering anyway.
The future might be net negative
If you think that, absent a deadly pandemic, the future might be net negative (e.g. because of s-risks, potentially “meh” futures, or because you’re very sceptical of AI alignment going well), then preventing pandemics doesn’t actually look that good under a longtermist view.
General improvements for future risks/Patient Philanthropy
As Scott mentions, other possible longtermist approaches such as value spreading, improving institutions, or patient philanthropic investment don’t come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.
A possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (since, unlike bio, it’s more than just an extinction risk).
To the extent that a short-termist framing treats going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges for surviving pandemics. Other approaches like ALLFED and civilisational resilience work might also look less effective under the short-termist framing. Even if you place some intrinsic weight on preventing extinction, that might not be enough to make these approaches look cost-effective.
ALLFED-type work is likely highly cost-effective from the short-term perspective; see the global and country-specific (US) analyses.