Existential risk ≠ extinction risk ≠ global catastrophic risk

For an expanded version of the following points, see Clarifying existential risks and existential catastrophes and/or 3 suggestions about jargon in EA.

There are some places where you seem to use the terms “existential risk” and “extinction risk” as interchangeable. For example, you write:

I do not think it is obvious that reducing the probability of extinction does more good per dollar than the value drift rate, which naively suggests the effective altruist community should invest relatively more into reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.
Additionally, it seems that, to get your “annual extinction probability” estimate, some of the estimates you use from the spreadsheet I put together are actually existential risk, global catastrophic risk, or collapse risk. For example, you seem to use Ord’s estimate of total existential risk, Rees’ estimate of the odds that our present civilization on Earth will survive to the end of the present century, and Simpson’s estimate that “Humanity’s prognosis for the coming century is well approximated by a global catastrophic risk of 0.2% per year” (emphases added).
But, as both Bostrom and Ord make clear in their writings on existential risk, extinction is not the only possible type of existential catastrophe. There could also be an unrecoverable collapse or an unrecoverable dystopia. And many global catastrophes would not be existential catastrophes.
I see this as important because:
Overlooking that there are possible types of existential catastrophe other than extinction might lead to us doing too little to protect against them.
Relatedly, using the term “existential risk” when one really means “extinction risk” might make “existential risk” less effective as jargon that can efficiently convey this key thing many EAs care about.
Existential risk and global catastrophic risk are both very likely at least a bit higher than extinction risk, since they cover extinction events plus other possible catastrophes. And I’d guess collapse risk might be higher as well. So you may end up with an overly high extinction risk estimate in your discount rate.
Alternatively, if existential risk is actually the most appropriate thing to include in your discount rate (rather than extinction risk), using estimates of extinction risk alone may lead to your discount rate being too low. This is because extinction risk estimates overlook the risk of unrecoverable collapse or dystopia.
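To make the last two points concrete, here is a rough sketch of how the type of estimate fed into a discount rate changes the implied annual probability. The specific figures (Ord’s roughly 1 in 6 total existential risk this century, Rees’s roughly even odds of civilization surviving the century) are my recollections of those sources rather than quotes from your post, and the constant-hazard conversion is a simplifying assumption:

```python
# Sketch: converting per-century cumulative risk estimates to annual
# probabilities, assuming a constant independent hazard each year.
# This illustrates how different *types* of estimate (existential risk,
# civilizational collapse risk, GCR) imply different annual rates, even
# before accounting for the fact that none of them is extinction risk.

def annual_prob(p_century: float, years: int = 100) -> float:
    """Annual probability implied by a cumulative probability over `years`,
    under a constant-hazard assumption."""
    return 1 - (1 - p_century) ** (1 / years)

# Hypothetical inputs based on my recollection of each source:
ord_xrisk = annual_prob(1 / 6)   # Ord: ~1/6 total existential risk this century
rees_risk = annual_prob(0.5)     # Rees: ~even odds civilization survives the century
simpson_gcr = 0.002              # Simpson: stated directly as 0.2%/year (GCR)

for name, p in [("Ord (existential)", ord_xrisk),
                ("Rees (civilizational)", rees_risk),
                ("Simpson (GCR)", simpson_gcr)]:
    print(f"{name}: {p:.4%} per year")
```

The spread between these annual rates is one way of seeing why mixing estimate types into a single “annual extinction probability” can bias the resulting discount rate in either direction.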
To be clear, I have no problem with sources that just talk about extinction risk; often, that’s the appropriate scope for a given piece of work. I just have a pet peeve about people really talking about extinction risk but using the term “existential risk”, or vice versa.
Also to be clear, you’re far from the only person who’s done that, and this isn’t really a criticism of the substance of the post (though it may suggest that the estimates should be tweaked somewhat).