Interventions in the effective altruism community are usually assessed under two different frameworks: existential risk mitigation and nearterm welfare improvement. Two distinct frameworks seem necessary given the difficulty of comparing nearterm and longterm effects. However, I do not think this is quite the right comparison under a longtermist perspective, where most of the expected value of one’s actions results from influencing the longterm future, and the indirect longterm effects of saving lives outside catastrophes cannot be neglected.
In this case, I believe it is better to use a single framework for assessing interventions which save human lives in catastrophes and in normal times. One way of doing this, which I consider in this post, is supposing the benefits of saving one life are a function of the population size.
Assuming the benefits of saving a life are proportional to the ratio between the initial and final population, and that the cost to save a life does not depend on this ratio, saving lives in normal times looks better for improving the longterm future than saving lives in catastrophes.
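As a toy illustration of this assumption, one might weight the per-life benefit (proportional to initial population over final population) by the probability that each scenario occurs. All numbers below, including the probability weighting, are my own illustrative choices, not figures from the post:

```python
# Toy comparison under the stated assumption: the benefit of saving one life
# is proportional to the ratio between the initial and final population, and
# the cost to save a life does not depend on that ratio.
# All parameter values are hypothetical, for illustration only.

def expected_benefit(p_event: float, pop_initial: float, pop_final: float) -> float:
    """Expected benefit of saving one life, in arbitrary units.

    p_event is the probability of the scenario occurring; the per-life
    benefit is taken to be proportional to pop_initial / pop_final.
    """
    return p_event * pop_initial / pop_final

POP = 8e9  # roughly the current world population

# Normal times: population essentially unchanged, scenario near-certain.
normal = expected_benefit(p_event=0.99, pop_initial=POP, pop_final=POP)

# Hypothetical catastrophe killing 90 % of the population, 1 % chance.
catastrophe = expected_benefit(p_event=0.01, pop_initial=POP, pop_final=0.1 * POP)

print(f"normal times: {normal:.2f}")  # 0.99
print(f"catastrophe:  {catastrophe:.2f}")  # 0.10
```

Under these made-up numbers, the tenfold per-life multiplier in the catastrophe does not compensate for its low probability, which is the direction of the post's conclusion; the comparison flips only if the population ratio exceeds the reciprocal of the catastrophe's probability.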
When you’re weighing existential risks (or other things which steer human civilization on a large scale) against each other, effects are always going to be denominated in a very large number of lives. And this is what OP said they were doing: “a major consideration here is the use of AI to mitigate other x-risks”. So I don’t think the headline numbers are very useful here (especially because we could make them far far higher by counting future lives).
Thanks for the comment, Richard.
I used to prefer focussing on tail risk, but I now think expected deaths are a better metric.