Thanks. I do wonder, though, whether EAs who advocate reducing extinction risk now actually expect these risks to become small enough that shifting focus to something like animal suffering would ever be justified. I’m not convinced they do, since in their eyes extinction is so catastrophically bad that even small reductions in its probability would likely dominate other actions in expected value. Do you think this is an incorrect characterisation?
Even with the astronomical waste argument, which is the most extreme version of this argument, at some point you have astronomical numbers of people living, and the remaining future isn’t tremendously large relative to what already exists, so at that point focusing on flourishing makes more sense. Of course, this would be quite far in the future.
In practice, I expect the bar comes well before that point: if everyone focuses on x-risks, it becomes harder and harder to reduce x-risks further, while it stays just as easy to make progress on flourishing.
Note that in practice many more people in the world focus on flourishing than on x-risks, so the few long-term-focused people might end up always prioritizing x-risks because everyone else is picking the low-hanging fruit in flourishing. But that’s different from saying “it’s never important to work on animal suffering”; it’s saying “someone else will fix animal suffering, so I should do the other important thing and reduce x-risk”.