I think the conclusions of RP’s research with respect to cause prioritization still hold up after incorporating the arguments you’ve enumerated in your post.
This seems maybe true for animals vs AMF, but not for animals vs x-risk.
This could depend on your population ethics and indirect considerations. I’ll assume some kind of expectational utilitarianism.
The strongest case for extinction and existential risk reduction is on a (relatively) symmetric total view. On such a view, everything seems dominated in expectation by far-future moral patients, especially artificial minds. Farmed animal welfare might tell us something about whether artificial minds are likely to have net positive or net negative aggregate welfare, and moral weights for animals can inform moral weights for different artificial minds, especially those with limited agency. But it's relatively weak evidence. If you expect future welfare to be positive, then extinction risk reduction looks good, and (far) better in expectation than near-term options, even with very low probabilities of making a difference; but it could be Pascalian, especially for an individual (https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/). The Pascalian concerns could also apply under other population ethics.
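To make the Pascalian structure concrete, here's a toy expected value comparison in Python. Every number (the probability of averting extinction, the welfare magnitudes, the campaign success rate) is a made-up assumption for illustration, not an estimate from RP or anyone else:

```python
# Toy expected-value comparison under a (relatively) symmetric total view.
# Every number below is an illustrative assumption, not a real estimate.

# X-risk reduction: a tiny probability that a donation averts extinction,
# multiplied by an astronomical amount of expected future welfare.
p_avert_extinction = 1e-12       # chance this donation makes the difference
future_welfare_at_stake = 1e30   # future welfare units (assumed net positive)
ev_xrisk = p_avert_extinction * future_welfare_at_stake

# Farmed animal welfare: a near-certain, modest, near-term benefit.
p_campaign_succeeds = 0.5
welfare_gain_if_success = 1e6
ev_animals = p_campaign_succeeds * welfare_gain_if_success

print(f"EV of x-risk reduction: {ev_xrisk:.3g}")   # 1e+18
print(f"EV of animal welfare:   {ev_animals:.3g}") # 5e+05
# Expected value favors x-risk reduction by ~12 orders of magnitude even
# though the probability of making any difference is minuscule -- the
# Pascalian structure Tarsney's paper examines.
```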
If you have narrow person-affecting views, then cost-effective farmed animal interventions generally don't help animals alive now, so they won't do much good. If death is also bad on such views, extinction risk reduction would be better, but not necessarily better than GiveWell recommendations. If death isn't bad, then you'd pick work that improves human welfare, which could include saving children's lives for the benefit of the parents and other family, not the children saved.
If you have asymmetric or wide person-affecting views, then animal welfare could look better than extinction risk reduction, depending on human vs nonhuman moral weights and the expected number of current lives saved by x-risk reduction, but worse than far-future quality improvements or s-risk reduction (e.g. https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12927; though maybe animal welfare work counts toward those too, and either may be Pascalian). Still, on some asymmetric or wide views, extinction risk reduction could look better than animal welfare, if good lives can offset the bad ones (https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12927); a toy numerical sketch follows after the EDIT below. Also, maybe extinction risk reduction could look better for indirect reasons, e.g. replacing alien descendants with our happier ones, or because the work also improves the quality of the far future conditional on our not going extinct.
EDIT: Or, if the people alive today aren't killed (whether by a catastrophic event or anything else, like malaria), there's a chance they'll live very, very long lives through technological advancement, so saving them could at least beat the near-term effects of animal welfare, if dying earlier is worse on a given person-affecting view.
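Here's the toy sketch of the offsetting point from two paragraphs up: a crude asymmetric view that weights negative welfare k times as heavily as positive welfare. The functional form and the welfare totals are my own illustrative assumptions, not anything from the cited paper:

```python
# Toy asymmetric view: negative welfare counts k times as heavily as
# positive welfare. The functional form and all numbers are made up.
def value_of_future(good_welfare, bad_welfare, k):
    return good_welfare - k * bad_welfare

good, bad = 1e30, 2e29  # assumed far-future totals of good and bad welfare

for k in (1, 4, 10):
    v = value_of_future(good, bad, k)
    verdict = "extinction risk reduction looks good" if v > 0 else "it doesn't"
    print(f"k = {k:>2}: value of the future = {v:+.2g} -> {verdict}")
# k = 1 (symmetric) and k = 4: good lives offset the bad, so preserving
# the future beats extinction. k = 10: the future looks net negative on
# this view, so quality improvements / s-risk reduction beat extinction
# risk reduction.
```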
That being said, all the above variants of expectational utilitarianism are irrational, because unbounded utility functions are irrational (e.g. they can be money pumped, https://onlinelibrary.wiley.com/doi/abs/10.1111/phpr.12704), so the standard x-risk argument seems based on jointly irrational premises. And x-risk reduction might not follow from stochastic dominance or from expected utility maximization on all bounded increasing utility functions of total welfare (https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://arxiv.org/abs/1807.10895; the argument for riskier bets there also depends on wide background value uncertainty, which would be lower with lower moral weights for nonhuman animals; in deterministic cases, stochastic dominance is equivalent to higher expected utility on all bounded increasing utility functions consistent with the (pre)order).
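As a minimal sketch of how boundedness changes the verdict, reusing the made-up numbers from the first sketch: with a bounded increasing utility function of total welfare, the astronomical payoff saturates, so the tiny-probability bet can lose to a near-certain modest one. The arctan form and the 1e9 scale are arbitrary choices for illustration:

```python
import math

# A bounded, increasing utility function of total welfare W: u(W) -> 1 as
# W -> infinity, so astronomical payoffs saturate. The arctan form and the
# 1e9 scale are arbitrary choices for illustration.
def u(total_welfare):
    return math.atan(total_welfare / 1e9) / (math.pi / 2)

baseline = 1e9  # assumed background total welfare if we do nothing

# X-risk bet: probability 1e-12 of adding 1e30 welfare, else no change.
eu_gain_xrisk = 1e-12 * (u(baseline + 1e30) - u(baseline))

# Animal welfare bet: probability 0.5 of adding 1e6 welfare.
eu_gain_animals = 0.5 * (u(baseline + 1e6) - u(baseline))

print(f"EU gain, x-risk reduction: {eu_gain_xrisk:.3e}")   # ~5.0e-13
print(f"EU gain, animal welfare:   {eu_gain_animals:.3e}") # ~1.6e-04
# With bounded u, the astronomical payoff saturates near u = 1, so the
# near-certain modest improvement wins by ~9 orders of magnitude; the
# unbounded expected-value argument for x-risk no longer goes through.
```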
This seems maybe true for animals vs AMF, but not for animals vs x-risk.
We’re working on animals vs x-risk next!
Yes, I agree with that caveat.