When doing rough analysis, there are virtues to having simple models simply laid out, so I commend this. But step two is looking at which analytic and other choices the simple model is most sensitive to, and laying that out, and I think this post suffers from not doing that.
In this case, there are plausible moral and analytic assumptions that lead to almost any conclusion you’d like. A few examples:
Include declining total numbers of net-negative lives among wild animals.
Reject total utilitarianism for average utilitarianism across species.
Change your time scale to longer than 10 million years, and humanity is plausibly the only way any species on Earth survives.
Project species welfare on a per-species basis instead of in aggregate, and it may be improving; the aggregate decline could be an instance of Simpson's paradox (see the sketch below).
Change the baseline zero-level for species welfare, and the answer could reverse.
And other than the first, none of these is even considered in your future directions, even though the assumptions being made are, it seems, far too strong given the types of uncertainty involved. So I applaud flagging that this is uncertain, but I don't think it's actually useful to make any directional claim, nor would further modeling do that much to change this.
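For the Simpson's-paradox and baseline points, here is a minimal sketch in Python, with invented numbers that are not from the post, showing how every species' welfare can improve while the aggregate trend declines, and how shifting the assumed zero point can flip the sign of total welfare:

```python
# Toy numbers (invented for illustration, not taken from the post): each species'
# mean welfare improves between the two periods, yet the population-weighted
# aggregate declines, because the mix shifts toward the worse-off species --
# Simpson's paradox.
periods = {
    "then": {"species_A": (100, 0.5), "species_B": (100, -0.5)},  # (population, mean welfare)
    "now":  {"species_A": (50, 0.6),  "species_B": (300, -0.4)},  # both means improved
}

for label, species in periods.items():
    total_pop = sum(pop for pop, _ in species.values())
    total_welfare = sum(pop * w for pop, w in species.values())
    print(label, "aggregate mean welfare:", round(total_welfare / total_pop, 3))
# -> "then": 0.0, "now": about -0.257, even though every species got better off.

# Shifting the assumed zero point of welfare by a constant c adds c * population
# to each period's total, so when populations differ the sign of total welfare
# (and of its trend) can flip as well.
c = 0.3
for label, species in periods.items():
    shifted_total = sum(pop * (w + c) for pop, w in species.values())
    print(label, "total welfare with shifted baseline:", round(shifted_total, 1))
# -> unshifted totals: 0 "then", -90 "now"; with c = 0.3: 60 "then", +15 "now".
```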
Finally, I’m struggling to see how and where this is decision relevant for people or organizations—but that’s an entirely different set of complaints about how to do analyses.
One way in which it's decision relevant is for people considering how much to prioritize extinction risk mitigation. Arguments for extinction risk mitigation being overwhelmingly important often rely on the assumption that the expected value of the future is positive (and astronomically large). A seemingly sensible way to get evidence on whether the future is likely to be good is to look at whether the present is good and whether the trend is positive. I think this is why multiple people have tried to look into those questions (see Holden Karnofsky's blog, which is already linked in the main post, and Chapter 9 of What We Owe the Future).
In fact, in WWOTF, MacAskill does almost the same exercise as the one in this post, except that he uses neuron counts as measures of moral weight instead of Rethink Priorities' weights. My memory is that he concludes that animal welfare hardly makes an impact on total welfare. I think this post makes a very nice contribution in showing that MacAskill's conclusion isn't robust to using alternative (and plausible) moral weights.
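To make that concrete, here is a minimal sketch of the kind of aggregation both exercises perform. All numbers and weights below are hypothetical stand-ins, not MacAskill's or Rethink Priorities' actual figures; the point is only that the sign of total welfare can hinge on which moral-weight vector you plug in.

```python
# Total welfare as a population- and moral-weight-weighted sum of per-individual
# welfare. Everything here is a hypothetical placeholder for illustration.
populations  = {"humans": 8e9,  "farmed animals": 1e11}   # rough order-of-magnitude stand-ins
mean_welfare = {"humans": 0.4,  "farmed animals": -0.3}   # per-individual welfare on an assumed [-1, 1] scale

weight_sets = {
    "neuron-count-style (tiny animal weights)":    {"humans": 1.0, "farmed animals": 0.001},
    "welfare-range-style (larger animal weights)": {"humans": 1.0, "farmed animals": 0.3},
}

for name, weights in weight_sets.items():
    total = sum(populations[s] * weights[s] * mean_welfare[s] for s in populations)
    print(f"{name}: total welfare ~ {total:.2e}")
# With the first weight vector humans dominate and the total comes out positive
# (~ +3.2e9); with the second, animal suffering dominates and it comes out
# negative (~ -5.8e9).
```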
Note: there could be plenty of other arguments for X-risk being overwhelmingly important that don’t rely on the claim that the expected value of the future is positive.