Finally, I’m struggling to see how and where this is decision relevant for people or organizations—but that’s an entirely different set of complaints about how to do analyses.
One way in which it’s decision relevant is for people considering how much to prioritize extinction risk mitigation. Arguments for extinction risk mitigation being overwhelmingly important often rely on the assumption that the expected value of the future is positive (and astronomically large). A seemingly sensible way to get evidence on whether the future is likely to be good is to look at whether the present is good and whether the trend is positive. I think this is why multiple people have tried to look into those questions (see Holden Karnofsky’s blog, which is linked already in the main post, and Chapter 9 of What We Owe the Future).
In fact, in WWOTF, MacAskill does almost the same exercise as the one in this post, except he uses neuron counts as measures of moral weight instead of Rethink Priorities’ weights. My memory is that he comes to the conclusion that the welfare of animals has hardly any impact on total welfare. I think this post makes a very nice contribution in showing that MacAskill’s conclusion isn’t robust to using alternative (and plausible) moral weights.
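To make the sensitivity concrete, here is a minimal sketch of the kind of calculation involved: total welfare as a sum of population × average welfare × moral weight. All of the numbers and species below are made up purely for illustration (they are not MacAskill’s, Rethink Priorities’, or the post’s actual figures); the point is just that the sign of the total can flip when you swap neuron-count-style weights for welfare-range-style weights.

```python
# Illustrative sketch only: every number below is hypothetical, chosen to
# show how the choice of moral weights can flip the sign of total welfare.

def total_welfare(populations, moral_weights):
    """Sum of (head count * average welfare * moral weight) over species."""
    return sum(
        count * avg_welfare * moral_weights[species]
        for species, (count, avg_welfare) in populations.items()
    )

# Hypothetical populations: (head count, average welfare per individual),
# where a negative average welfare means net-negative lives.
populations = {
    "humans": (8e9, +0.5),
    "farmed_chickens": (3e10, -0.5),
}

# Neuron-count-style weights put chickens far below humans; the
# welfare-range-style weights narrow that gap considerably.
neuron_count_weights = {"humans": 1.0, "farmed_chickens": 0.002}
welfare_range_weights = {"humans": 1.0, "farmed_chickens": 0.33}

print(total_welfare(populations, neuron_count_weights))   # positive total
print(total_welfare(populations, welfare_range_weights))  # negative total
```

With the first set of weights the animal term is negligible and the total comes out positive; with the second, the same populations and welfare levels yield a negative total. That is the sense in which the conclusion is not robust to the choice of moral weights.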
Note: there could be plenty of other arguments for X-risk being overwhelmingly important that don’t rely on the claim that the expected value of the future is positive.