I tend to think you’re right, but I don’t think it’s wildly overdetermined, mostly because animal suffering reduction seems more robustly good than preventing extinction does (which I realize is not the sole or explicit goal of longtermism, but is sometimes an intermediate goal).
You asked for an analysis “even from a total utilitarian, longtermist perspective.” From that perspective, I claim that preventing extinction clearly has astronomical (positive) expected value, since variance between possible futures is dominated by what the cosmic endowment is optimized for, and optimizing for utility is much more likely than optimizing for disutility. If you disagree, I’d be interested to hear why, here or on a call.
You can also compare s-risk reduction work with animal welfare.