I don’t think a complete case has been made (even from a total utilitarian, longtermist perspective) that at the current funding margin, it makes sense to spend marginal dollars on longtermism-motivated projects instead of animal welfare projects. I’d be very interested to see this comparison in particular.
I think this is wildly overdetermined in favor of longtermism. For example, I think at the current margins, a well-spent dollar has a ~10^-13 chance of making the future go much better, with a value probably more than 10^50 happy human lives (and with a much greater expected value—arguably infinite, but that’s another conversation). So the marginal longtermist dollar is worth much more than 10^37 happy lives in expectation. (That’s way more than the number of fish that have ever lived, but for the sake of having a number I think we can safely upper-bound the direct effect of the marginal animal-welfare dollar at 10^0 happy lives.) Given utilitarianism, even if you nudge my numbers quite a bit, I think longtermism blows animal welfare out of the water.
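As a sanity check on that arithmetic (using only the made-up numbers above, not anything more authoritative), the expected value per marginal longtermist dollar is just the probability of success times the payoff:

```python
# Fermi check on the numbers above (all inputs are the rough guesses from the comment).
p_success = 1e-13        # chance a well-spent marginal dollar makes the future go much better
value_if_success = 1e50  # happy human lives in that much-better future
ev_longtermist_dollar = p_success * value_if_success
print(f"Expected value per longtermist dollar: {ev_longtermist_dollar:.0e} happy lives")  # ~1e+37

ev_animal_welfare_dollar = 1e0  # assumed upper bound on the direct effect per dollar
print(f"Ratio of longtermist to animal-welfare dollar: {ev_longtermist_dollar / ev_animal_welfare_dollar:.0e}")  # ~1e+37
```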
Of course, I don’t think a longtermist dollar is actually ~10^40 times more effective than an animal-welfare one, because animal-welfare spending has miscellaneous side effects on the long-term future. But I think those side effects dominate the direct effect on animals. (I have heard an EA working on animal welfare say that the value of their work is dominated by its side effects on humans’ attitudes.) And presumably those side effects aren’t greater than the benefits of funding longtermist projects directly.
I tend to think you’re right, but I don’t think it’s wildly overdetermined, mostly because reducing animal suffering seems more robustly good than preventing extinction does (I realize preventing extinction is not the sole or explicit goal of longtermism, but it is sometimes an intermediate goal).
You can also compare s-risk reduction work with animal welfare.
You asked for an analysis “even from a total utilitarian, longtermist perspective.” From that perspective, I claim that preventing extinction clearly has astronomical (positive) expected value, since variance between possible futures is dominated by what the cosmic endowment is optimized for, and optimizing for utility is much more likely than optimizing for disutility. If you disagree, I’d be interested to hear why, here or on a call.
A proper treatment of this should take into account that short-term helping might also have positive effects across lots of simulations, to a much greater extent than long-term helping does. https://longtermrisk.org/how-the-simulation-argument-dampens-future-fanaticism
Sure, want to change the numbers by a factor of, say, 10^12 to account for simulations? The long-term effects still dominate. (Maybe taking actions to influence our simulators is more effective than trying to improve the long-term future of our universe, but that isn’t an argument for doing naive short-term interventions.)
10^12 might be too low. Making up some numbers: if future civilizations can create 10^50 lives, and we think there’s a 0.1% chance that 0.01% of that will be spent on ancestor simulations, then that’s 10^43 expected lives in ancestor simulations. If each such simulation uses 10^12 lives’ worth of compute, that’s 10^31 simulations, i.e. a ~10^31 multiplier on short-term helping.
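Spelling out that arithmetic (again, with the made-up inputs above):

```python
# Fermi check on the ancestor-simulation multiplier (inputs are the made-up numbers above).
future_lives = 1e50             # lives future civilizations could create
p_simulations = 1e-3            # 0.1% chance ancestor simulations happen at all
fraction_on_simulations = 1e-4  # 0.01% of that capacity spent on ancestor simulations
expected_sim_lives = future_lives * p_simulations * fraction_on_simulations
print(f"Expected lives in ancestor simulations: {expected_sim_lives:.0e}")  # ~1e+43

lives_per_simulation = 1e12     # lives' worth of compute per simulation
n_simulations = expected_sim_lives / lives_per_simulation
print(f"Implied multiplier on short-term helping: {n_simulations:.0e}")  # ~1e+31
```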