How does marginal spending on animal welfare and global health influence the long-term future?
I’d guess that most of the expected impact in both cases comes from the futures in which Earth-originating intelligent life (E-OIL) avoids near-term existential catastrophe and goes on to create a vast amount of value in the universe by creating a much larger economy and colonizing other galaxies and solar systems, and transforming the matter there into stuff that matters a lot more morally than lifeless matter (“big futures”).
For animal welfare spending, then, perhaps most of the expected impact comes from reducing the amount of suffering of animals and other non-human sentient beings (e.g. future AIs) in the universe, compared to the big futures without the late-2020s animal welfare spending. Perhaps the causal pathway is that the spending affects what people think about the moral value of animal suffering, which in turn positively affects what E-OIL does with the reachable universe in big futures (less animal suffering and a lower probability of neglecting the importance of sentient AI moral patients).
For global health spending, perhaps most of the expected impact comes from increasing the probability that E-OIL goes on to have a big future. Assuming the big futures are net positive (as I think is likely) this would be a good thing.
I think some global health spending probably has much more of an impact on this than others. For example, $100M would only put a dent in annual malaria deaths (~20,000 fewer deaths, a <5% reduction in annual deaths for one year), and it seems like that would have quite a small effect on existential risk. Whereas if the money were spent on reducing the probability of a severe global pandemic in the 2030s (spending which seems like it could qualify as “global health” spending), it plausibly could have a much more significant effect. I don’t know how much $100M could reduce the odds of a global pandemic in the 2030s, but intuitively I’d guess it could make enough of a difference to be much more impactful on reducing 21st-century existential risk than reducing malaria deaths.
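The malaria numbers above can be sanity-checked with a quick back-of-envelope calculation. The cost-per-death-averted figure ($5,000, roughly GiveWell-style) and the ~600,000 annual global malaria deaths are rough assumptions of mine, not sourced estimates:

```python
# Back-of-envelope check of the malaria figures above.
# All inputs are rough assumptions, not sourced estimates.
cost_per_death_averted = 5_000      # assumed $ per death averted (GiveWell-style ballpark)
budget = 100_000_000                # the $100M from the text
annual_malaria_deaths = 600_000     # approximate annual global malaria deaths

deaths_averted = budget / cost_per_death_averted
fraction_of_annual_deaths = deaths_averted / annual_malaria_deaths

print(f"deaths averted: {deaths_averted:,.0f}")                  # ~20,000
print(f"share of annual deaths: {fraction_of_annual_deaths:.1%}")  # ~3.3%, i.e. <5%
```

Under these assumptions the arithmetic matches the text: roughly 20,000 deaths averted, a bit over 3% of one year's malaria deaths.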
How would the best “global health” spending compare to the “animal welfare” spending? Could it reduce existential risk by enough to do more good than better values achieved via animal welfare spending could do?
I think it plausibly could (i.e. the global health spending plausibly could do much more good), especially in the best futures in which it turns out that AI does our moral philosophy really well such that our current values don’t get locked in, but rather we figure out fantastic moral values after e.g. a long reflection and terraform the reachable universe based on those values.
But I think that in expectation, $100M of global health spending would only reduce existential risk by a small amount, increasing the EV of the future by something like <0.001%. Intuitively, an extra $100M spent on animal welfare (given the relatively small size of current spending on animal welfare) could do a lot more good: it could increase the value of the big future by a larger (though still small) amount than the small increase in the probability of a big future from the global health spending.
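The comparison above contrasts two different levers on the same expected value: global health spending nudges the *probability* of a big future, while animal welfare spending nudges the *value* of that future conditional on reaching it. A toy model makes the structure explicit (the baseline probability and the two effect sizes are illustrative assumptions, not estimates from the text beyond the <0.001% figure):

```python
# Toy EV comparison: probability lever vs. value lever.
# All numbers are illustrative assumptions for the structure of the argument.
v_big = 1.0        # normalized value of a big future
p_big = 0.5        # assumed baseline probability of a big future

# Global health: small absolute bump to the probability of a big future
dp = 1e-5          # <0.001% absolute increase, matching the text's guess
ev_gain_health = dp * v_big

# Animal welfare: small fractional improvement to the value of the big future
dv = 1e-4          # assumed fractional value improvement (larger than dp)
ev_gain_welfare = p_big * dv * v_big

print(f"health EV gain:  {ev_gain_health:.1e}")
print(f"welfare EV gain: {ev_gain_welfare:.1e}")
```

The point is not the specific numbers but the comparison: if the fractional value improvement from animal welfare spending exceeds the probability bump from global health spending (scaled by the baseline probability), the welfare spending wins in expectation.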
Initially I was answering about halfway toward Agree from Neutral, but after thinking this out, I’m moving further toward Agree.