I see your point. Ok, let’s narrow down. Would you say that encouraging people to use evidence to make their decisions, in any area including giving, is robustly net positive?
Encouraging people to use evidence raises similar concerns: for example, they might become more effective at doing harmful things.
I do not know of a single intervention that’s robustly net positive.
Could it not plausibly be the case that supporting rigorous research into how best to reduce wild-animal suffering (WAS) is robustly net positive? I say this because whenever I’m making cause-prioritization considerations, the concern that always dominates seems to be wild-animal suffering and the effect that intervention x (whether it’s global poverty or domesticated animal welfare) will have on it.
General promotion of anti-speciesism, with equal emphasis on wild animals, would also seem to be robustly net positive. However, this general promotion would be difficult to do and may have a low success rate, so in an expected-utility calculation it would probably be outweighed by more speculative interventions, such as vegan advocacy, whose sign is unclear when it comes to wild-animal suffering.
Suppose we invest more into researching wild-animal suffering. We might become somewhat confident that an intervention is valuable, implement it, and then find that it is extremely harmful. WAS is sufficiently muddy that interventions might often have the opposite of the desired effect. Or perhaps research leads us to conclude that we need to halt space exploration to prevent people from spreading WAS throughout the galaxy, when in fact it would be beneficial to have more wild animals, or we would terraform new planets in a way that doesn’t cause WAS. Or, more likely, the research will just accomplish nothing.
I think higher-quality and more abundant information about wild-animal suffering would still be net positive, meaning that funding WAS research could be highly valuable. I say ‘could’ only because something else might still be more valuable. But if, on expected value, it seems like the best thing to do, the uncertainties shouldn’t put us off too much, if at all.
Yes, I agree that WAS research has a high expected value. My point was that it has a non-trivial probability (say, >10%) of being harmful.