Thanks for your thoughts and the links. I agree that more consideration of long-term effects and population ethics seems important (also, I would have thought, for the impact of accelerating animal welfare improvements). I don’t have anything to go on for quantitative estimates of long-term effects myself, though.
Regarding the possibility of cage-free campaigns being net negative, I agree this sounds like a risk, so perhaps I was loose in saying that donating a certain amount to THL could be “robustly better”. I’m not sure it’s possible to be 100% certain that any set of interventions won’t have a negative impact, though. I was basically going for being able to feel “quite confident” that the impact on farmed animals wouldn’t be negative (edit: given the assumptions I’ve made; all things considered, I’m not as confident as that), and I haven’t yet been able to be precise about what that means.
Thinking about it, it seems to me that, in general, the ranges of possible effects of interventions could be unbounded, so you’d have to accept some chance of having a negative impact in the corresponding cause areas. Perhaps this is something your general framework could be augmented to take into account, e.g. could one set a maximum allowed probability of having a negative effect in one cause area, or would it be sufficient to have a positive expected effect in each area?
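For concreteness, here’s a toy sketch of those two candidate criteria side by side; the effect distribution and the 10% cap are made-up placeholders, not estimates for any real intervention:

```python
import numpy as np

# Made-up effect distribution for one cause area: positive in expectation,
# but with a real left tail of net harm. Purely illustrative numbers.
rng = np.random.default_rng(0)
effects = rng.normal(loc=2.0, scale=3.0, size=100_000)

p_negative = (effects < 0).mean()  # estimated chance of net harm
expected_effect = effects.mean()   # estimated expected effect

MAX_P_NEGATIVE = 0.10  # an arbitrary cap, purely for illustration
print(f"P(effect < 0) ≈ {p_negative:.2f}; under the cap? {p_negative <= MAX_P_NEGATIVE}")
print(f"E[effect] ≈ {expected_effect:.2f}; positive in expectation? {expected_effect > 0}")
```

For these numbers the intervention passes the expected-value test but fails the probability cap (P(effect < 0) is about 0.25), so the two criteria can genuinely come apart.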
So, it’s worth distinguishing between:

1. quantified uncertainty, or risk, when you can put a single probability on something, and
2. unquantified uncertainty, when you can’t decide among multiple probabilities.
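To make the distinction concrete, here’s a toy numeric sketch (all payoffs and probabilities invented):

```python
# Quantified uncertainty: one credence, hence one expected value.
gain, loss = 10.0, -8.0   # hypothetical payoffs of an intervention
p_backfire = 0.2          # a single probability you're willing to commit to
ev = p_backfire * loss + (1 - p_backfire) * gain
print(ev)  # 6.4: a single, well-defined number

# Unquantified uncertainty: you can't settle on one probability, only a set
# of them, so you get a range of expected values rather than one number.
plausible_ps = [0.05, 0.2, 0.6]  # credences you can't decide among
evs = [p * loss + (1 - p) * gain for p in plausible_ps]
print(min(evs), max(evs))  # ≈ -0.8 and ≈ 9.1: even the sign is ambiguous here
```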
If there’s a quantified risk of a negative impact, but your expected value is positive under all of the worldviews you find plausible enough to consider anyway (e.g. for all cause areas), then you’re still okay under the framework I propose in this post. I am effectively suggesting that it’s sufficient to have a positive expected effect in each area (although there may be important considerations that go beyond cause areas).
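Here’s a minimal sketch of that test for a toy two-option portfolio; the worldview labels, the hypothetical “OtherOrg”, and all the per-dollar numbers are invented for illustration:

```python
# Donation split between THL and a hypothetical second organisation.
portfolio = {"THL": 0.2, "OtherOrg": 0.8}

# Made-up expected value per dollar to each option, under each worldview.
ev_per_dollar = {
    "animal-focused": {"THL": 5.0, "OtherOrg": 0.5},
    "human-focused":  {"THL": 0.1, "OtherOrg": 4.0},
    "longtermist":    {"THL": -1.0, "OtherOrg": 0.5},
}

def acceptable(portfolio, worldviews):
    """True iff the portfolio is positive in expectation under every worldview."""
    return all(
        sum(weight * evs[option] for option, weight in portfolio.items()) > 0
        for evs in worldviews.values()
    )

print(acceptable(portfolio, ev_per_dollar))  # True for these made-up numbers
```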
However, you might have enough cluelessness that you can’t find any portfolio that’s positive in expected value under all plausible worldviews like this. That would suck, but in that case I would normally accept continuing to look for robustly positive expected value portfolios as a good option (whether or not that option is itself robustly positive).
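Continuing the sketch above (reusing `acceptable` and `ev_per_dollar`), “continuing to look” could be as crude as scanning allocations for any that stay positive in expectation under every worldview:

```python
# Crude grid search over splits between the two hypothetical options.
robust_splits = [
    w for w in (i / 10 for i in range(11))
    if acceptable({"THL": w, "OtherOrg": 1 - w}, ev_per_dollar)
]
print(robust_splits)  # [0.0, 0.1, 0.2, 0.3] for these numbers; could be empty under deeper cluelessness
```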