Thinking about it, in general, it seems to me that the ranges of possible effects of interventions could be unbounded, so then you’d have to accept some chance of having a negative impact in the corresponding cause areas. Perhaps this is something your general framework could be augmented to take into account e.g. could one set a maximum allowed probability of having a negative effect in one cause area, or would it be sufficient to have a positive expected effect in each area?
So, it’s worth distinguishing between
quantified uncertainty, or risk, when you can put a single probability on something, and
unquantified uncertainty, when you can't decide among multiple probabilities.
If there’s a quantified risk of a negative effect, but your expected value is positive under all of the worldviews you find plausible enough to consider (e.g. across all cause areas), then you’re still okay under the framework I propose in this post. I am effectively suggesting that it’s sufficient to have a positive expected effect in each area (although there may be important considerations that go beyond cause areas).
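To make the check concrete, here is a minimal sketch (not from the original comment) of what "positive expected value under every plausible worldview" amounts to. The cause areas, worldview names, and effect estimates are purely illustrative assumptions, standing in for whatever worldviews and estimates you actually find plausible.

```python
# Illustrative sketch: a portfolio counts as robustly positive only if its
# expected effect is positive under every worldview deemed plausible.
# All names and numbers below are made up for illustration.

portfolio = {"global_health": 0.5, "animal_welfare": 0.3, "x_risk": 0.2}

# Each worldview assigns an expected effect (per unit of resources) to each
# cause area; a negative number represents a risk of net harm on that view.
worldviews = {
    "worldview_A": {"global_health": 2.0, "animal_welfare": 0.5, "x_risk": 1.0},
    "worldview_B": {"global_health": 1.0, "animal_welfare": 3.0, "x_risk": -0.5},
    "worldview_C": {"global_health": 0.5, "animal_welfare": 1.0, "x_risk": 4.0},
}

def expected_value(allocation, effects):
    """Expected effect of an allocation under one worldview's effect estimates."""
    return sum(allocation[area] * effects[area] for area in allocation)

def robustly_positive(allocation, views):
    """True only if the expected value is positive under every plausible worldview."""
    return all(expected_value(allocation, effects) > 0 for effects in views.values())

if __name__ == "__main__":
    for name, effects in worldviews.items():
        print(name, expected_value(portfolio, effects))
    print("Robustly positive:", robustly_positive(portfolio, worldviews))
```

On these made-up numbers the portfolio comes out positive under all three worldviews, so it passes; if even one worldview gave it a negative expected value, it would fail the robustness check.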
However, you might have enough cluelessness that you can’t find any portfolio that’s positive in expected value under all plausible worldviews like this. That would suck, but I would normally accept continuing to look for robustly positive expected value portfolios as a good option (whether or not that search is itself robustly positive).