Yes, in general we care about E(CE of A/CE of B).
In this piece, we tried to characterize the problem we face when making claims about expected impacts in a high-uncertainty environment such as climate philanthropy.
Thanks for the clarifications, Johannes!
I meant that, in theory, we should just care about r = E(“CE of A”)/E(“CE of B”)[1], and pick A over B if the expected cost-effectiveness of A is greater than that of B (i.e. if r > 1), even if A were worse than B in e.g. 90 % of the worlds. In practice, if A is better than B in 90 % of the worlds (in which case the 10th percentile of “CE of A”/“CE of B” would be 1), r will often be higher than 1, so focusing on r or on E(“CE of A”/“CE of B”) will lead to the same decisions.
If r is what matters, then to investigate whether one’s decision to pick A over B is robust, the aim of the sensitivity analysis would be to ensure that r > 1 under various plausible conditions. So, instead of checking whether the CE of A is often higher than the CE of B, one should test whether the expected CE of A is often higher than the expected CE of B.
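That kind of sensitivity check could be sketched as follows. Everything here is a hypothetical placeholder: the functions expected_ce_a and expected_ce_b, their functional forms, and the parameter grids are made up purely for illustration, not taken from any actual analysis.

```python
# Sketch of a sensitivity analysis on r = E(CE of A) / E(CE of B),
# checking that r > 1 across a grid of plausible input values.
# All functions and parameter values below are illustrative assumptions.

def expected_ce_a(effect_size):
    # Assumed (hypothetical) linear dependence of E(CE of A) on an uncertain effect size
    return 2.0 * effect_size

def expected_ce_b(discount):
    # Assumed (hypothetical) dependence of E(CE of B) on an uncertain discount rate
    return 1.0 / (1.0 + discount)

# Plausible conditions to test: combinations of the uncertain inputs
effect_sizes = [0.6, 0.8, 1.0]
discounts = [0.0, 0.05, 0.1]

robust = all(
    expected_ce_a(e) / expected_ce_b(d) > 1
    for e in effect_sizes
    for d in discounts
)
print("r > 1 under all plausible conditions:", robust)
```

If r > 1 held across the whole grid, the decision to pick A over B would count as robust in the sense above; a single cell with r < 1 would flag the input combinations under which the ranking flips.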
In practice, it might be the case that:
- If r > 1 and A is better than B in e.g. 90 % of the worlds, then the conclusion that r > 1 is robust, i.e. we can be confident that A will continue to be better than B upon further investigation.
- If r > 1 and A is better than B in e.g. just 25 % of the worlds, then the conclusion that r > 1 is not robust, i.e. we cannot be confident that A will continue to be better than B upon further investigation.
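To make the contrast concrete, here is a minimal Monte Carlo sketch of a case where r > 1 even though A is better than B in only a minority of worlds. The two cost-effectiveness distributions are made up purely for illustration:

```python
import random

random.seed(0)

N = 100_000
# Hypothetical, purely illustrative cost-effectiveness distributions:
# B is a safe bet; A usually underperforms B but has a heavy right tail.
ce_b = [1.0] * N                                 # B: 1 unit of value per $ in every world
ce_a = [10.0 if random.random() < 0.2 else 0.5   # A: 0.5 in ~80 % of worlds, 10 in ~20 %
        for _ in range(N)]

r = (sum(ce_a) / N) / (sum(ce_b) / N)            # r = E(CE of A) / E(CE of B)
share_a_better = sum(a > b for a, b in zip(ce_a, ce_b)) / N

print(f"r = {r:.2f}")                                        # ~2.4: expected CE favours A
print(f"A better than B in {share_a_better:.0%} of worlds")  # only ~20 %
```

Here E(CE of A) = 0.2 × 10 + 0.8 × 0.5 = 2.4, so r ≈ 2.4 and A is preferred in expectation, even though B beats A in roughly 80 % of worlds.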
How do you think about adaptation (e.g. economic growth, adoption of air conditioning, and migration)? I forgot to finish this sentence in my last comment.
[1] Note that E(X/Y) is not equal to E(X)/E(Y).
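A quick numerical illustration, using arbitrary toy distributions for X and Y:

```python
import random

random.seed(0)

N = 100_000
# X is constant at 2; Y is uniform on {1, 3}, so E(Y) = 2
# but E(1/Y) = (1 + 1/3)/2 = 2/3, which differs from 1/E(Y) = 1/2.
xs = [2.0] * N
ys = [random.choice([1.0, 3.0]) for _ in range(N)]

e_ratio = sum(x / y for x, y in zip(xs, ys)) / N   # E(X/Y) ≈ 2 * 2/3 = 4/3
ratio_e = (sum(xs) / N) / (sum(ys) / N)            # E(X)/E(Y) = 2/2 = 1

print(f"E(X/Y)    ≈ {e_ratio:.3f}")   # ≈ 1.333
print(f"E(X)/E(Y) ≈ {ratio_e:.3f}")   # ≈ 1.000
```

The gap arises because 1/Y is convex, so by Jensen's inequality E(1/Y) ≥ 1/E(Y), with strict inequality whenever Y varies.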