This is an important reflection, and one I've found myself querying when seeing various programs claim to be hyper-effective. Exceptionally effective interventions are rare, but we might expect a higher proportion of them to be showcased on this forum, given the selection bias already present in the membership/readership here.
However, I do feel the community naturally creates an incentive to inflate (consciously or not) the CEA of interventions: after all, if you aren't working on something that can compete with AMF, why take money away from it? The workaround is to live in the ambiguity of your intervention and argue that, under certain assumptions, your program could be better.
As you effectively note, the problem is that these "could" (a priori) judgments are riddled with reasoning risks and errors, which is why I feel the community could do more to support, and also challenge, reasoning methods (cognitive and computational). For example, lots of posts mention the key uncertainties people have about their interventions, but they often don't state second-order probabilities for them (not even GiveWell does this consistently), nor how much that uncertainty fundamentally underpins the intervention. That is a relatively simple fix, and one that could become a community norm.
I agree about the incentives/motivated reasoning problem. I suspect that uncertainty intervals would be uninformatively huge, so I don't know whether they are really useful in practice. Remember that cost-effectiveness is the ratio of two uncertain quantities (benefits and costs), and the ratio of two random variables follows a ratio distribution, which generally has heavy tails.
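To make the ratio-distribution point concrete, here is a minimal Monte Carlo sketch in Python; the lognormal shapes and all numbers are assumptions chosen purely for illustration, not drawn from any actual CEA:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs (illustrative distributions only, not from any real CEA):
benefits = rng.lognormal(mean=np.log(100), sigma=0.5, size=n)  # benefit per program
costs = rng.lognormal(mean=np.log(10), sigma=0.3, size=n)      # cost per program

cost_effectiveness = benefits / costs  # ratio of two random variables

for q in (5, 50, 95, 99):
    print(f"{q}th percentile: {np.percentile(cost_effectiveness, q):.1f}")
# The relative spread of the ratio is wider than that of benefits or costs
# alone, which is the "ratio distributions have heavy tails" point.
```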
FWIW I think it's a bad solution, but why not quantify the uncertainty in the ex ante CEA? See this GiveWell Change Our Minds submission as an example; I don't think the uncertainty intervals are uninformatively large, although there is a rather strong assumption that the GiveWell models capture the right structure of the problem. Once the uncertainty is quantified, we could run something like the Bayesian adjustment I demonstrate in this PDF to (in theory!) eliminate the positive bias for more uncertain estimates, and then compare the posterior distribution to an analogous distribution for AMF or another relevant benchmark.
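To sketch what that kind of adjustment could look like, here is a minimal normal-normal shrinkage example in Python; the conjugate-normal form and every number below are my own illustrative assumptions, not the calculation in the linked PDF or any GiveWell figure:

```python
import numpy as np

def shrink_estimate(estimate_mean, estimate_sd, prior_mean, prior_sd):
    """Conjugate normal-normal update: returns the posterior mean and sd.

    Noisier estimates (larger estimate_sd) are pulled harder toward the
    prior, which is what removes the optimistic bias of highly uncertain CEAs.
    """
    prior_prec = 1.0 / prior_sd**2
    est_prec = 1.0 / estimate_sd**2
    post_prec = prior_prec + est_prec
    post_mean = (prior_mean * prior_prec + estimate_mean * est_prec) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

# Hypothetical numbers on a log cost-effectiveness scale: a prior centred on a
# benchmark (e.g. something AMF-like), and two estimates with the same
# optimistic mean but different amounts of uncertainty.
prior_mean, prior_sd = np.log(10), 0.5
for est_sd in (0.3, 1.5):
    post_mean, post_sd = shrink_estimate(np.log(30), est_sd, prior_mean, prior_sd)
    print(f"estimate sd={est_sd}: posterior median = {np.exp(post_mean):.1f}, "
          f"posterior sd (log scale) = {post_sd:.2f}")
# The noisier estimate's posterior ends up much closer to the benchmark prior.
```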
Conceptually, the difference between the ex ante and ex post CEA isn’t categorical. It is a matter of degree—the degree of uncertainty about the model and its parameters. This difference could be captured with an adequate explicit treatment of uncertainty in the CEA.
Interesting. I don't know why the tails aren't larger, and I find Squiggle kinda hard to parse. Do you quantify cost uncertainty in addition to benefit uncertainty? I think that would make the bounds huge.