2. Yes, this makes a lot of sense and probably more closely captures GW’s intent behind the adjustments.
re: versions of this model, stuff being broken, etc. (mostly for the benefit of other readers, since I think Nuño knows all of this already)
The version linked in this post is still working perfectly fine for me, even when I am not logged into my account. There is a newer version from November 2022 that is broken; that was the version used in the GiveWell Change Our Minds contest entry with Sam Nolan and Hannah Rokebrand (here). The main contest entry notebook is not broken, because it uses a downloaded CSV of the results for each CEA rather than directly importing the results from the respective CEA notebooks (I believe Sam did this because of performance issues, but I guess it had an unintended benefit).
Since the GiveWell contest entry was submitted, I haven't made any updates to the code or anything else related to this project, and I don't intend to (although others are of course very welcome to fork it, etc.). Readers curious about the rough methods used can check out the notebook linked in this blogpost, which is still displaying properly (and is probably a bit easier to follow than the November 2022 version, because it does way less stuff). Readers curious about the end results of the analysis can read our main submission document, either on Observable or on the EA Forum.
Do share whatever you end up doing around worldview diversification! I'd be curious to read it, and I've spent some time thinking about these issues, especially in the global health context.
FWIW I think it's a bad solution, but why not quantify the uncertainty in the ex ante CEA? See this GiveWell Change Our Minds submission as an example; I don't think the uncertainty intervals are uninformatively large, although there is a rather strong assumption that the GiveWell models capture the right structure of the problem. Once the uncertainty is quantified, we could run something like the Bayesian adjustment I demonstrate in this PDF to (in theory!) eliminate the positive bias for more uncertain estimates. We could then compare the posterior distribution to an analogous distribution for AMF or another relevant benchmark.
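As a very rough sketch of the kind of adjustment I mean (this is a generic conjugate normal-normal update on log cost-effectiveness, not the exact method in the PDF, and every number below is invented), the key behaviour is that the noisier the estimate, the harder it gets shrunk back toward the prior:

```python
import numpy as np

# Toy Bayesian adjustment on log cost-effectiveness (all numbers made up).
# prior_mu/prior_sd stand in for a prior over log(CE) for interventions of
# this type; est_mu/est_sd stand in for the ex ante CEA's point estimate
# together with its quantified uncertainty.
prior_mu, prior_sd = np.log(5.0), 1.0   # prior centred around ~5x some benchmark, broad
est_mu, est_sd = np.log(20.0), 1.5      # noisy ex ante estimate: ~20x, very uncertain

# Conjugate normal-normal update: posterior precision is the sum of precisions,
# and the posterior mean is a precision-weighted average of prior and estimate.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / est_sd**2)
post_mu = post_var * (prior_mu / prior_sd**2 + est_mu / est_sd**2)
post_sd = np.sqrt(post_var)

# Compare the posterior to a benchmark distribution (e.g. an AMF-like one,
# also made up here) by simulation.
rng = np.random.default_rng(0)
candidate = rng.normal(post_mu, post_sd, 100_000)
benchmark = rng.normal(np.log(10.0), 0.5, 100_000)
print(f"posterior median multiplier: {np.exp(post_mu):.1f}x")
print(f"P(candidate beats benchmark) = {(candidate > benchmark).mean():.2f}")
```

With these made-up numbers the 20x estimate gets pulled down to roughly 8x, which is the "eliminate the positive bias" effect in miniature: the wider the estimate's interval, the less weight it carries against the prior.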
Conceptually, the difference between the ex ante and ex post CEA isn’t categorical. It is a matter of degree—the degree of uncertainty about the model and its parameters. This difference could be captured with an adequate explicit treatment of uncertainty in the CEA.
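To make the "matter of degree" point concrete, here's a toy Monte Carlo sketch (the model structure and every parameter are hypothetical): the same CEA evaluated twice, once with narrow "ex post"-style parameter distributions and once with wide "ex ante"-style ones, so the only difference between the two estimates is how much parameter uncertainty gets propagated:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def toy_cea(cost_sd, effect_sd):
    """Toy cost-effectiveness model: benefit per dollar, with lognormal
    uncertainty on both inputs (all parameters hypothetical)."""
    cost_per_person = rng.lognormal(mean=np.log(8.0), sigma=cost_sd, size=N)
    benefit_per_person = rng.lognormal(mean=np.log(0.02), sigma=effect_sd, size=N)
    return benefit_per_person / cost_per_person

# "Ex post"-like: parameters pinned down by data, so narrow distributions.
ex_post = toy_cea(cost_sd=0.1, effect_sd=0.2)
# "Ex ante"-like: identical model structure, much wider parameter uncertainty.
ex_ante = toy_cea(cost_sd=0.5, effect_sd=1.0)

for name, draws in [("ex post-like", ex_post), ("ex ante-like", ex_ante)]:
    lo, hi = np.percentile(draws, [5, 95])
    print(f"{name}: 90% interval {lo:.4f} to {hi:.4f} benefit per dollar")
```

Both runs share the same point estimate; they differ only in the width of the output distribution, which is exactly the sense in which ex ante vs ex post is a matter of degree rather than kind.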