[Question] Does GiveWell still plan to model the uncertainty of their cost-effectiveness estimates?

In December 2022, GiveWell announced the results of its “Change Our Mind Contest”. GiveWell awarded joint first place and a $20,000 prize to Noah Haber for “GiveWell’s Uncertainty Problem”. According to GiveWell:
The author argues that without properly accounting for uncertainty, GiveWell is likely to allocate its portfolio of funding suboptimally, and proposes methods for addressing uncertainty.

One of the honorable mention awards also went to a critique of GiveWell’s approach to uncertainty. Both essays recommended using Monte Carlo simulation to properly model uncertainty in cost-effectiveness.

In December 2023, GiveWell published “How We Plan to Approach Uncertainty in Our Cost-Effectiveness Models”, in which they explained their next steps:
Going forward, we plan to: Incorporate sensitivity analysis in published models. We’ve begun this process by incorporating basic sensitivity analysis on key parameters in recent grant and intervention report pages (e.g., zinc/ORS; vitamin A supplementation; MiracleFeet). We’re currently revamping our top charity CEAs to make them more legible, and we plan to incorporate sensitivity analysis and Monte Carlos into these before publishing.
[...]
In the three linked pages, GiveWell has conducted a so-called one-at-a-time sensitivity analysis, an approach which, as GiveWell acknowledges, is unable to quantify the overall uncertainty in cost-effectiveness.
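To make the distinction concrete, here is a minimal sketch of a one-at-a-time analysis on a toy cost-effectiveness model; every number and parameter name below is invented for illustration and is not from GiveWell’s actual models.

```python
# Toy model: cost-effectiveness = effect * value_per_unit / cost.
# All parameter values are invented for illustration.
best_guess = {"effect": 0.05, "value_per_unit": 100.0, "cost": 4.0}
ranges = {
    "effect": (0.02, 0.08),
    "value_per_unit": (60.0, 140.0),
    "cost": (3.0, 6.0),
}

def cost_effectiveness(p):
    return p["effect"] * p["value_per_unit"] / p["cost"]

# One-at-a-time: vary a single parameter while holding the others
# at their best-guess values.
for name, (lo, hi) in ranges.items():
    ce_lo = cost_effectiveness({**best_guess, name: lo})
    ce_hi = cost_effectiveness({**best_guess, name: hi})
    print(f"{name}: {ce_lo:.2f} (at {lo}) to {ce_hi:.2f} (at {hi})")
```

Each row bounds the influence of one input in isolation; no combination of the rows yields a probability distribution for the bottom line, which is why this approach cannot quantify the overall uncertainty.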
Since the contest winners were announced in December 2022, GiveWell has, as far as I can tell, made the following grants (source):
| | January 2023 to June 2024 | January 2024 to June 2024 |
| --- | --- | --- |
| To top charities | $295 M | $89 M |
| To organisations other than top charities | $143 M | $31 M |
I cannot find on GiveWell’s website any investigation in which the full uncertainty (from all parameters) is modelled. Has GiveWell published such a thing? If not, do they still plan to?
Hey, thanks for the question! I’m Alex Cohen, a researcher at GiveWell, and wanted to chime in.
We did say we’d include a 25th/75th percentile range on bottom line cost-effectiveness (in addition to the one-way sensitivity checks). We haven’t added that yet, and we should. We ran into some issues running the full sensitivity analyses (instead of the one-way sensitivity checks we do have), and we prioritized publishing updated intervention reports and cost-effectiveness analyses without them.
We’ll add those percentile ranges to our top charity intervention reports (so the simple cost-effectiveness analyses will also include a bottom line cost-effectiveness 25/75 range, in addition to one-way sensitivity checks) and ensure that new intervention reports/grant pages have them included before publishing. We think it’s worth emphasizing how uncertain our cost-effectiveness estimates are, and this is one way to do that (though it has limitations).
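To make the 25th/75th percentile idea concrete: in the simplest version, every uncertain parameter is sampled jointly in a Monte Carlo, and the percentiles are read off the resulting distribution of bottom-line cost-effectiveness. A minimal sketch, with invented distributions rather than GiveWell’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Sample all parameters jointly (distributions are illustrative assumptions).
effect = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n)
value_per_unit = rng.normal(loc=100.0, scale=20.0, size=n)
cost = rng.lognormal(mean=np.log(4.0), sigma=0.2, size=n)

# Distribution of bottom-line cost-effectiveness across all draws.
ce = effect * value_per_unit / cost

p25, p50, p75 = np.percentile(ce, [25, 50, 75])
print(f"median {p50:.2f}, 25th-75th percentile range [{p25:.2f}, {p75:.2f}]")
```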
We’re still not planning to base our decision-making on this uncertainty in the bottom line cost-effectiveness (like the “Change Our Mind Contest” post recommended) or model uncertainty on every parameter. To defend against the Optimizer’s Curse, we prefer our approach of skeptically adjusting our inputs, rather than an all-in adjustment to bottom-line cost-effectiveness. We explain why in the uncertainty post.
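For readers unfamiliar with the Optimizer’s Curse: funding whichever option has the highest estimated cost-effectiveness systematically selects for estimates inflated by noise. The toy simulation below (invented numbers) shows the effect, and shows how a generic Bayesian shrinkage of noisy estimates counteracts it; this illustrates the principle only, not GiveWell’s actual procedure, which adjusts individual inputs instead.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, options = 10_000, 20

# True cost-effectiveness of each option; estimates are noisier for some
# options than others (think speculative vs. well-studied interventions).
true_ce = rng.normal(loc=5.0, scale=1.0, size=(trials, options))
noise_sd = np.linspace(0.5, 4.0, options)  # per-option estimation noise
estimates = true_ce + rng.normal(size=(trials, options)) * noise_sd

rows = np.arange(trials)

# Naively funding the highest estimate selects for lucky noise:
naive = estimates.argmax(axis=1)
print("naive winner, estimated CE:", estimates[rows, naive].mean())  # inflated
print("naive winner, true CE:     ", true_ce[rows, naive].mean())

# Skeptical adjustment: shrink each estimate toward the prior mean in
# proportion to its noise (posterior mean under a normal prior N(5, 1)).
shrunk = 5.0 + (1.0 / (1.0 + noise_sd**2)) * (estimates - 5.0)
skeptical = shrunk.argmax(axis=1)
print("skeptical winner, true CE: ", true_ce[rows, skeptical].mean())
```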
Really appreciate you raising this. Sorry this has taken so long, and grateful for the nudge!
Not that I really mind, but why is the author anonymous? It seems like such a tame criticism that it’s hard to imagine anyone getting upset about it.
I’m really interested in the answer to this question and happy to say that publicly.
Hypothetically they could work at GiveWell?