Three structural patterns across eleven AIM cost-effectiveness analyses

Over the last few weeks I read eleven cost-effectiveness analyses (CEAs) published by Ambitious Impact (formerly Charity Entrepreneurship). Seven were global health and development; four were animal welfare. The earliest was from 2022; the rest were from 2024. I was not looking at whether any individual CEA reached the right answer, but at whether the same conceptual parameters were treated consistently across analyses. They mostly weren't, and three of the inconsistencies seem worth flagging in public.

The short version: AIM's "probability of success" parameter is constructed in three entirely different ways across the corpus, with values ranging from 0.2 to 1.0 for what is supposed to be the same conceptual quantity. The 2024 GHD template's internal and external validity adjustments are applied inconsistently from one CEA to the next, including one where they're explicitly zeroed out. And the template's "suggested defaults" are often left untouched rather than customized, most strikingly in Digital Pulmonary Rehabilitation 2024, which draws on far more template parameters than the other CEAs and ends up with 30 of its 55 suggested values left at defaults across the live model. None of these is an individual error. They're patterns in how the template gets used in practice, and I think they might be useful to the people who maintain it.
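To make the stakes concrete, here is a minimal, hypothetical sketch of how these parameter families typically multiply through a cost-effectiveness estimate. This is not AIM's actual template or its numbers; all values and parameter names are invented for illustration. The point is only that a probability-of-success parameter spanning 0.2 to 1.0, as observed across the corpus, moves the bottom line fivefold on its own:

```python
def cost_per_unit_effect(
    effect_size,            # e.g. DALYs averted per person reached (hypothetical)
    people_reached,
    total_cost,
    p_success=0.6,          # "probability of success" discount
    internal_validity=0.8,  # discount for bias in the underlying evidence
    external_validity=0.7,  # discount for transfer to the new context
):
    """Cost per discounted unit of effect, in the generic multiplicative form."""
    adjusted_effect = effect_size * internal_validity * external_validity
    expected_impact = adjusted_effect * people_reached * p_success
    return total_cost / expected_impact

# Same intervention, same evidence, only p_success varied across the
# 0.2-to-1.0 range seen in the corpus:
pessimistic = cost_per_unit_effect(0.05, 10_000, 100_000, p_success=0.2)
optimistic = cost_per_unit_effect(0.05, 10_000, 100_000, p_success=1.0)
# pessimistic is exactly 5x optimistic, since every other term cancels.
```

Because the parameters enter multiplicatively, inconsistency in any one of them propagates linearly into the final figure, which is why construction differences across CEAs matter even when each individual model is internally coherent.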