If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.
I found this claim very interesting. @Cody_Fenwick, would you be open to giving a little more detail on this range and how you came to it? :)
The article is by Ben Todd, not Cody :) The fuller quote from Ben in the article is:
If we were to expand this to also include non-measurable interventions, I would estimate the spread is somewhat larger, perhaps another 2–10 fold. This is mostly based on my impression of cost-effectiveness estimates that have been made of these interventions — it can’t (by definition) be based on actual data. So, it’s certainly possible that non-measurable interventions could vary by much more or much less.
Ah, thanks for pointing out my mistake! And yes, I read this paragraph in the article, but still couldn't work out how they could provide such a precise range.