You may find this 80K article useful, both for their analysis and for all the data they collected: How much do solutions to social problems differ in their effectiveness? A collection of all the studies we could find. Bottom line: 3–10x, not >1,000x, for measurable interventions, with a further 2–10x spread stacked on top for harder-to-measure interventions:

Overall, I roughly estimate that the most effective measurable interventions in an area are usually around 3–10 times more cost effective than the mean of measurable interventions (where the mean is the expected effectiveness you’d get from picking randomly). If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.
Also this section:
3. How much can we gain from being data-driven?
People in effective altruism sometimes say things like “the best charities achieve 10,000 times more than the worst” — suggesting it might be possible to have 10,000 times as much impact if we only focus on the best interventions — often citing the DCP2 data as evidence for that.
This is true in the sense that the differences across all cause areas can be that large. But applied to a specific cause area, the claim would be misleading in two important ways.
First, as we’ve just seen, the data most likely overstates the true, forward-looking differences between the best and worst interventions.
Second, it often seems fairer to compare the best with the mean intervention, rather than the worst intervention. …
Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.
This is still a big boost, and hugely underappreciated by the world at large. However, it’s far less than I’ve heard some people in the effective altruism community claim.
In addition, there are downsides to being data-driven in this way — by insisting on a data-driven approach, you might be ruling out many of the interventions in the tail (which are often hard to measure, and so will be missing).
(“Hits-based rather than data-driven” is quite counterintuitive, especially to someone like me who’s worked most of my career in data-for-decision-guidance roles, but a useful corrective to the streetlight effect.)
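For intuition on where a 3–10x best-vs-mean figure can come from, here’s a toy simulation (my own illustration with assumed parameters, not the article’s actual data): if measured cost-effectiveness is heavy-tailed, say lognormal, the best of ~100 interventions typically lands a single-digit multiple above the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: measured cost-effectiveness of 100 interventions in one
# cause area, drawn from a lognormal (heavy-tailed) distribution.
# sigma=1.0 is an assumption picked purely for illustration.
effectiveness = rng.lognormal(mean=0.0, sigma=1.0, size=100)

best = effectiveness.max()
mean = effectiveness.mean()  # expected value of picking randomly
print(f"best vs. mean: {best / mean:.1f}x")

# Stacking the extra 2-10x factor for hard-to-measure interventions
# on top of the 3-10x measured spread gives a combined 6-100x range.
print(f"combined spread: {3 * 2}x to {10 * 10}x")
```

The exact ratio depends heavily on the assumed sigma and the number of interventions, which is presumably part of why the article gives a range rather than a point estimate.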
Edit: whoops just saw Cody’s comment above pointing to the same article.
If you also include interventions whose effectiveness can’t be measured in advance, then I’d expect the spread to be larger by another factor of 2–10, though it’s hard to say how the results would generalise to areas without data.
I found this claim very interesting. @Cody_Fenwick would you be open to giving a little more detail on this range and how you came to it? :)
The article is by Ben Todd, not Cody :) The fuller quote from Ben in the article is:
If we were to expand this to also include non-measurable interventions, I would estimate the spread is somewhat larger, perhaps another 2–10 fold. This is mostly based on my impression of cost-effectiveness estimates that have been made of these interventions — it can’t (by definition) be based on actual data. So, it’s certainly possible that non-measurable interventions could vary by much more or much less.
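In other words, the range comes from eyeballing published cost-effectiveness estimates rather than from measured outcomes. Operationally the “spread” is just a best-vs-mean ratio over those estimates, something like this sketch (the numbers are made up, not from the article):

```python
# Hypothetical cost-effectiveness estimates (made-up numbers, not the
# article's data) for hard-to-measure interventions in one area,
# e.g. in units of impact per $1,000 donated.
estimates = [0.5, 1.0, 2.0, 3.0, 8.0]

best = max(estimates)
mean = sum(estimates) / len(estimates)
print(f"best vs. mean: {best / mean:.1f}x")  # ~2.8x here
```

Whether that lands at 2x or 10x depends entirely on which estimates you trust, hence the wide range.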
Ah thanks for pointing out my mistake! And yes, I read this paragraph in the article, but I still couldn’t work out how they could provide such a precise range.