In general, I am sympathetic to this argument, but I think the BOTEC is hinging on something that is far beyond implausible.
Imagine that with a $20 million investment in research, we could identify two new interventions that are 15x as effective as cash, rather than 5x, and therefore GiveWell could move donations to the 15x category rather than 5x. The additional social value from deploying, let’s say, $100 million of GiveWell donations to those new interventions would be ($100m * 15) - ($100m * 5), or $1 billion. That would be a 50x return on the $20 million investment in research, and that’s just counting one year’s worth of giving!
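The arithmetic above can be written out as a quick check (a sketch using only the hypothetical numbers from this comment, not a real model):

```python
# Back-of-the-envelope check of the figures above.
research_cost = 20e6   # hypothetical research investment
donations = 100e6      # one year of redirected GiveWell donations
mult_new = 15          # effectiveness of the new interventions (x cash)
mult_old = 5           # effectiveness of the current baseline (x cash)

# Additional social value from moving donations from 5x to 15x.
extra_value = donations * mult_new - donations * mult_old  # $1 billion

# Return on the research investment, counting one year of giving.
roi = extra_value / research_cost  # 50x
```

Running this confirms the $1 billion and 50x figures in the comment.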
Let’s say you can do a well-managed RCT on an intervention for half a million dollars. That seems pretty cheap, but not implausible for a developing-country RCT. A $20 million budget would then fund about 40 such trials, so finding two winners implies that 5% of the best candidate not-already-exhaustively-researched interventions would turn out to be 50% better than anything we’ve found so far. That seems implausible, at best.
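Where the implied hit rate comes from, spelled out under the numbers used in this thread ($20 million research budget, an assumed $0.5 million per RCT, and the two winning interventions posited in the BOTEC):

```python
research_budget = 20e6  # the hypothetical $20 million research investment
cost_per_rct = 0.5e6    # assumed cost of one well-managed RCT
winners_needed = 2      # the two new high-impact interventions the BOTEC posits

rcts_funded = research_budget / cost_per_rct     # 40 trials
implied_hit_rate = winners_needed / rcts_funded  # 0.05, i.e. 5%
```

Any cheaper per-trial cost only lowers the required hit rate, but the qualitative point stands: the BOTEC assumes roughly one in twenty candidate interventions beats everything found so far.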
And even if you could find them, you wouldn’t trust a single RCT; you’d need to do several more over time before you’d be willing to strongly trust that these interventions have such high ROIs. You’d also have to worry about reversion to the mean, which you probably would see, and remember that the mean for development interventions isn’t “as effective as cash”: it’s much, much less effective. So we’re talking about finding good evidence for something that works many standard deviations better than the mean. Unless the distribution of quantifiable interventions looks very strange, that seems like a bold claim to make. I’m certainly not going to claim that there is an efficient market in development economics, but I’m still very skeptical that billion-dollar bills are quite that easy to find.
Fair point! Perhaps a more modest standard would be appropriate—i.e., “giving that produces a net positive effect in the world, something more than 1x.”
If the bar is set that high, then there will be almost nothing worth funding except a minuscule set of interventions on a minuscule number of issues, and large foundations will be left with piles of money they don’t know what to do with, while the world still has lots of problems that need solving even if there’s no 10x intervention in sight.
I don’t think we were advocating leaving money on the sidelines for that reason—patient philanthropy is largely a different argument.
I think that we buy down the 10x interventions, then the 9x, the 8x, and so on. And even if those interventions aren’t yet known, discovering them may be possible without the same level of investment in RCTs.
I’m also sympathetic to the argument, but I think the BOTEC overstates the potential benefit for another reason. If GiveWell finds an opportunity to give $100 million per year at 15x the effectiveness of cash transfers rather than 5x (and assuming there is a large supply of giving opportunities at 5x), I think the benefit is $200 million per year rather than $1 billion: the $100 million spent on the 15x intervention achieves what would otherwise have required spending $300 million on a 5x intervention. Of course, as noted, that is for only one year, so the number over a longer time horizon would be much larger.
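The adjustment in this comment, written out as arithmetic (a sketch of the commenter’s framing, in which the benefit is measured in donation-dollars freed up rather than in units of cash-transfer value):

```python
donations = 100e6        # annual giving moved to the better intervention
mult_new, mult_old = 15, 5

# Dollars at 5x needed to match the impact of $100m at 15x.
equivalent_at_old = donations * mult_new / mult_old  # $300 million

# Benefit in donation-dollar terms: the spending you no longer need.
annual_benefit = equivalent_at_old - donations  # $200 million per year
```

The $1 billion figure in the original BOTEC and the $200 million figure here differ because they are denominated in different units: cash-transfer-equivalent value versus donation dollars at the 5x margin.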
Even with that adjustment, and considering the issues raised by David Manheim and other commenters, I find this post quite compelling – thank you for sharing it.