How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation)

I have a project that I want to run by the community.

A while ago, Holden Karnofsky declared that the Open Philanthropy Project is dedicated to “hits-based giving”, a framework that accepts “philanthropic risk”: perhaps 90% of grants will have zero impact, but the few grants that do have impact will be impactful enough to make the entire portfolio worthwhile. This could be compared to the more traditional approach of GiveWell, where all grants go to organizations whose expected value is relatively well known (even if those organizations may still turn out to have zero impact).

While I’m quite sympathetic to the classic GiveWell approach, this kind of “hits-based” investment policy sounds quite plausibly effective to me. When we’re in a world with many different projects, only a limited amount of time to get to know them, and way too many unresolvable unknowns, we have to try to get some hits. This is quite analogous to what I think pretty much every major venture capital firm does with its for-profit investments.

However, I do have some room for doubt about the “hits-based” approach. With poor selection, it could collapse into “random giving”, which I would expect to perform at roughly the level of the mean intervention in its cause area. Even in a cause area where the top 1% of interventions are extremely valuable, we may not be able to find that top 1%, and the mean intervention may be worse than the top global poverty intervention we already know about.
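To make that worry concrete, here is a minimal back-of-the-envelope sketch with entirely hypothetical numbers (none of these values come from real data) showing how the comparison turns on whether selection actually finds the hits:

```python
# Back-of-the-envelope comparison of giving strategies.
# All figures are hypothetical placeholders chosen purely for illustration.

HIT_RATE = 0.10          # hits-based framing: ~90% of grants have zero impact
BEST_KNOWN_VALUE = 10.0  # value per dollar of the best known intervention, in arbitrary units
MEAN_VALUE = 2.0         # value per dollar of the *average* intervention in a speculative cause area
HIT_VALUE = 150.0        # value per dollar of a genuine "hit", if selection really finds the top 1%

# If selection works, the portfolio's expected value per dollar is driven by the rare hits.
hits_based_ev = HIT_RATE * HIT_VALUE + (1 - HIT_RATE) * 0.0

# If selection fails, "hits-based" collapses to roughly random giving at the area's mean.
random_giving_ev = MEAN_VALUE

print(f"Best known intervention:                 {BEST_KNOWN_VALUE:.1f} value/$")
print(f"Hits-based (selection works):            {hits_based_ev:.1f} value/$")
print(f"Hits-based (selection fails ~ random):   {random_giving_ev:.1f} value/$")
```

Under these made-up numbers, hits-based giving either dominates or badly underperforms the best known intervention depending entirely on selection quality, which is exactly the uncertainty the study below would try to pin down.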

I also don’t really know whether all major investing can be described as “hits-based”. Perhaps the stories we hear about this “hits-based” strategy being successful are mere survivorship bias. I imagine many VC and non-VC investment firms research their investments extensively, perhaps more thoroughly than the Open Philanthropy Project does. And even if the strategy does work well for for-profit VCs, it may not transfer easily to the non-profit world, where incentives are noticeably worse.

But fear not, for I think these questions can be answered empirically. All we would have to do is run Open Phil for long enough and track down, as best we can, how well the grants perform compared to AMF. For example, Open Phil’s commitment to cage-free corporate campaigning could arguably qualify as a “hit” that potentially surpasses AMF (assuming the pledges are successfully kept without significantly more spending and that future investments in campaigning get comparable returns), and it accounts for 12.5% of Open Phil’s non-GiveWell grants to date[1].

Given that a substantial comparison over time would still take a few years, if not decades, to fully resolve (plus the value of existential risk mitigation may never be known), we might instead want to turn to people who have already done this for a long time and see how they have done.

A decent reference class that came to mind was comparing the hits and misses of some big historical foundations (which tend to take more of a hits-based approach) with those of comparable government programs that fund similar sorts of projects but with a more evidence-based, low-variance strategy. It would take some research to find the right, and large enough, sample of foundations and government agencies to compare, but they do seem to often differ in this way, so it seems like it could be possible. For example, the Gates Foundation seems to pursue hits-based giving while DFID does not seem to… is this characterization true? If so, which one seems to be more cost-effective on average?

As another example, if you took an objective criterion like “top 10 biggest foundations, 1975-2000”, identified all their biggest hits over those 25 years, and divided the value of those hits by all the money they granted over those 25 years, would the resulting cost-effectiveness justify all that spending? If it turned out to be around the same as GiveDirectly, I’d be pretty convinced by the model of “hits-based giving”, though we would have to adjust for the fact that many major foundations are non-utilitarian and don’t aim to bring about the greatest possible good.
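As a rough illustration of that calculation (with placeholder figures only; the real project would substitute researched estimates for every number below), the comparison for a single foundation would look something like this:

```python
# Sketch of the proposed cost-effectiveness check for one foundation.
# Every figure here is a hypothetical placeholder, not real data.

total_grants_1975_2000 = 5_000_000_000   # hypothetical: all money granted over the 25-year window, USD
value_of_identified_hits = 1_000_000     # hypothetical: total "units of good" attributable to its biggest hits

# Cost per unit of good for the whole portfolio, counting every dollar granted,
# not just the dollars that went to the hits.
portfolio_cost_per_unit = total_grants_1975_2000 / value_of_identified_hits

benchmark_cost_per_unit = 4_000          # hypothetical: cost per unit of good for a GiveDirectly-style benchmark

print(f"Portfolio: ${portfolio_cost_per_unit:,.0f} per unit of good")
print(f"Benchmark: ${benchmark_cost_per_unit:,.0f} per unit of good")
if portfolio_cost_per_unit <= benchmark_cost_per_unit:
    print("The hits alone would justify the foundation's total spending.")
else:
    print("The hits alone would not justify the foundation's total spending.")
```

The hard research work is obviously in estimating `value_of_identified_hits` credibly; the arithmetic itself is trivial once those estimates exist.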

And, of course, this whole idea will not be perfect. It will vary a lot in quality based on the time and effort put into it, but it would be a huge step forward from the pretty soft intuitions I have seen on this question so far. I could see 40 hours of research making a good deal of progress on this problem, and I’m surprised that GiveWell, despite committing to studying the history of philanthropy, has not produced something comprehensive like this in defense of its worldview.

Resolving this question would be pretty action-relevant for me and a few other people, as we may personally be more inclined to take big risks on big bets with our own projects, rather than relying on high-quality evidence or working to create more high-quality evidence.

Previously I paid $100 to commission a project that I suggested on the EA Forum, and that went pretty well. I think this one is important enough that I’d be willing to wager money on it too: I’d pay $1500 to the first person who answers the question to my satisfaction. Please contact me at peter@peterhurford.com before undertaking this so I can help guide you and we can avoid duplication of work.

-

Update − 2 March 2017: See here for a more detailed elaboration of the project.

Update − 23 Aug 2017: It turned out that grant data from the top ten biggest foundations is simply not available in enough detail to make this project feasible in its current form. Most foundations do not have public digital grant records, and those that do typically start after 2000.

-

[1]: $154,008,339 total grants given, minus $95,885,518 to GiveWell top charities = $58,122,821 non-GiveWell grants. Cage-free campaigns equal $7,239,392 of granting, which is 12.5% of $58,122,821.
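For anyone checking the figures, the footnote’s arithmetic reproduced as a small script (all numbers are exactly as stated above):

```python
# Arithmetic from footnote [1], figures in USD as stated in the footnote.

total_grants = 154_008_339
givewell_top_charity_grants = 95_885_518
cage_free_grants = 7_239_392

non_givewell_grants = total_grants - givewell_top_charity_grants   # 58,122,821
cage_free_share = cage_free_grants / non_givewell_grants           # ~0.125

print(f"Non-GiveWell grants: ${non_givewell_grants:,}")
print(f"Cage-free share of non-GiveWell grants: {cage_free_share:.1%}")
```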