GiveWell and the problem of partial funding
At the end of 2015, GiveWell wrote up its reasons for recommending that Good Ventures partially but not fully fund the GiveWell top charities. This reasoning seemed incomplete to me, and when I talked about it with others in the EA community, their explanations tended to switch between what seemed to me to be incomplete and mutually exclusive models of what was going on. This bothered me, because the relevant principles are close to the core of what EA is.
Apparently, a foundation that plans to move around ten billion dollars, advised by GiveWell itself, isn’t enough to get the top charities fully funded. That’s weird and surprising. The mysterious tendency to accumulate big piles of money and then not do anything with most of it seemed like a pretty important problem, and I wanted to understand it before trying to add more money to this particular pile.
So I decided to write up, as best I could, a clear, disjunctive treatment of the main arguments I’d seen for the behavior of GiveWell, the Open Philanthropy Project, and Good Ventures. Unfortunately, my writeup ended up being very long. I’ve since been encouraged to write a shorter summary with more specific recommendations. This is that summary.
It is much shorter than the original series, and only very briefly sketches the argument. If you’re interested in the full argument, I’d encourage you to click through from the section headings to the original six parts.
Recap of the argument
Part 1: The problem of splitting
There’s a commonsense notion of how to do good—do what seems best to you, speak freely about it, try to encourage others when you see opportunities for them to do good, and help out others trying to do good. Then there’s the sorts of considerations GiveWell brought up in its 2015 post on splitting, implying that the correct thing to do—at least with money—is to adopt a guarded stance and give sparingly, no more than your fair share as you assess it, to make sure that your interests are fairly represented in the final outcome.
This isn’t necessarily wrong, but it’s troubling enough to be worth thinking through very carefully. What implied beliefs about the world might justify a “splitting” recommendation, rather than a recommendation that Good Ventures fully fund the GiveWell top charities?
Part 2: Superior giving opportunities
It could be that GiveWell and the Open Philanthropy Project expect the Open Philanthropy Project’s last dollar to be a better giving opportunity than the GiveWell top charities. For the very large amount of money they expect to move, this is a bold claim about their long-run impact.
Increasing returns to scale
The simplest construal of this claim is a claim of increasing returns to scale. In this case, the Open Philanthropy Project shouldn’t be trying to make grants itself, but should delegate this to more established organizations in its focus areas, where they exist.
In addition, if the Open Philanthropy Project does not think that the GiveWell Top Charities would be part of its optimal giving portfolio on impact considerations – if the “last dollar” beats AMF – then it’s unclear why it’s funding the top charities at all.
Diminishing returns to scale
If the Open Philanthropy Project rejects the increasing returns to scale argument, then this implies diminishing returns to scale at its size. This suggests that the Open Philanthropy Project has a massive disadvantage on spending its last dollar relative to smaller donors of similar judgment quality, so it should be looking for ways to move money to people and smaller institutions whose judgment it respects, to regrant at their discretion.
Part 3: Bargaining power
GiveWell and the Open Philanthropy Project might believe that GiveWell’s top charities are a part of the Open Philanthropy Project’s optimal giving portfolio. This would imply a commitment to full funding if their actions did not affect those of other donors. In this scenario, if Good Ventures committed to fully funding the GiveWell top charities, other donors might withdraw funding to fund the next-best thing by their values, confident that they’d be offset. A commitment to “splitting” would prevent this.
I have two main objections to this. First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start? Second, if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s “fair share” seems more likely to be in excess of 80% than a 50-50 split.
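To make the scale point concrete, here is a toy calculation. The figures are hypothetical placeholders, not GiveWell’s published money-moved numbers; the point is only that a fair share set in proportion to giving capacity, for a donor as large as Good Ventures, lands far above 50%.

```python
# Hypothetical capacities for illustration only; real figures would come
# from Good Ventures's planned giving and GiveWell's money-moved data.
good_ventures_capacity = 8e9  # dollars over the relevant horizon
other_donors_capacity = 1e9   # all other GiveWell donors, same horizon

# "Fair share" proportional to giving capacity, rather than a 50-50 split:
fair_share = good_ventures_capacity / (good_ventures_capacity + other_donors_capacity)
print(f"Good Ventures's proportional share: {fair_share:.0%}")  # 89%
```

Under any remotely similar ratio of capacities, the proportional split is lopsided, which is why a 50-50 framing needs separate justification.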
GiveWell also brought up an ethical objection to a commitment to filling any funding gap: since it would rely on people assuming they have impact commensurate with GiveWell’s cost-per-life-saved numbers, it would be deceptive. This ethical objection doesn’t make sense. It implies that it’s unethical to cooperate on the iterated Prisoner’s Dilemma. It also assumes that people are taking the cost-per-life-saved numbers at face value, and if so, then GiveWell already thinks they’ve been misled.
Part 4: Influence, access, and independence
Influence via habituation vs track record
Partial support by Good Ventures for the GiveWell top charities might be motivated by a desire to influence more donors to give to effective, evidence-backed charities. If this is the motivation behind partial funding, then the strategy is inherently deceptive (which undercuts the ethical reservations addressed at the end of Part 3). The mechanism by which partial funding influences other donors to give is by leading them to believe that both of these facts are true:
The GiveWell Top Charities are part of the Open Philanthropy Project’s optimal philanthropic portfolio, when only direct impact is considered.
There’s not enough money to cover the whole thing.
These are highly unlikely to both be true. Global poverty cannot plausibly be an unfillable money pit at GiveWell’s current cost-per-life-saved numbers. At least one of these three things must be true:
GiveWell’s cost-per-life-saved numbers are wrong and should be changed.
The top charities’ interventions will reach substantially diminishing returns long before they’ve managed to massively scale up.
A few billion dollars can totally wipe out major categories of disease in the developing world.
The right way to influence future donations is to establish an unambiguous track record.
Access via size
The Open Philanthropy Project might want to spend down its available funds slowly in order to preserve its status as a large foundation, which might get it a seat at the table where it otherwise might not. But there are two obvious problems with this argument. First, if you think that there’s some threshold like $5 billion below which it’s hard to get attention, then the obvious thing to do is set aside the $5 billion, and spend the rest without this constraint. Then you can slow down again if necessary once you approach that threshold. The other objection is more important: actually making more grants seems like the obvious way to signal willingness to make grants, and thus to make potential grantees eager to talk to you. This is an argument for looser spending, not tighter spending.
Independence via many funders
GiveWell might be reluctant to accept a single-funder situation, because it would jeopardize GiveWell’s or its recommended charities’ independence. In the case of the recommended charities, this should be dealt with on a case-by-case basis. In the case of GiveWell itself, the “splitting” recommendation seems more like evidence of, than a solution to, independence or conflict of interest problems. The obvious thing to do is either fully separate the organization recommending charities to the public from the organization advising a single major philanthropic foundation, or make it clear that GiveWell’s recommendations are recommendations by what’s effectively Good Ventures staff.
Part 5: Other people know things too
When people give based on GiveWell recommendations, this is in some sense outside validation that the recommended charities are good. It would be a bad sign if they independently decided to stop. If Good Ventures crowds out other donors, it might destroy this information source.
If indeed GiveWell donors are a good source of outside validation, this undercuts the argument in Part 2 that they’re strictly worse at giving than the Open Philanthropy Project, especially under conditions of diminishing returns. In this scenario, crowding out is good, not bad.
If we reject the claim that GiveWell donors are seriously evaluating the top charities, then their apparent informativeness is illusory, and the only harm from crowding them out is loss of a funding source.
I think the crowding out problem is real, but the biggest problem is crowding out of attention, not money. I personally know of several cases in which EAs were reluctant to independently investigate a potential giving opportunity because they worried that it would step on OPP’s toes. (Likewise, I’ve heard many EAs assume that simply because a charity was recommended by GiveWell, the intervention works with near-certainty.) This is wrong and EAs should get back to work. The Open Philanthropy Project might be able to help by making it easier to check, for any given focus area, whether:
They have already evaluated it (and decided to fund, not fund because money’s not the limiting factor, or not fund because it’s not interesting).
An evaluation is in progress.
An evaluation is not in progress.
An evaluation has been discontinued (and why).
Part 6: Recommendations
In my original series on GiveWell and splitting, I focused on principles, leaving recommendations for the end, and making them fairly general. This is because I don’t really think that organizing a pressure group to extract specific concessions has good prospects.
What I actually want GiveWell, the Open Philanthropy Project, and Good Ventures to do is consider my arguments, combine them with any inside info I might lack, and then do the right thing as they judge it. As an outsider, I shouldn’t try to micromanage them – all I should really do is try to figure out whether they’re trying to cooperate, and if so, try to help them when I see opportunities to do so. So it’s with some reluctance, and only on account of a fair amount of encouragement from others, that I actually try to tell them how to do their jobs.
These are all worded as fixes to problems, but if I saw these implemented, I’d be affirmatively excited about it.
Assess outcomes
Evidence of positive impact is at the very core of GiveWell’s value proposition. GiveWell’s impact page tracks two inputs GiveWell has influenced: money moved, and attention (in the form of web traffic). These are important costs, but GiveWell should also measure benefits. The impact page should assess outcomes of the sort GiveWell attributes to its top charities.
That means empirical after-the-fact estimates of things like how many kids’ lives were saved by AMF, how health and test scores and incomes got better due to the efforts of SCI, Sightsavers, Deworm the World, and END Fund, and what measurable improvements to people’s well-being were made by GiveDirectly. It should also be easy to find after-the-fact estimates for former top charities such as VillageReach, whose funding gaps were completely filled according to GiveWell.
GiveWell’s before-the-fact cost-per-life-saved estimates are a good starting point, but it’s important to test whether those numbers are accurate. These numbers may be noisy. It’s fine to have ample disclaimers about that. But they should be the most prominently featured numbers.
If there aren’t numbers available for this, that’s fine. But in that case, the impact page should say so, prominently, until such time as they are. And GiveWell has some ability to make such numbers available; it has a fair amount of leverage over many of these charities. The Open Philanthropy Project also has the capacity to make grants for this specific purpose. If that means changing the ways these charities operate – preregistering predictions, measuring before and after – then so much the better.
As an aside, it would be great to get GiveDirectly to test more explicitly the macroeconomic offsetting problem. Do cash transfers increase absolute wealth, or only shift it around? What happens if you give to everyone in a village? What happens to the neighboring villages? Do you get offsetting inflation? The footnotes to GiveWell’s page on GiveDirectly say that there’s a study under way to answer these sorts of questions, but the study has an $8 million funding gap. Fully funding this study should be a priority if GiveDirectly stays on the Top Charities list.
Communicate scope
Organizations like the Open Philanthropy Project (and the Centre for Effective Altruism) have very broad missions. I’ve talked to people who are tempted to defer to such organizations because their implied scope is “everything”. As a result, EAs may preemptively crowd themselves out of areas these organizations might potentially look into, but aren’t currently doing much about. The Open Philanthropy Project and similar organizations can mitigate this problem by making their scope clearer.
If such organizations make it clearer what they’re likely to be focusing on and what they’re not, I think this would help. The Open Philanthropy Project has done a good job communicating this on the level of major focus areas, like political advocacy and global catastrophic risks. It would be helpful to have more granular information, such as lists of:
Investigations that have been abandoned or put on ice.
Investigations in progress.
Investigations that are planned but have not yet begun.
Potential focus areas rejected because you don’t think money is the limiting factor.
If there were a page simply listing these things somewhere on the Open Philanthropy Project’s website, it would be easy for outsiders to see whether they’re at risk of duplicating effort.
Symmetry
The Open Philanthropy Project is currently massively capacity-constrained. I think this is in large part due to the lack of a clear position on whether it faces increasing or diminishing returns. Either position, held consistently, suggests that more giving decisions should be delegated to outsiders, though in different ways.
If you’re giving away money, and you find someone who you think is doing things in your optimal portfolio of strategies, you should believe with at least some credence that their next project will also be good, and you’ll want to save yourself and them the overhead of checking in if possible. This motivation is not that compelling if you’re tight on cash, but it’s pretty compelling if you have many years of reserves at your current rate of giving.
I have four specific suggestions to resolve the current bottlenecking problem:
Overgranting
Prize grants
Unaccountable delegation to individuals
Very large grants to established organizations
The first three make the most sense under a diminishing returns scenario. The last makes more sense if returns to scale are increasing at the Open Philanthropy Project’s size.
The main thing that would persuade me this wasn’t a good idea would be very clear post-hoc impact tracking showing that the Open Philanthropy Project was learning and doing large amounts of good per dollar, and that experience implied large gains from holding off on fully funding an org until it had finished evaluating it.
Overgranting
The Open Philanthropy Project could make grants big enough that grantees have a multi-year reserve, much like the Open Philanthropy Project itself. In 2016, the Open Philanthropy Project didn’t manage to give away even its “overall budget for the year, which [it] set at 5% of available capital.” Of the remainder, 100% went to the Open Philanthropy Project’s implied reserve, and 0% went to bolstering the reserves of grantees. (The 95% of available capital that wasn’t budgeted was of course also allocated to the Open Philanthropy Project’s implied reserve.) This implies that the Open Philanthropy Project thinks it has an extremely strong judgment advantage over grantees (or small donors).
The Open Philanthropy Project currently sets its giving budget at 5% of eventual money moved. Before expected return on investment, that amounts to twenty years of reserves. After taking return on investment into account, implied reserves are much greater.
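To make the reserves arithmetic concrete, here is a minimal simulation. The 5% spend rate comes from the post; the capital figure and investment-return rate are assumptions for illustration only.

```python
def years_of_reserves(capital, spend_rate=0.05, annual_return=0.0):
    """Years until capital is exhausted, with the annual grant budget
    fixed at spend_rate times the *initial* capital."""
    annual_budget = spend_rate * capital  # budget set once, at the outset
    years = 0
    while capital >= annual_budget and years < 1000:
        capital = (capital - annual_budget) * (1 + annual_return)
        years += 1
    return years

# With no investment returns, a 5% annual budget lasts twenty years:
print(years_of_reserves(10e9))                      # 20
# Even a modest 4% return stretches the implied reserve much further:
print(years_of_reserves(10e9, annual_return=0.04))  # 37
```

Note that if the budget were instead recalculated each year as 5% of remaining capital, the endowment would never fully deplete, which only strengthens the point about implied reserves.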
The obvious thing to do here would be to increase grant sizes severalfold, in some proportion to the extent to which the Open Philanthropy Project thinks grantees have good independent judgment. (Note that this argument applies even if potential grantees do not currently know how they’d manage to spend the last dollar – neither does the Open Philanthropy Project!)
The equilibrium solution is one where the Open Philanthropy Project estimates that its last dollar has similar expected impact to each grantee’s last dollar. It seems reasonable to make an exception for some learning grants.
Prize grants
The Open Philanthropy Project occasionally talks about track record as a reason to give money even in the absence of specific future programs that seem promising. My sense is that there’s substantially more willingness to give money to people with good track records than the public record suggests. If so, this sort of granting should be scaled up and publicized. One potential mechanism could be Paul Christiano’s and Katja Grace’s impact certificates idea, but after-the-fact grants could be made without that infrastructure too.
Unaccountable delegation to individuals
Another vehicle for low-overhead delegation is the EA Funds, currently advised primarily by current Open Philanthropy Project staff. Likewise, the Open Philanthropy Project could make grants to individuals who’ve chosen to focus on promising areas – such as the people interviewed in the course of investigations – to be regranted or spent as they think best. As a bonus, this would probably make people more interested in helping the Open Philanthropy Project learn about things! Grants to Nobel-winning scientists might also be good here.
Very large grants to established organizations
The Open Philanthropy Project is unusually cause-neutral in its outlook, but in many of the major focus areas it’s identified, there are established large organizations. If there are increasing returns to scale, those organizations seem likely to do a better job spending the money. I give some examples in Part 2. For instance, IARPA and Skoll Global Threats both have interests in mitigating global catastrophic risks. The Gates Foundation’s working on global health and development. The CDC has a mandate to do things about biosecurity. The NIH specializes in funding scientific research.
If these organizations won’t take additional money, that’s some evidence against returns to scale.
Market humbly
GiveWell and the Open Philanthropy Project have been very careful not to make false claims in their explicit public statements. They’ve taken proactive steps to clear up misconceptions. This is good.
But an organization’s public image and marketing are part of its message. GiveWell’s public promotion strategy does not seem to have a correspondingly strong track record of accurately informing people.
If GiveWell doesn’t think the GiveWell top charities are the best options (e.g. because it thinks the Open Philanthropy Project’s last dollar has greater impact than the top charities’ marginal dollar), it should call them something other than “top-rated charities” or make it much clearer that they’re only “top” within some restricted category. For instance, “our top charities are evidence-backed” elides the difference between these two statements:
The judgment that these are the best charities is strongly evidence-based.
These are the best charities we could find within the limited category of charities with a strong evidence base.
If, on the other hand, GiveWell does think the top charities are the best options, it needs to make that disagreement with the Open Philanthropy Project clear.
As a second example of the sort of thing I mean, The Atlantic reported many experts on philanthropy as saying, “if you want to save lives with certainty, you have to go to GiveWell.” If that reputation is not accurate, write a letter to the editor correcting the record. For whatever reason, people frequently get the impression that GiveWell’s top charities are ways to have an impact with certainty. GiveWell’s blog post on its uncertainty around deworming was a good first step towards resolving this, but I don’t expect it to be enough.
I expect this kind of issue to be unusually difficult to resolve. In part this is because standard advice on how to promote an organization will tend to include advice to engage in deceptive practices. In this case, I think that GiveWell is trying to meet the apparent demand for simple recommendations, by making its recommendations simple. As an accidental side effect, GiveWell’s promotional messaging implies – while never explicitly stating, because no one intends deception – that the underlying problem of which charity is best is correspondingly simple. This is of course false, as the GiveWell website makes abundantly clear to anyone who reads it carefully and in detail (i.e. almost no one).
Unwind partial funding
As mentioned in Part 4, “splitting” has the unfortunate side effect of creating the impression that these two things are true:
The Open Philanthropy Project and Good Ventures have credibly vouched for the Top Charities, by funding them as part of their optimal giving portfolio.
Money for those organizations’ priorities is scarce, because they lack the capacity to fully fund the top charities.
I’m very ready to believe that no one had any intent to deceive. GiveWell, the Open Philanthropy Project, and Good Ventures could make this very clear by refusing to profit from any accidental deception that may have occurred.
If the optimal level of Good Ventures funding for GiveWell top charities is full funding, then the thing to do is to fully fund them. During the unfortunate accidental episode of funding gap theater, some people may have given to GiveWell top charities who wouldn’t have given if there had been a commitment to fully fund them. I suggest simply offering to refund the money of anyone who can verify they gave to the Top Charities in this period and feels misled. If this turns out to be difficult to pull off lawfully, then offer a donation swap to the nonprofit of their choice.
Likewise, if the optimal level of Good Ventures funding for GiveWell top charities is none, then stop funding them – perhaps gradually to avoid the “whiplash” concerns mentioned in GiveWell’s 2016 follow-up post on partial funding, but make the intention clear up front. GiveWell should recommend whatever it thinks is best for its audience, but it shouldn’t additionally try to get donors to think that Good Ventures thinks the top charities are competitive with their other options. Again, for the years of accidental funding gap theater, they can simply offer to refund any verified top charities donor who says their donation decision was affected by the fact that Good Ventures was giving.
Of course, if a donor says keep the money, then that’s fine! I expect fairly few donors would accept this offer. But it still seems like it would be a powerful, credible signal of cooperative intent.
Separate organizations and offices
I’ve been holding GiveWell to a higher standard than I’d apply to most other donors. This is because it’s in the somewhat unusual position of simultaneously making large private donation decisions, and asking the public to trust it as an objective judge of charities. I’ve written a bit about the communication and conflict of interest problems that naturally follow from collocating and sharing staff among GiveWell, the Open Philanthropy Project, and Good Ventures. I recommend separate organizations with separate offices.
Doubtless there are some efficiency gains to having informal conversations with Open Philanthropy Project staff – and to sharing staff – but it sure seems like there are huge costs as well, since this has led to incoherent and misleading behavior. If GiveWell’s public pages had to be good enough to persuade the Open Philanthropy Project to invest in the GiveWell top charities – if GiveWell’s public product were the main way it communicated with the Open Philanthropy Project – that would align incentives better, away from opaqueness, misdirection, and unprincipled horse-trading among insiders, and towards making a clear public case for whatever GiveWell actually thinks is best.
As I’ve said before, GiveWell and Open Philanthropy Project staff have made strenuous efforts to avoid such temptations and to instead tell the truth and do the right thing. But it seems better to simply avoid situations where avoiding such temptations requires such strenuous effort.
(Cross-posted from my personal blog.)
Hi Ben,
Thanks for putting so much thought into this topic and sharing your feedback.
I’m going to discuss the reasoning behind the “splitting” recommendation that was made in 2015, as well as our current stance, and how they relate to your points. I’ll start with the latter because I think that will make this comment easier to follow. I’ll then address some more specific points and suggestions.
I’m not addressing recommendations addressed to GiveWell—I think it will make more sense for someone more involved in GiveWell to do that—though I will address both the 2015 and 2016 decisions about how much to recommend that Good Ventures support GiveWell’s top charities, because I was closely involved in those decisions.
Current stance on Good Ventures support for GiveWell’s top charities. As noted here, we (Open Phil) currently feel that the “last dollar” probably beats GiveWell’s top charities according to our (extrapolated) values. We are quite uncertain of this view at this time and are hoping to do a more thorough investigation and writeup this year. We recommended $50 million to top charities for the 2016 giving season, for reasons laid out in that post and not discussed in the original post on this thread.
You seem to find our take on the “last dollar” a difficult-to-justify conclusion (or at least difficult to square with the fact that we are currently well under eventual peak giving, and not closing the gap via the actions you list under “symmetry”). You argue that the key issue here is the question of returns to scale, and say that we should regrant to larger organizations if we think returns are increasing, and smaller organizations if returns are decreasing. But I don’t think the question “Are returns to scale increasing or decreasing?” is a particularly core question here (nor does it have a single general answer). Instead, our reason for thinking our “last dollar” can beat top charities and many other options is largely bound up in our model of ourselves as people who aspire to become “experts” in the domain of giving away large amounts of money effectively and according to the basic stance of effective altruism. I’ve written about my model of broad market efficiency in the past; I don’t think it is trivial to “beat the market,” but nor do I think it is prohibitively difficult, and I expect that we can do so in the long run. Another key part of the view is that there is more than one plausible worldview under which it looks (in the long run) quite tractable to spend essentially arbitrary amounts of money in a way that has better value for money than top charities (this is also discussed in the post on our current view).
Previously, our best guess was different. We thought that the “last dollar” was worse than top charities—but not much worse, and with very low confidence. We fully funded things we thought were much better than the “last dollar” (including certain top charities grants) but not things we thought were relatively close when they also posed coordination issues. For this case, fully funding top charities would have had pros and cons relative to splitting: we think the dollars we spent would’ve done slightly more good, but the dollars spent by others would’ve done less good (and we think we have a good sense of the counterfactual for most of those dollars). We guessed that the latter outweighed the former.
I think that an important factor playing into both decisions, and a potentially key factor causing you and me to see things differently, pertains to conservatism. For the 2015 decision in particular, we didn’t have much time to think carefully about these issues, and “fully funding” might be the kind of thing we couldn’t easily walk back (we worried about a consistent dynamic in which our entering a cause led to other donors’ immediately fleeing it). It’s often the case that when we need to make high-stakes decisions without sufficient time or information, we err on the side of preserving option value and avoiding particularly bad outcomes (especially those that pose risks to GiveWell or Open Phil as an organization); this often leads to “hacky” actions that are knowably not ideal for any particular set of facts and values, if we had confidently sorted these facts and values out (but we haven’t).
Responses to more specific points
“First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start?”
I don’t think this is a case of “defecting” or “adversarial framing.” We were trying to approximate the outcome we would’ve reached if we’d been able to have a friendly, open discussion and coordination with individual donors, which we couldn’t.
“if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s ‘fair share’ seems more likely to be in excess of 80%, than a 50-50 split.”
We expected individual giving to grow over time, and thought that it would grow less if we had a policy of fully funding top charities. Calculating “fair share” based on current giving alone, as opposed to giving capacity construed more broadly and over a longer-term, would have created the kinds of problematic incentives we wrote that we were worried about. 50% is within range of what I’d guess would be a long-term fair share. Given that it is within range, 50% was chosen as a proportion that would (accurately) signal that we had chosen it fairly arbitrarily, in order to commit credibly to splitting, as mentioned in the post.
“This ethical objection doesn’t make sense. It implies that it’s unethical to cooperate on the iterated Prisoner’s Dilemma.” The ethical objection was to being misleading, not to the game-theoretic aspects of the approach.
I don’t follow your argument under “Influence via habituation vs track record.” The reason there was “not enough money to cover the whole thing” was because we were unwilling to pay more than what we considered our fair share, due to the incentives it would create and the long-run implications for total positive impact. We were open about that. I also think that the “surface case” for low-engagement donors who didn’t read our work was about as close to the truth as a surface case could be. (I would describe the “surface case” as something like: “If I give this money, then bednets will be delivered; if I do not, that will not happen.” I do not believe that the majority of GiveWell donors—including very large donors—base their giving on Open Phil’s opinions, or in many cases even know what Open Phil is.) I don’t see how this situation implies any of your #1-#3, and I don’t see how it is deceptive.
“Access via size” and “Independence via many funders” were not part of our reasoning.
(Continued in next comment)
(Continued from previous comment)
Thoughts on your recommendations. I appreciate your making suggestions, and providing helpful context on the spirit in which you intend them. Here I only address suggestions for Open Phil.
Maintaining a list of open investigations: I see some case for this, but at the moment we don’t plan on it. I don’t think we can succinctly and efficiently maintain such a list without incurring a number of risks (e.g., causing people to excessively plan on our support; causing controversy due to hasty communication or miscommunication). Instead, we encourage people who want to know whether we’re working on something to contact us and ask.
We have considered and in some cases done some (limited) execution on all of the suggestions you make under “Symmetry,” and all remain potential tools if we want to ramp up giving further in the future. I think they are all good ideas, perhaps things we should have done more of already, and perhaps things we will do more of later on. However, I do not think the situation is “symmetrical” as you imply, because our mission—which we are building up expertise and capacity around optimizing for—is giving away large sums of money effectively and according to the basic stance of effective altruism. The same is not generally true of our grantees. We generally try to do something approximating “give to grantees up until the point where marginal dollars would be worse than our last dollar” (though of course very imprecisely and with many additional considerations). Finally, I’ll add that any of the four options you list—and many more—are things we could probably find a way of doing if we put in some time and internal discussion, resulting in good outcomes. But we think that time and internal discussion is better spent on other priorities that will lead to better outcomes. In general, any new idea we pursue involves a fair amount of discussion and refinement, which itself has major opportunity costs, so we accept a degree of inertia in our policies and approaches.
For reasons stated above and in previous posts, I don’t believe the optimal level of funding for top charities is 100% of the gap or 0%. I also wish to note that your comment “I expect fairly few donors would accept this offer. But it still seems like it would be a powerful, credible signal of cooperative intent.” highlights what I suspect may be one of the most important disagreements underlying this discussion. As noted above, we are comfortable with “hacky” approaches to dilemmas that let us move on to our next priority, and we are very unlikely to undertake time-consuming projects with little expected impact other than to signal cooperative intent in a general and undirected way. For us, a disagreement whose importance is mostly symbolic is not likely to become a priority. We would be more likely to prioritize disagreements that implied we could do much more good (or much less harm) if we took some action, such that this action is competitive with our other priorities.
I think your final suggestion would have substantial costs, and don’t agree that you’ve identified sufficient harms to consider it.
I’m not sure I’ve understood all of your points, but hopefully this is helpful in identifying which threads would be useful to pursue further. Thanks again for your thoughtful feedback.
Thanks for the detailed response! I wanted to quickly point out something you did here that I think is good practice, and wish more people did:
Marking which parts of someone’s argument you think are relevant and which you think aren’t—and, relatedly, which branches of a disjunction you accept and which you reject—are an important part of how arguments can lead to shared models. A lot of people neglect this sort of thing, because it’s not a clear way to score points for their side. You took care to address it here. Thanks.
(More to follow when I’ve had time to take this in.)
Hi Ben,
Thanks for the post! I wanted to reply to a couple ideas you raised for GiveWell:
(1) Assess outcomes.
Many of the points you raised, such as making empirical after-the-fact estimates, relate to the question of why GiveWell isn’t putting more effort into collecting and examining post-hoc data demonstrating the impacts of our top charities.
We provide an estimate of the impact of a donation to each of our top charities, in humanitarian terms, here: http://www.givewell.org/charities/top-charities/impact. As you note, this is based on the expected impact of donations made to the charity today, rather than a look back at the impact of past work by the charity. We don’t currently collect outcomes data of the type you’re describing (we have generally focused on collecting data, for example, to show that children received deworming treatments, rather than that recipients’ income later improved; we rely on previously conducted studies to estimate the connection between the two).
Our impression is that post-hoc outcomes data of the type you’re describing—on lives saved, health outcomes, future test scores, or income—is largely not available from charities, even our top charities, such that we’d need to fund or collect it ourselves. We are taking some steps to do this in our work with IDinsight, an organization that supports and conducts rigorous evaluations of development interventions. As part of GiveWell Incubation Grants, Good Ventures supported the creation of an IDinsight embedded team at GiveWell to improve monitoring of GiveWell top charities and support the development of potential future GiveWell recommendations. Our work with IDinsight has so far furthered our impression that this information is quite challenging to collect, even with additional funding and effort, such that we’d most likely want to take a judicious approach and complete such work in conjunction with specialized, third-party organizations. In general, we expect significant time and resources are needed to gather data of the type you’re referring to. We’re willing to put these resources in when they seem likely to further our understanding, as demonstrated by our work with IDinsight, but expect a high ratio of resource-input to better-information output.
I’m unsure whether the GiveDirectly study remains underfunded; the footnote you cited referenced a conversation from 2014, and it appears from looking at our subsequent communications that the study has gone forward. But to your point about whether we should fund studies like these: We agree that we should strongly consider doing so, and decisions to do so will depend on room for more funding, as well as our expectation that the study will have sufficient power to be informative for our recommendations.
(Continued in next comment)
**Comment was edited to clarify impact of deworming treatment.
(Continued from previous comment)
(2) Market humbly.
We agree that not everyone has an accurate view of GiveWell’s work, and that we should continue to improve our communications around the kinds of opportunities we recommend. Publishing information about our reasoning and goals on our website and blog is one way we aim to do this, as is speaking with the media and donors who use our research, but we agree there is room for improvement. In my experience working on GiveWell’s outreach, it has been particularly challenging to effectively communicate around the following:
a) The uncertainty associated with deworming research.
b) The limitations of our cost-effectiveness analyses.
c) The type of opportunities GiveWell considers as potential top charities, and why.
We think we can continue to improve in our written and verbal communications around these topics. A goal on our website and in our own communications is to provide the most accurate picture we can at any given level of detail, within reasonable bounds of staff capacity and time. Someone who only reads a headline on our website should have the most accurate picture it’s possible to have after reading only a headline; someone who only reads our page listing top charities should have the most accurate picture associated with that level of detail, and so on.
Going forward, we think we can improve by more proactively reaching out when we become aware of a misimpression of GiveWell. We’ve had internal discussions about this, prompted by this post, and plan to more proactively communicate about mistakes or misunderstandings of our work when we become aware of them, including emailing the media with clarifications. We did not do this in the case of the quote you cite from The Atlantic article, and on reflection think this is something we should do in the future.
On the name of “top charities”: We’d be interested in whether there is a term that you feel would more succinctly and accurately convey our views on our recommendations than “top charities.” We want to avoid projecting overconfidence, but we also don’t want to suggest we’re less confident, or that we think these are less good options for most donors, than we believe they are. A concern with applying a restricted category, such as “top charities within global health and development,” would be suggesting that we hadn’t considered opportunities that fall outside of this category or that we chose this category arbitrarily, neither of which is true: GiveWell focuses on global health and development because our initial research led us to believe that the charities most likely to succeed by GiveWell’s criteria of cost-effectiveness and a strong, generalizable evidence base work in global health and development. (More on this here and here.) Similarly, a concern with referring to GiveWell’s top charities as something like “reasonably good options” is that it could lead donors to be confused about whether we were recommending them—which, in the case of most low-time, low-trust donors (more below; this is the group we believe makes up most of our donor base), we do think likely represent the best options.
GiveWell and the Open Philanthropy Project, the “last dollar,” and whether GiveWell’s top charities are better giving opportunities
Here, it’s helpful to distinguish between GiveWell and the Open Philanthropy Project, which are currently part of the same organization, but which we plan to legally separate this year. GiveWell—referring to our longtime work to find and recommend top charities, as described on www.givewell.org—does not have an organizational position on the “last dollar” question for Good Ventures, because GiveWell’s mission is to recommend and move money to the charities that meet its criteria; it does not have an organizational mission to consider Good Ventures’ overall or long-term budget. However, staff members serving the Open Philanthropy Project do, since theirs is a long-term partnership with Good Ventures and they are actively weighing tradeoffs between grantmaking opportunities that Good Ventures will have resources to fund in the short- and long-run. The Open Philanthropy Project views the “last dollar” discussion you refer to above (http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update) as unstable, and thinks it would be a mistake to view its recent writeup as high confidence that Open Philanthropy Project opportunities are better—in some sort of absolute sense—than GiveWell top charities.
GiveWell believes the following is true:
We do think GiveWell’s top charities represent the best giving opportunities we’re aware of for donors who have limited time to spend on their decision. That’s due to the strength of the evidence base, cost-effectiveness, and their transparency and ability to be vetted and spot-checked by donors with even a low degree of trust in GiveWell. As noted on GiveWell’s top charities page, “They represent the best opportunities we’re aware of to help low-income people with relatively high confidence and relatively short time horizons.”
We also think that it’s possible that more cost-effective or otherwise ‘better’ giving opportunities exist, but that we a) haven’t found them yet, and/or b) don’t consider them as potential top charities because they fail to meet our criteria (e.g., not having a strong evidence base or not needing additional funding), which were designed to serve low-time donors and produce the types of recommendations described above.
Some donors—who have a high degree of trust in a particular person or organization and want to outsource their thinking about giving to that person or organization, or who have a large amount of time with which to spend identifying and assessing giving opportunities—might identify other opportunities they feel offer the best bang for their buck, such as those identified by the Open Philanthropy Project.
This is discussed on GiveWell’s top charities page: www.givewell.org/charities/top-charities#Proscons.
We hope that the separation of the Open Philanthropy Project from GiveWell this year clarifies the difference in our approaches, which we believe is also a source of confusion.
Thanks again for your thoughtful post on our work.
Personally, I’ve noticed that being casually aware of smaller projects that seem cash-strapped has given me the intuition that it would be better for Good Ventures to fund more of the things it thinks should be funded, since that might give some talented EAs more autonomy. On the other hand, I suspect that people who prefer the “opposite” strategy, of being more positive on the pledge and feeling quite comfortable with GiveWell’s approach to splitting, are seeing a very different social landscape than I am. Maybe they’re aware of people who wouldn’t have engaged with EA in any way other than by taking the pledge, or they’ve spent relatively more time engaging with GiveWell-style core EA material than I have?
Between the fact that filter bubbles exist, and the fact that I don’t get out much (see the last three characters of my username), I think I’d be likely to not notice if lots of the disagreement on this whole cluster of related topics (honesty/pledging/partial funding/etc.) was due to people having had differing social experiences with other EAs.
So, perhaps this is a nudge towards reconciliation on both the pledge and on Good Ventures’ take on partial funding. If people’s social circles tend to be homogeneous-ish, some people will know of lots of underfunded promising EAs and projects (which indirectly compete with GV and GiveWell top charities for resources), and others will know of few such EAs/projects. If this is the case, we should expect most people’s intuitions about how many funding opportunities there are for small projects (opportunities which only small donors can identify effectively) to be systematically off in one way or another. Perhaps a reasonable thing to do here would be to discuss ways to estimate how many underfunded small projects there are which EAs would be eager to fund if only they knew about them.
Thanks for summarizing this, Ben!
I might be getting this wrong, but my understanding is that a bunch of donors immediately started ‘defecting’ (= pulling out of funding the kinds of work GV is excited about) once they learned of GV’s excitement for GW/OPP causes, on the assumption that GV would at some future point adopt a general policy of (unconditionally?) ‘cooperating’ (= fully funding everything to the extent it cares about those things).
I think GW/GV/OPP arrived at their decision in an environment where they saw a non-trivial number of donors preemptively ‘defecting’ either based on a misunderstanding of whether GW/GV/OPP was already ‘cooperating’ (= they didn’t realize that GW/GV/OPP was funding less than the full amount it wanted funded), or based on the assumption that GW/GV/OPP was intending to do so later (and perhaps could even be induced to do if others withdrew their funding). If my understanding of this is right, then it both made the cooperative equilibrium seem less likely, and made it seem extra important for GW/GV/OPP to very loudly and clearly communicate their non-CooperateBot policy lest the misapprehension spread even further.
I think the difficulty of actually communicating en masse with smaller GW donors, much less having a real back-and-forth negotiation with them, played a very large role in GW/GV/OPP’s decisions here, including their decision to choose an ‘obviously arbitrary’ split number like 50% rather than something more subtle.
I’m not sure I understand this point. Is this saying that if people are already misled to some extent, or in some respect, then it doesn’t matter what related ways one’s actions might confuse them?
(Disclaimer: I work for MIRI, which has received an Open Phil grant. As usual, the above is me speaking on my own behalf, not on MIRI’s.)
Cross-posted from Ben’s blog:
If GV fully funded the top charities, and others also funded them, then they would be overfunded by GV’s lights. If A and B both like X (and have the same desired funding level for it), but have different second choices of Y and Z, the fully cooperative solution would not involve either A or B funding X alone.
[CoI notice: I consult for OpenPhil.]
I’m not sure this is right. What if A and B both commit to fully funding their top charities, as soon as they find such opportunities (i.e., without taking other people’s reactions into consideration)? That seems like a fully cooperative solution that on expectation would work as well as A and B trying to negotiate a “fair division” of funding for X. Also, I’m not sure this analogy applies to the situation where A is a single big donor and B is a bunch of small donors, since in that case A and B can’t actually negotiate so A unilaterally deciding on a split would seem to lead to some deadweight loss (e.g., missed funding opportunities).
BTW, are you aware of a fully thought-out analysis of Good Ventures’s “splitting” policy (whether such a policy is a good idea, and what the optimal split is)? For such an important question, I’m surprised how little apparent deliberation and empirical investigation has been done on it. Even if the value of information here is just 1% of the total funding, that would amount to about $100,000,000. (Not to mention that the analysis could be applied to other analogous situations with large and small donors.)
That’s true! Fortunately, there are a few important mitigating factors:
This game proceeds in continuous time, so there’s plenty of opportunity for donors to inform each other of their actions. For the GiveWell top charities, this often happens by reporting the donation to—or making it through—GiveWell.
As you’ve pointed out, excess donations—if they in fact turn out to be excess—can simply be funged against implicitly via lower room for more funding estimates in the following year.
A commitment to full funding doesn’t have to take the form of initially giving them the whole amount—for instance, if the estimated funding gap is X, and GV would expect other donors to contribute amount X-Y if it weren’t around, it can give Y, monitor other donations, and fill in gaps as they occur. It could even wait until after “giving season” to get more info.
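The gap-filling logic above can be sketched in a few lines. This is a toy illustration only—the function names and numbers are hypothetical, not anything GV actually computes:

```python
def initial_grant(gap_x, expected_other):
    """GV's opening grant Y, sized so that Y plus expected other
    donations cover the estimated funding gap X."""
    return max(0.0, gap_x - expected_other)

def top_up(gap_x, initial, observed_other):
    """After watching 'giving season', fill whatever gap remains."""
    return max(0.0, gap_x - initial - observed_other)

# Toy example: gap of $10M, others expected to give $7M.
y = initial_grant(10.0, 7.0)   # GV gives 3.0 up front
fill = top_up(10.0, y, 6.0)    # others gave only 6.0, so GV adds 1.0
```

If other donors overshoot instead, `top_up` returns zero, and the excess can be funged against the next year’s room-for-more-funding estimate, as noted in the previous point.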
On cost per life saved numbers, I’m saying that “defend the state where these #s are true” is a silly goal for an organization that doesn’t think anyone should have taken them literally in the first place. One of the many complicating factors for the cost per life saved numbers is that there are other inputs, some of which are complements to your donation, others of which are substitutes.
If a substantial share of other donors were already observed defecting, that seems like it would be the single most important consideration to mention in the 2015 post explaining splitting, and I am baffled as to why it was left out.
It seems like genuinely unfriendly behavior on the part of other donors and it would have been a public service at that time to call them out on this.
Ben, you have advocated simply giving to the best thing at the margin. Doing that while taking room for more funding into account automatically results in what you are calling ‘defecting’ here in this post (a label I object to, since the game-theoretic analogy is dubious, and you’re using it in a highly morally charged way to criticize a general practice with respect to a single actor). That’s a normal way of assessing donations in effective altruism, and common among strategic philanthropists.
The ‘driving away donors’ bit was repeatedly discussed, as was the routine occurrence of such issues in large-scale philanthropy (where foundations bargain with each other over shares of funding in areas of common interest).
I don’t actually think it’s defecting to take into account room for more funding. I do think it’s defecting to try to control the behavior of other donors, who have more info about their opportunity cost than you do. Defecting is not always unjustified, but it’s nice when we can find and maintain cooperate-cooperate equilibria.
I don’t think it’s unreasonable to describe major foundations as engaged in an iterated game where they display a combination of cooperative and uncooperative behavior to test each other’s boundaries and guard their own in a moderately low-trust equilibrium. If you think there’s something especially good about the EA way, it shouldn’t be that surprising that large established charities sometimes engage in uncooperative behavior. I’m holding the Open Philanthropy Project and Good Ventures to a higher standard because they say they want to do better and I believe them.
My understanding is that GiveWell has mostly counted “leveraged” donations as costs towards their cost per life saved figures, rather than counting them as free money, and I think it’s been right to do so. This seems like basically the same thing.
The prospect of driving away donors was discussed. Direct evidence of a reduction in donations wasn’t, unless I missed something big. My impression is that donations from other sources were growing at the time and have continued to grow substantially from year to year.
Given that, I could maybe see the case for committing not to give more than the anticipated remainder assuming growth in other donations continued apace, as a credible threat against shirking, but 50-50 “splitting” massively undershoots that mark.
A very interesting piece, my initial reaction to GiveWell’s splitting approach was similar to yours. Via the comments in that Dec 15 blogpost on the GiveWell website we narrowed the point of disagreement between myself and Holden to the effectiveness of Good Ventures’ future giving opportunities.
At some point GV hopes to (and I believe will) have made itself expert in donation, with much more information than it has now about how to give best. However, given the pace at which the world is improving through economic growth and the impact of other charitable donations, I am concerned/hopeful that there will not be such low-hanging fruit as exists and has been identified by GW right now. However, I also believe the people within GW/GV/OPP have considered this and have more information than me to make the decision. Still, until convinced otherwise, I believe GV should look to contribute Z − Y to GW’s top charities, where Z is the total room for more funding (in the top 5 categories of priority) and Y is the total expected amount from other donors.
Holden, to put it in your terms: I agree with your ‘broad market efficiency’ assumption, and I don’t doubt your ability to beat the market in time, given the work you are putting in. However, I do believe that the market rate of return changes over time. As more competing capital flows into Effective Altruism and the number of opportunities to cheaply save lives diminishes, the market rate is likely to be lowered dramatically. Therefore, you could end up dramatically beating the market in a number of years’ time, and still end up with a rate of effectiveness lower than the current market rate.
Ben, to point 2) I would echo what Holden says and add the following: Aside from the direct impact GiveWell has by influencing donations in the short term, I think it is also adding a huge amount of value by the way it is changing the whole way in which people think about philanthropy and charitable donation. The most important thing Good Ventures can do for GiveWell is provide it with a stable funding base and positive signalling, by donating large amounts to the top rated charities. This I think is an excellent argument for GV’s current approach even if they believe they will have better opportunities to give in future, comparing purely on direct impact.
I don’t think I understand the trilemma you presented here.
As a sanity check, under-5 mortality is about 6 million worldwide. Assuming that more than 2/3 is preventable (which I think is a reasonable assumption if you compare with developed-world numbers on under-5 mortality), this means there are 4 million+ preventable deaths (and corresponding suffering) per year. At $10,000 to prevent a death, just a few months of this problem already cost more money than Open Phil has. At $3,500 to prevent a death, even a single year still costs more than Open Phil has.
We would expect the numbers to also be much larger if we’re not prioritizing just deaths, but also prevention of suffering.
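The sanity check above can be reproduced directly. Every input is the comment’s rough assumption, not an independent estimate:

```python
# Back-of-envelope check of the comment's figures; all inputs are
# the commenter's rough assumptions, not precise estimates.
UNDER5_DEATHS = 6_000_000        # annual under-5 deaths worldwide
PREVENTABLE_FRACTION = 2 / 3     # assumed lower bound on preventable share

def annual_cost_billions(cost_per_death):
    """Annual cost to avert all preventable under-5 deaths, in $ billions."""
    return UNDER5_DEATHS * PREVENTABLE_FRACTION * cost_per_death / 1e9

# Roughly $40B/yr at $10,000 per death, or $14B/yr at $3,500 per death --
# either way, more per year than Open Phil's roughly $10 billion.
```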
GBD 2015 estimates that communicable, maternal, neonatal, and nutritional deaths worldwide amount to about 10 million in 2015. And they are declining at a rate of about 20% per decade. If at current cost-effectiveness levels, top charities could scale up to solve that whole problem, then if we assume a cost of $5,000 per life saved, the whole thing would cost $50 billion/yr. That’s more than Good Ventures has on hand—but it’s not an order of magnitude more. It’s not more than Good Ventures and its donors and the Gates foundation ($40 billion) and Warren Buffett’s planned gifts to the Gates Foundation add up to—and all of those parties seem to be interested in this program area.
That’s an extreme upper bound. It’s not limited to the developing world, or to especially tractable problems. You almost certainly can’t scale up that high at current costs—after all, the GiveWell top charities are supposed to be the ones pursuing the most important low-hanging fruit, tractable interventions for important but straightforward problems. But then, how high can you scale up at similar cost-effectiveness numbers? Can you do a single disease? For one continent? One region? One country? Now, we’re getting to magnitudes that may fall well within Good Ventures’s ability to fund the whole thing. (Starting with a small area where you can show clear gains is not a new idea—it’s the intuition behind Jeffrey Sachs’s idea of millennium villages.) And remember that once you wipe out a communicable disease, it’s much cheaper to keep it away; when’s the last time people were getting smallpox? Similarly, nutritional interventions such as food fortification tend to be permanent. There’s a one-time cost, and then it’s standard practice.
GBD 2015 estimates that there are only about 850,000 deaths due to neglected tropical diseases each year, worldwide. At $5,000 per life saved, that’s about $4.2 billion to wipe out the whole category. Even less if you focus on one continent, or one region, or one country. To name one example, Haiti is a poor island with 0.1% of the world’s population; can we wipe out neglected tropical diseases for $4.2 million there? $40 million?
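The same back-of-envelope style applies to the figures in the last two paragraphs. The inputs are the GBD 2015 estimates and assumed costs cited above (the text rounds $4.25B down to “about $4.2 billion”):

```python
COST_PER_LIFE = 5_000          # assumed cost per life saved

# Whole communicable/maternal/neonatal/nutritional burden, per GBD 2015.
cmnn_deaths = 10_000_000
whole_problem = cmnn_deaths * COST_PER_LIFE   # $50 billion/yr upper bound

# Neglected tropical diseases alone.
ntd_deaths = 850_000
ntd_category = ntd_deaths * COST_PER_LIFE     # ~ $4.25 billion total

# Haiti holds roughly 0.1% of the world's population.
haiti_share = ntd_category * 0.001            # ~ $4.25 million
```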
I don’t think linear giving opportunities closely analogous to bednets will take $10BB without diminishing returns (although you might be able to beat that with R&D, advocacy, gene drives, and other leveraged strategies for a longer period). But I think this is a flawed argument.
The original text strongly suggested a one-time cost, not a recurring annual cost. When you have diminishing returns in a single year (especially as programs are scaled up; BMGF has ramped up its spending over time), the fact that they don’t spend everything in a firehose in a single year is far from shocking (note BMGF has spent a lot on US education too, it’s not a pure global poverty focus although that is its main agenda).
GWWC’s FAQ claims:
The annual figure for this is ~$14 billion (and not all spent where the evidence is best, including corruption, etc).
Gates Foundation spending is several billion dollars per year spread across a number of areas.
Total spending in these areas is not so large that a billion dollars a year is a drop in the bucket, and these diseases have been massively checked or reduced (e.g. malaria, vaccinations, slowing HIV infections, smallpox eradication, salt iodization, etc).
And we haven’t explicitly talked about possible leverage from R&D and advocacy in poverty.
Those were criticized at the time for spending so much on the same people, including less well-supported interventions and over diminishing returns, rather than doing more cost-effective interventions across a larger number of people. Local effectiveness of medical interventions is tested in clinical trials.
Smallpox was a disease found only in humans with a highly effective vaccine. Such diseases are regularly locally extirpated, although getting universal coverage around the world to the last holdout regions (civil war, conspiracy theories about the vaccinations) can be very hard, as in polio eradication, and infectious diseases can quickly recolonize afterwards (malaria rebounded from the failed eradication effort of the 1960s in places without continuing high-quality prevention). But polio eradication is close and is a priority of e.g. Gates Foundation funding. It’s also quite expensive, more than $10 billion so far. For harder-to-control diseases without vaccines, like malaria, even more so (and you couldn’t just spend more in a big-bang single year and be sure you haven’t missed a spot).
This seems like evidence for a combination of the second and third possibilities in the trilemma. Either GiveWell should expect to be able to point to empirical evidence of dramatic results soon (if not already), or it should expect to reach substantially diminishing returns, or both.
I agree that there are lots of practical reasons why you can’t just firehose this stuff—that’s part of the diminishing returns story!
I could imagine a scenario that slips in between 2 and 3, like you don’t hit substantially diminishing returns on malaria until the last 1% of incidence, but is there reason to think that’s the case?
I suggest reading about the Gates malaria eradication plans, including the barriers that led Gates to think ITNs (insecticide-treated nets) alone can’t achieve eradication.