GiveWell and the problem of partial funding
At the end of 2015, GiveWell wrote up its reasons for recommending that Good Ventures partially but not fully fund the GiveWell top charities. This reasoning seemed incomplete to me, and when I talked about it with others in the EA community, their explanations tended to switch between what seemed to me to be incomplete and mutually exclusive models of what was going on. This bothered me, because the relevant principles are close to the core of what EA is.
Apparently, even a foundation that plans to move around ten billion dollars, relying on advice from GiveWell, isn’t enough to get the top charities fully funded. That’s weird and surprising. The mysterious tendency to accumulate big piles of money and then not do anything with most of it seemed like a pretty important problem, and I wanted to understand it before trying to add more money to this particular pile.
So I decided to write up, as best I could, a clear, disjunctive treatment of the main arguments I’d seen for the behavior of GiveWell, the Open Philanthropy Project, and Good Ventures. Unfortunately, my writeup ended up being very long. I’ve since been encouraged to write a shorter summary with more specific recommendations. This is that summary.
It is much shorter than the original series, and only very briefly sketches the argument. If you’re interested in the full argument, I’d encourage you to click through from the section headings to the original six parts.
Recap of the argument
Part 1: The problem of splitting
There’s a commonsense notion of how to do good—do what seems best to you, speak freely about it, try to encourage others when you see opportunities for them to do good, and help out others trying to do good. Then there’s the sorts of considerations GiveWell brought up in its 2015 post on splitting, implying that the correct thing to do—at least with money—is to adopt a guarded stance and give sparingly, no more than your fair share as you assess it, to make sure that your interests are fairly represented in the final outcome.
This isn’t necessarily wrong, but it’s troubling enough to be worth thinking through very carefully. What implied beliefs about the world might justify a “splitting” recommendation, rather than a recommendation that Good Ventures fully fund the GiveWell top charities?
Part 2: Superior giving opportunities
It could be that GiveWell and the Open Philanthropy Project expect that the Open Philanthropy Project’s last dollar will be a better giving opportunity than the GiveWell top charities. For the very large amount of money they expect to move, this is a bold claim about their long-run impact.
Increasing returns to scale
The simplest construal of this claim is a claim of increasing returns to scale. In this case, the Open Philanthropy Project shouldn’t be trying to make grants itself, but should delegate this to more established organizations in its focus areas, where they exist.
In addition, if the Open Philanthropy Project does not think that the GiveWell Top Charities would be part of its optimal giving portfolio on impact considerations – if the “last dollar” beats AMF – then it’s unclear why it’s funding the top charities at all.
Diminishing returns to scale
If the Open Philanthropy Project rejects the increasing returns to scale argument, then this implies diminishing returns to scale at its size. This suggests that the Open Philanthropy Project has a massive disadvantage in spending its last dollar relative to smaller donors of similar judgment quality, so it should be looking for ways to move money to people and smaller institutions whose judgment it respects, to regrant at their discretion.
Part 3: Bargaining power
GiveWell and the Open Philanthropy Project might believe that GiveWell’s top charities are a part of the Open Philanthropy Project’s optimal giving portfolio. This would imply a commitment to full funding if their actions did not affect those of other donors. In this scenario, if Good Ventures committed to fully funding the GiveWell top charities, other donors might withdraw funding to fund the next-best thing by their values, confident that they’d be offset. A commitment to “splitting” would prevent this.
I have two main objections to this. First, the adversarial framing here seems unnecessary. If the other player hasn’t started defecting in the iterated prisoner’s dilemma, why start? Second, if you take into account the difference in scale between Good Ventures and other GiveWell donors, Good Ventures’s “fair share” seems more likely to be in excess of 80% than a 50-50 split.
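To make the scale point concrete, here is a minimal sketch of proportional “fair shares”. The capacity figures below are hypothetical placeholders chosen only for illustration, not actual Good Ventures or GiveWell metrics; the point is just that when one funder dwarfs the rest of the donor pool, its proportional share of any gap lands far above 50%.

```python
# Minimal sketch of proportional "fair shares" (hypothetical figures only).
funding_gap = 100e6                      # hypothetical funding gap, in dollars
capacities = {
    "Good Ventures": 10e9,               # hypothetical available capital
    "other GiveWell donors": 1.5e9,      # hypothetical aggregate capacity
}

total_capacity = sum(capacities.values())
for donor, capacity in capacities.items():
    share = capacity / total_capacity
    print(f"{donor}: {share:.0%} of the gap (${share * funding_gap:,.0f})")

# Good Ventures: 87% of the gap ($86,956,522)
# other GiveWell donors: 13% of the gap ($13,043,478)
```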
GiveWell also brought up an ethical objection to a commitment to filling any funding gap: since it would rely on donors assuming their gifts have impact commensurate with GiveWell’s cost-per-life-saved numbers, it would be deceptive. This ethical objection doesn’t make sense. It implies that it’s unethical to cooperate in the iterated prisoner’s dilemma. It also assumes that people are taking the cost-per-life-saved numbers at face value – and if so, then GiveWell already thinks they’ve been misled.
Part 4: Influence, access, and independence
Influence via habituation vs track record
Partial support by Good Ventures for the GiveWell top charities might be motivated by a desire to influence more donors to give to effective, evidence-backed charities. If this is the motivation behind partial funding, then the strategy is inherently deceptive (which undercuts the ethical reservations addressed at the end of Part 3). The mechanism by which partial funding influences other donors to give is by leading them to believe that both of the following are true:
The GiveWell Top Charities are part of the Open Philanthropy Project’s optimal philanthropic portfolio, when only direct impact is considered.
There’s not enough money to cover the whole thing.
These are highly unlikely to both be true. Global poverty cannot plausibly be an unfillable money pit at GiveWell’s current cost-per-life-saved numbers. At least one of these three things must be true:
GiveWell’s cost-per-life-saved numbers are wrong and should be changed.
The top charities’ interventions will reach substantially diminishing returns long before they’ve managed to massively scale up.
A few billion dollars can totally wipe out major categories of disease in the developing world.
The right way to influence future donations is to establish an unambiguous track record.
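The logic here is simple arithmetic: if the cost-per-life-saved numbers are roughly right and don’t degrade with scale, then a few billion dollars covers deaths on the scale of a whole disease category. Here is a minimal back-of-envelope sketch with hypothetical round numbers, not GiveWell’s actual estimates or official health statistics.

```python
# Back-of-envelope sketch with hypothetical round numbers (not GiveWell's
# actual estimates): if lives can be saved for a few thousand dollars each,
# a few billion dollars buys lives on the scale of an entire disease
# category, so the funding gap can't be bottomless at those prices.
cost_per_life_saved = 3_000            # hypothetical, dollars per life saved
annual_deaths_in_category = 500_000    # hypothetical, yearly deaths from one major disease

budget = 3e9                           # "a few billion dollars"
lives_saved = budget / cost_per_life_saved
years_covered = lives_saved / annual_deaths_in_category

print(f"{lives_saved:,.0f} lives saved")                              # 1,000,000 lives saved
print(f"~{years_covered:.0f} years of all deaths in the category")    # ~2 years
```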
Access via size
The Open Philanthropy Project might want to spend down its available funds slowly in order to preserve its status as a large foundation, which might get it a seat at the table where it otherwise might not. But there are two obvious problems with this argument. First, if you think that there’s some threshold like $5 billion below which it’s hard to get attention, then the obvious thing to do is set aside the $5 billion, and spend the rest without this constraint. Then you can slow down again if necessary once you approach that threshold. The other objection is more important: actually making more grants seems like the obvious way to signal willingness to make grants, and thus to make potential grantees eager to talk to you. This is an argument for looser spending, not tighter spending.
Independence via many funders
GiveWell might be reluctant to accept a single-funder situation, because it would jeopardize GiveWell’s or its recommended charities’ independence. In the case of the recommended charities, this should be dealt with on a case-by-case basis. In the case of GiveWell itself, the “splitting” recommendation seems more like evidence of, than a solution to, independence or conflict of interest problems. The obvious thing to do is either fully separate the organization recommending charities to the public from the organization advising a single major philanthropic foundation, or make it clear that GiveWell’s recommendations are recommendations by what’s effectively Good Ventures staff.
Part 5: Other people know things too
When people give based on GiveWell recommendations, this is in some sense outside validation that the recommended charities are good. It would be a bad sign if they independently decided to stop. If Good Ventures crowds out other donors, it might destroy this information source.
If indeed GiveWell donors are a good source of outside validation, this undercuts the argument in Part 2 that they’re strictly worse at giving than the Open Philanthropy Project, especially under conditions of diminishing returns. In this scenario, crowding out is good, not bad.
If we reject the claim that GiveWell donors are seriously evaluating the top charities, then their apparent informativeness is illusory, and the only harm from crowding them out is loss of a funding source.
I think the crowding out problem is real, but the biggest problem is crowding out of attention, not money. I personally know of several cases in which EAs were reluctant to independently investigate a potential giving opportunity because they worried that it would step on OPP’s toes. (Likewise, I’ve heard many EAs assume that simply because a charity was recommended by GiveWell, the intervention works with near-certainty.) This is wrong and EAs should get back to work. The Open Philanthropy Project might be able to help by making it easier to check, for any given focus area, whether:
They have already evaluated it (and decided to fund, not fund because money’s not the limiting factor, or not fund because it’s not interesting).
An evaluation is in progress.
An evaluation is not in progress.
An evaluation has been discontinued (and why).
Part 6: Recommendations
In my original series on GiveWell and splitting, I focused on principles, leaving recommendations for the end, and making them fairly general. This is because I don’t really think that organizing a pressure group to extract specific concessions has good prospects.
What I actually want GiveWell, the Open Philanthropy Project, and Good Ventures to do is consider my arguments, combine them with any inside info I might lack, and then do the right thing as they judge it. As an outsider, I shouldn’t try to micromanage them – all I should really do is try to figure out whether they’re trying to cooperate, and if so, try to help them when I see opportunities to do so. So it’s with some reluctance, and only on account of a fair amount of encouragement from others, that I actually try to tell them how to do their jobs.
The recommendations below are all worded as fixes to problems, but if I saw them implemented, I’d be affirmatively excited.
Assess outcomes
Evidence of positive impact is at the very core of GiveWell’s value proposition. GiveWell’s impact page tracks two inputs GiveWell has influenced: money moved, and attention (in the form of web traffic). These are important costs, but GiveWell should also measure benefits. The impact page should assess outcomes of the sort GiveWell attributes to its top charities.
That means empirical after-the-fact estimates of things like how many kids’ lives were saved by AMF, how much health, test scores, and incomes improved due to the efforts of SCI, Sightsavers, Deworm the World, and the END Fund, and what measurable improvements to people’s well-being were made by GiveDirectly. It should also be easy to find after-the-fact estimates for former top charities such as VillageReach, whose funding gaps were completely filled according to GiveWell.
GiveWell’s before-the-fact cost-per-life-saved estimates are a good starting point, but it’s important to test whether those numbers are accurate. The after-the-fact numbers may be noisy, and it’s fine to have ample disclaimers about that, but they should be the most prominently featured numbers.
If there aren’t numbers available for this, that’s fine. But in that case, the impact page should say so, prominently, until such time as they are. And GiveWell has some ability to make such numbers available; it has a fair amount of leverage over many of these charities. The Open Philanthropy Project also has the capacity to make grants for this specific purpose. If that means changing the ways these charities operate – preregistering predictions, measuring before and after – then so much the better.
As an aside, it would be great to get GiveDirectly to test more explicitly the macroeconomic offsetting problem. Do cash transfers increase absolute wealth, or only shift it around? What happens if you give to everyone in a village? What happens to the neighboring villages? Do you get offsetting inflation? The footnotes to GiveWell’s page on GiveDirectly say that there’s a study under way to answer these sorts of questions, but the study has an $8 million funding gap. Fully funding this study should be a priority if GiveDirectly stays on the Top Charities list.
Communicate scope
Organizations like the Open Philanthropy Project (and the Centre for Effective Altruism) have very broad missions. I’ve talked to people who are tempted to defer to such organizations because their implied scope is “everything”. As a result, EAs may preemptively crowd themselves out of areas these organizations might potentially look into, but aren’t currently doing much about. The Open Philanthropy Project and similar organizations can mitigate this problem by making their scope clearer.
If such organizations make it clearer what they’re likely to be focusing on and what they’re not, I think this would help. The Open Philanthropy Project has done a good job communicating this on the level of major focus areas, like political advocacy and global catastrophic risks. It would be helpful to have more granular information, such as lists of:
Investigations that have been abandoned or put on ice.
Investigations in progress.
Investigations that are planned but have not yet begun.
Potential focus areas rejected because the Open Philanthropy Project doesn’t think money is the limiting factor.
If there were a page simply listing these things somewhere on the Open Philanthropy Project’s website, it would be easy for outsiders to see whether they’re at risk of duplicating effort.
Symmetry
The Open Philanthropy Project is currently massively capacity-constrained. I think this is in large part due to the lack of a clear position on whether it faces increasing or diminishing returns. Either position, held consistently, suggests that more giving decisions should be delegated to outsiders, though in different ways.
If you’re giving away money, and you find someone who you think is doing things in your optimal portfolio of strategies, you should assign at least some credence to their next project also being good, and you’ll want to save yourself and them the overhead of checking in if possible. This motivation is not that compelling if you’re tight on cash, but it’s pretty compelling if you have many years of reserves at your current rate of giving.
I have four specific suggestions to resolve the current bottlenecking problem:
Overgranting
Prize grants
Unaccountable delegation to individuals
Very large grants to established organizations
The first three make the most sense under a diminishing returns scenario. The last makes more sense if returns to scale are increasing at the Open Philanthropy Project’s size.
The main thing that would persuade me this wasn’t a good idea would be very clear post-hoc impact tracking showing that the Open Philanthropy Project was learning and doing large amounts of good per dollar, and that its experience implied large gains from holding off on full funding until it had finished evaluating an organization.
Overgranting
The Open Philanthropy Project could make grants big enough that grantees have a multi-year reserve, much like the Open Philanthropy Project itself. In 2016, the Open Philanthropy Project didn’t manage to give away even its “overall budget for the year, which [it] set at 5% of available capital.” Of the remainder, 100% went to the Open Philanthropy Project’s implied reserve, and 0% went to bolstering the reserves of grantees. (The 95% of available capital that wasn’t budgeted was of course also allocated to the Open Philanthropy Project’s implied reserve.) This implies that the Open Philanthropy Project thinks it has an extremely strong judgment advantage over grantees (or small donors).
The Open Philanthropy Project currently sets its giving budget at 5% of eventual money moved. Before expected return on investment, that amounts to twenty years of reserves. After taking return on investment into account, implied reserves are much greater.
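As a rough sketch of that reserve arithmetic: spending a fixed 5% of current capital each year exhausts the pile in twenty years with no investment returns, and never exhausts it if returns outpace the spend rate. The 7% return and the round dollar figure below are hypothetical assumptions for illustration, not Open Philanthropy Project figures.

```python
# Rough sketch of implied reserves: spend a fixed dollar amount each year
# while the remainder compounds. The 7% return is a hypothetical assumption.
def years_of_reserves(capital, annual_spend, annual_return=0.0, max_years=1000):
    """Years until capital is exhausted at a fixed annual spend."""
    years = 0
    while capital >= annual_spend:
        capital = (capital - annual_spend) * (1 + annual_return)
        years += 1
        if years >= max_years:
            return float("inf")  # returns outpace spending; the pile never runs out
    return years

capital = 10e9                    # hypothetical round figure, on the order of the amount mentioned above
annual_spend = 0.05 * capital     # a giving budget of 5% of capital, held fixed in dollars

print(years_of_reserves(capital, annual_spend))        # 20 (no investment returns)
print(years_of_reserves(capital, annual_spend, 0.07))  # inf (hypothetical 7% return)
```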
The obvious thing to do here would be to increase grant sizes severalfold, in some proportion to the extent to which the Open Philanthropy Project thinks grantees have good independent judgment. (Note that this argument applies even if potential grantees do not currently know how they’d manage to spend the last dollar – neither does the Open Philanthropy Project!)
The equilibrium solution is one where the Open Philanthropy Project estimates that its last dollar has similar expected impact to each grantee’s last dollar. It seems reasonable to make an exception for some learning grants.
Prize grants
The Open Philanthropy Project occasionally talks about track record as a reason to give money even in the absence of specific future programs that seem promising. My sense is that there’s substantially more willingness to give money to people with good track records than the public record suggests. If so, this sort of granting should be scaled up and publicized. One potential mechanism could be Paul Christiano’s and Katja Grace’s impact certificates idea, but after-the-fact grants could be made without that infrastructure too.
Unaccountable delegation to individuals
Another vehicle for low-overhead delegation is the EA Funds, currently advised primarily by Open Philanthropy Project staff. Likewise, the Open Philanthropy Project could make grants to individuals who’ve chosen to focus on promising areas – such as the people interviewed in the course of investigations – to be regranted or spent as they think best. As a bonus, this would probably make people more interested in helping the Open Philanthropy Project learn about things! Grants to Nobel-winning scientists might also be good here.
Very large grants to established organizations
The Open Philanthropy Project is unusually cause-neutral in its outlook, but in many of the major focus areas it’s identified, there are established large organizations. If there are increasing returns to scale, those organizations seem likely to do a better job spending the money. I give some examples in Part 2. For instance, IARPA and Skoll Global Threats both have interests in mitigating global catastrophic risks. The Gates Foundation’s working on global health and development. The CDC has a mandate to do things about biosecurity. The NIH specializes in funding scientific research.
If these organizations won’t take additional money, that’s some evidence against increasing returns to scale.
Market humbly
GiveWell and the Open Philanthropy Project have been very careful not to make false claims in their explicit public statements. They’ve taken proactive steps to clear up misconceptions. This is good.
But an organization’s public image and marketing are part of its message. GiveWell’s public promotion strategy does not seem to have a correspondingly strong track record of accurately informing people.
If GiveWell doesn’t think the GiveWell top charities are the best options (e.g. because it thinks the Open Philanthropy Project’s last dollar has greater impact than the top charities’ marginal dollar), it should call them something other than “top-rated charities” or make it much clearer that they’re only “top” within some restricted category. For instance, “our top charities are evidence-backed” elides the difference between these two statements:
The judgment that these are the best charities is strongly evidence-based.
These are the best charities we could find within the limited category of charities with a strong evidence base.
If, on the other hand, GiveWell does think the top charities are the best options, it needs to make that disagreement with the Open Philanthropy Project clear.
As a second example of the sort of thing I mean, the Atlantic reported many experts on philanthropy as saying, “if you want to save lives with certainty, you have to go to GiveWell.” If that reputation is not accurate, GiveWell should write a letter to the editor correcting the record. For whatever reason, people frequently get the impression that GiveWell’s top charities are ways to have an impact with certainty. GiveWell’s blog post on its uncertainty around deworming was a good first step towards resolving this, but I don’t expect it to be enough.
I expect this kind of issue to be unusually difficult to resolve, in part because standard advice on how to promote an organization tends to include engaging in deceptive practices. In this case, I think that GiveWell is trying to meet the apparent demand for simple recommendations by making its recommendations simple. As an accidental side effect, GiveWell’s promotional messaging implies – while never explicitly stating, because no one intends deception – that the underlying problem of which charity is best is correspondingly simple. This is of course false, as the GiveWell website makes abundantly clear to anyone who reads it carefully and in detail (i.e. almost no one).
Unwind partial funding
As mentioned in Part 4, “splitting” has the unfortunate side effect of creating the impression that these two things are true:
The Open Philanthropy Project and Good Ventures have credibly vouched for the Top Charities, by funding them as part of their optimal giving portfolio.
Money for those organizations’ priorities is scarce, because they lack the capacity to fully fund the top charities.
I’m very ready to believe that no one had any intent to deceive. GiveWell, the Open Philanthropy Project, and Good Ventures could make this very clear by refusing to profit from any accidental deception that may have occurred.
If the optimal level of Good Ventures funding for GiveWell top charities is full funding, then the thing to do is to fully fund them. During the unfortunate accidental episode of funding gap theater, some people may have given to the GiveWell top charities who wouldn’t have given if there had been a commitment to full funding. I suggest simply offering to refund the money of anyone who can verify they gave to the Top Charities in this period and feels misled. If this turns out to be difficult to pull off lawfully, then offer a donation swap to the nonprofit of their choice.
Likewise, if the optimal level of Good Ventures funding for GiveWell top charities is none, then stop funding them – perhaps gradually to avoid the “whiplash” concerns mentioned in GiveWell’s 2016 follow-up post on partial funding, but make the intention clear up front. GiveWell should recommend whatever it thinks is best for its audience, but it shouldn’t additionally try to get donors to think that Good Ventures thinks the top charities are competitive with their other options. Again, for the years of accidental funding gap theater, they can simply offer to refund any verified top charities donor who says their donation decision was affected by the fact that Good Ventures was giving.
Of course, if a donor says keep the money, then that’s fine! I expect fairly few donors would accept this offer. But it still seems like it would be a powerful, credible signal of cooperative intent.
Separate organizations and offices
I’ve been holding GiveWell to a higher standard than I’d apply to most other donors. This is because it’s in the somewhat unusual position of simultaneously making large private donation decisions and asking the public to trust it as an objective judge of charities. I’ve written a bit about the communication and conflict-of-interest problems that naturally follow from co-locating and sharing staff among GiveWell, the Open Philanthropy Project, and Good Ventures. I recommend separate organizations with separate offices.
Doubtless there are some efficiency gains to having informal conversations with Open Philanthropy Project staff – and to sharing staff – but it sure seems like there are huge costs as well, since this has led to incoherent and misleading behavior. If GiveWell’s public pages had to be good enough to persuade the Open Philanthropy Project to invest in the GiveWell top charities – if GiveWell’s public product were the main way it communicated with the Open Philanthropy Project – that would align incentives better, away from opaqueness, misdirection, and unprincipled horse-trading among insiders, and towards making a clear public case for whatever GiveWell actually thinks is best.
As I’ve said before, GiveWell and Open Philanthropy Project staff have made strenuous efforts to avoid such temptations and to instead tell the truth and do the right thing. But it seems better to simply avoid situations where avoiding such temptations requires such strenuous effort.
(Cross-posted from my personal blog.)