Matching-donation fundraisers can be harmfully dishonest
Anna Salamon, executive director of CFAR (named with permission), recently wrote to me asking for my thoughts on fundraisers using matching donations. (Anna, together with co-writer Steve Rayhawk, has previously written on community norms that promote truth over falsehood.) My response made some general points that I wish were more widely understood:
Pitching matching donations as leverage (e.g. “double your impact”) misrepresents the situation by overassigning credit for funds raised.
This sort of dishonesty isn’t just bad for your soul, but can actually harm the larger world—not just by eroding trust, but by causing people to misallocate their charity budgets.
“Best practices” for a charity tend to promote this kind of dishonesty, because they’re precisely those practices that work no matter what your charity is doing.
If your charity is impact-oriented—if you care about outcomes rather than institutional success—then you should be able to do substantially better than “best practices”.
So I’m putting an edited version of my response here. (UPDATE: Per Denkenberger’s comment below, see Jeff Kaufman’s earlier partly overlapping discussion of matching donations.)
Matched donation fundraisers are typically dishonest
In the typical matched donation fundraiser, a large donor pledges to match the donations from everyone else, up to a specified level, such as $500,000. The charity can then claim to other donors that this is an unusually good time to give, because for each dollar they give to the charity, the charity will receive an additional dollar from the matching donor. There are two levels on which such matched donation offers tend to be dishonest:
The match is often illusory.
Even when the match is real, it only motivates donors by overassigning credit.
GiveWell explains the problem of illusory matching fairly well:
We know that donors love donation matching. We know that if we could offer donation matching on gifts to our top charities this giving season, our money moved would rise. And we know that we could offer donation matching if we thought it was the right thing to do: there are donors planning six-figure gifts to our top charities this year who would almost certainly be willing to structure their gifts as “matches” if we asked. [...]
But we’ve decided not to do this because we would feel dishonest. We’d be advertising that you can “double your gift,” but the truth would be that we just restructured a gift from a six-figure donor that was going to happen anyway. We’ve discussed [...] finding a donor who would give to our top charities only on condition that others did – but not surprisingly, everyone we could think of who would be open to making a large gift to our top charities would be open to this whether or not we could match them up with smaller donors. Ultimately, the only match we can offer is illusory matching.
But the main problem with matching donation fundraisers is that even when they aren’t lying about the matching donor’s counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.
I’ll illustrate this with a toy example. Let’s say that a charity—call it Good Works—has two potential donors, Alice and Bob, who each have $1 to give, and don’t know each other. Alice decides to double her impact by pledging to match the next $1 of donations. If this works, and someone gives because of her match offer, then she’ll have caused $2 to go to Good Works. Bob sees the match offer and reasons similarly: if he gives $1, this causes another $1 to go to Good Works, so his impact is doubled—he’ll have caused Good Works to receive $2.
But if Alice and Bob each assess their impact as $2 of donations, then the total assessed impact is $4, even though Good Works only receives $2. This is what I mean when I say that credit is overassigned: if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.
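Here is a minimal sketch of that accounting check in code. It only restates the toy example above; the names and dollar amounts are the illustrative ones from the example, not figures from any real fundraiser.

```python
# Toy example: Alice pledges to match the next $1; Bob gives $1 because of the match.
alice_gift = 1  # dollars Alice actually gives
bob_gift = 1    # dollars Bob actually gives

funds_received = alice_gift + bob_gift  # what Good Works actually gets: $2

# Each donor reasons "my $1 caused the other $1 as well", so each claims $2 of impact.
alice_claimed_impact = alice_gift + bob_gift
bob_claimed_impact = bob_gift + alice_gift
total_claimed_impact = alice_claimed_impact + bob_claimed_impact

print(funds_received)        # 2
print(total_claimed_impact)  # 4 -- the claimed credit exceeds the funds raised
```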
For Alice to be responsible for $2 of donations, she has to reason that she’s overridden Bob’s agency, so that Bob isn’t responsible for his own action. If Bob agrees that he gets zero credit, then there’s no problem. But if Bob reasons symmetrically to Alice, then each of them can coherently think that they moved more than $1 only if they also believe that their own agency has been eroded by the match agreement: that they’ve each forfeited some share of their agency or optimization power by letting themselves be enticed by the other’s match offer.
I think this is what GiveWell means when discussing what it considers a non-illusory form of matching, “influence matching”:
Influence matching is something I think impact-maximizing donors ought to be concerned about. In the short run, influence matching makes it true that your $1 donation results in $2 donated to the charity in question. But it also means that you’ve let the matching funder influence your giving – perhaps pulling you away from the most impactful charity (in your judgment) to a less impactful one – just by the way they structured their gift. By giving, you are rewarding this behavior by the matching funder, and you may be encouraging them to take future unconditional gifts and turn them into conditional gifts, because of the ability to sway other donors.
Perhaps, rather than giving your $1 to the charity the matching funder is pushing, you should fight back by structuring your own influence matching – making a conditional commitment to the highest-impact charity you can find, in order to pull other dollars in toward it.
But is this just a nitpick by overly scrupulous moralists, or does it actually cause some harm?
Overassignment of credit obscures opportunity cost
I claim that the moral discomfort some, such as GiveWell, feel about matching donation fundraisers reflects an actual harm caused by the dishonesty: it causes people motivated by it to make worse decisions. I’ll first lay out a simple model in which overassigning credit looks benign. Then I’ll explain how it can instead cause harm.
Coordinating to shift from consumption to giving
Let’s go back to the example of Alice and Bob. Alice cares about her personal consumption, and about Good Works, but not about Bob’s personal consumption. She’d rather use $1 to buy ice cream than give it to Good Works, but if she can thereby redirect $1 from Bob’s personal consumption to Good Works as well, she thinks it’s worth it. Bob’s preferences are the mirror image of Alice’s.
Each of them prefers the world where Good Works gets $2 to the world where they buy ice cream. But if neither thinks they can affect the other’s action, then they each prefer to buy ice cream rather than giving $1 to Good Works. Thus, when Alice offers and Bob accepts a match, they move into a world-state they both prefer. This is true regardless of how “moral credit” is assigned.
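The same preference structure can be written out as a tiny two-player game. This is only a sketch: the numeric payoffs are assumptions chosen to encode the orderings described above (each player likes their own ice cream more than their own lone gift, but prefers mutual giving to mutual ice cream), nothing more.

```python
# Payoffs encode a prisoner's-dilemma-like structure:
# for each player, "ice_cream" beats "give" no matter what the other does,
# but both prefer (give, give) -- $2 to Good Works -- over (ice_cream, ice_cream).
PAYOFFS = {  # (alice_choice, bob_choice) -> (alice_payoff, bob_payoff)
    ("ice_cream", "ice_cream"): (1, 1),
    ("give",      "ice_cream"): (0, 3),
    ("ice_cream", "give"):      (3, 0),
    ("give",      "give"):      (2, 2),
}

# Acting independently, each player's dominant choice is ice cream;
# the match offer is a coordination device that moves them to (give, give).
for profile, payoff in sorted(PAYOFFS.items()):
    print(profile, payoff)
```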
Harms from double-counting
I suspect that in practice donations trade off against other donations more often than they trade off against consumption. This can lead to real harms from double-counting impact.
Let’s consider two new strangers, Carl and Denise, who each have a fixed charity budget of $1. Carl and Denise are effective altruists, and want to maximize total utility with their charity budgets.
Charity A creates 3 utils per dollar, and charity B creates 2 utils per dollar. By default, Carl and Denise will each give to charity A, creating 6 utils.
Charity B approaches Carl with the idea that he make a match offer. Carl jumps at the opportunity to cause $2 to be given to charity B, creating 4 utils, one more than the 3 utils his donation would have created before. Denise finds out about the match offer and switches her donation to charity B, on the same basis. But the total amount of money moved to charity B is not the “doubled” $2 + $2 = $4, but just $2, resulting in 4 utils. This is less than before!
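A minimal sketch of that arithmetic, using the utils-per-dollar figures from the toy example (the assumption, as above, is that Carl and Denise each have exactly $1 and would otherwise both give to charity A):

```python
# Utils per dollar from the toy example.
UTILS_PER_DOLLAR = {"A": 3, "B": 2}

# Default: Carl and Denise each give $1 to charity A.
default_utils = 2 * UTILS_PER_DOLLAR["A"]   # 6 utils

# With the match offer: Carl's $1 match plus Denise's $1 donation both go to charity B.
matched_utils = 2 * UTILS_PER_DOLLAR["B"]   # 4 utils

# Each donor privately credits themselves with "doubling" their impact,
# but only $2 actually moves, and total utils fall from 6 to 4.
print(default_utils, matched_utils)  # 6 4
```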
In general, the more Carl and Denise care about the same things, the more we should expect that the situation is like this, and not like the prior example with Alice and Bob.
Honest and open coordination
In the above toy example, the harm is directly caused by double-counting. I think this is a generalizable principle—strategies that get people excited about things by overassigning credit or underassigning costs will lead to well-intentioned donors misallocating their resources. So we should instead look for coordination mechanisms that work by clarifying, rather than obscuring, the incentives of the participants.
I’ll give two examples of how this might work:
Threshold coordination to fund projects that are only viable once some minimal funding threshold has been passed.
Giving pledges, in which potential philanthropists match each other’s commitments to give a larger share of their wealth or income to charity than they otherwise might have done.
Threshold coordination
There’s a version of “matching” that doesn’t depend on anything like overassigning credit. Let’s say there’s some program that only makes sense if $X gets spent on it, but your charity budget is $0.1X. You don’t really want to dump your money into a money pit for no reason, since it’s not very likely that your $0.1X alone makes the difference between funding and not funding the thing; but if you found nine other people like you, you’d totally go for it.
This is the Kickstarter model: no one pays unless there’s enough money pledged to produce the thing people want. This model only makes sense if there really are natural thresholds. One natural threshold for a charity would be the level of long-run funding below which the charity would have to shut down. I can also imagine using a Kickstarter-style campaign for special programs. If, after prioritizing appropriately, a charity doesn’t have enough money to fund project X, but suspects some donors might be especially excited about it, a conditional pledge campaign could make a lot of sense.
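A sketch of the conditional-pledge mechanic described above, under the simplest possible rule (pledges are collected only if the total reaches the threshold). The function name, the threshold, and the donor figures are hypothetical and only mirror the $X / $0.1X example:

```python
def collectable_pledges(pledges, threshold):
    """Kickstarter-style rule: charge pledges only if the threshold is met."""
    return pledges if sum(pledges) >= threshold else []

# Ten donors who each have 0.1 * X and only want the program funded at $X or above.
X = 100_000
pledges = [0.1 * X] * 10

print(collectable_pledges(pledges, threshold=X))      # all ten pledges are collected
print(collectable_pledges(pledges[:9], threshold=X))  # [] -- no one's money goes into a pit
```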
GiveWell discusses this as the other non-illusory form of matching:
Coordination matching. A charity needs to raise a specific amount for a specific purpose. A large funder (the “matcher”) is happy to contribute part of the amount needed as long as the specific purpose is achieved; therefore, the matcher makes the gift conditional on other gifts.
Thresholds can be helpful and motivating even without conditionality. In its 2015 winter fundraiser, MIRI described how aggressive its program would be at different levels of funding:
Target 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.
Target 2 — $450k: Maintaining MIRI’s growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.
Target 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors’ support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.
Target 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.
This seems like a pretty good thing for a charity to do regardless of whether it provides a coordination mechanism—creating motivation by revealing relevant information seems clearly good. I was more excited about spreading the word about that MIRI fundraiser than I have been about ones shortly before or after.
One might object that predictions are hard, so claimed thresholds for different programs can be misleading. I agree, and have personal experience with this as a CFAR donor. But I don’t think it’s dishonest to make mistaken predictions, especially if you indicate your uncertainty, and most especially if you follow up afterwards by checking what happened against what you predicted and making a serious effort to calibrate your future predictions, taking past misses into account.
Giving pledges
In the first toy model, Alice and Bob successfully coordinated towards an outcome better aligned with their preferences. I don’t think it’s a coincidence that this example involved shifting money from consumption to giving. That shift makes the opportunity cost argument less relevant, because Bob’s next-best option is not valued very highly by Alice, and vice versa.
To some extent, the benefit of this coordination is obscured by linking it to a particular charity. The benefit is in Alice and Bob agreeing to allocate their resources in a more public-spirited way, not in Alice’s influence over which charity Bob gives to. I don’t see any particular reason to mix these considerations. Why not just coordinate about the first thing, and let each person use their own judgment about what charity is best?
In real life, this looks like the Giving What We Can pledge, in which participants make a public pledge to give 10% of their income to effective charities. (There’s also a time-bounded trial pledge.) This is explicitly about shifting money from consumption to giving. If you wanted to use matching mechanisms, you might ask about whether there’s anyone who’s on the fence about taking the pledge, but would do it if that would move someone else to do so. Then pair them up, and have them take the pledge together.
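As a sketch of that pairing idea (the function, the names, and the pairing rule are all hypothetical; a real version would need consent and follow-up rather than a simple list operation):

```python
def pair_fence_sitters(fence_sitters):
    """Pair up people who would take the pledge if doing so moved one other person to take it.

    Returns (pairs, unmatched): each pair takes the pledge together;
    anyone left over waits for the next willing partner.
    """
    pairs = [tuple(fence_sitters[i:i + 2]) for i in range(0, len(fence_sitters) - 1, 2)]
    unmatched = fence_sitters[-1:] if len(fence_sitters) % 2 else []
    return pairs, unmatched

print(pair_fence_sitters(["Ava", "Ben", "Cleo"]))
# ([('Ava', 'Ben')], ['Cleo'])
```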
Some other related pledges:
The Giving Pledge, for billionaires pledging to give away half of their wealth.
Founders Pledge, in which startup founders pledge to give 2% of the proceeds from selling their startup to charity.
Raising for Effective Giving, in which people (the focus is on professional poker players) pledge 2% of their income to charity; the organization also promotes effective charities.
Best practices tend towards dishonesty
I think the problem with—and prevalence of—matching donations is part of a broader phenomenon. When the activity of extracting money from donors is abstracted away from the other core activities of an organization, like assessing and running programs, best practices tend towards distorting the truth. You end up with money-extraction strategies that work regardless of what the organization is doing, and those aren’t going to be honest strategies.
The messaging advice that works for any organization is, necessarily, advice that works for organizations with terrible programs. Making it easy to evaluate your programs on the merits seems unlikely to satisfy this requirement. So standard practice will be to obscure your impact.
But donors still want a better deal. So what do you do? The only “better deal” left is one that anyone could offer, a fully generic one that doesn’t depend on the details of your program. Offering “leverage”—a 2-for-1 sale—is a perfect example of this. Both sides of a matching drive get to think that they’re buying the other side’s participation “for free”. Of course, they’re not. Strategies that appear to buy influence “for free” only appear to work by hiding the ball.
Risk arbitrage for evil and for good
Genuinely impact-oriented organizations have the opportunity to implement a different class of strategies that compete less directly with the typical charity. In particular, if you’re uncertain how effective your program is, you mainly care about raising money in the scenarios where your program is effective. This means that some fundraising and communication strategies that increase your organization’s financial risk carry much less downside risk from an impact perspective: in the scenarios where your program isn’t effective, you shouldn’t treat failing to raise funds as a cost at all.
I’ll explain risk arbitrage in the normal case of finance, where it’s a best practice to cheat clients. Then I’ll explain how it applies to philanthropy, and can be used for good.
Hedge fund roulette
Matthew Yglesias uses roulette as a metaphor for hedge funds’ strategies, in order to explain why hedge fund managers have an incentive to pursue risk even if it doesn’t benefit their clients:
Good news for investors who like to lose all their money, “John Meriwether, the hedge fund manager and arbitrageur behind Long-Term Capital Management, is in the process of setting up a new hedge fund — his third.” What’s that, you ask, didn’t his first fund lose all its money? Why, yes. And didn’t the second fund fold because it lost a ton of money? Yes, quite so. So how will this new one be different? It won’t! It’s “expected to use the same strategy as both LTCM and JWM to make money: so-called relative value arbitrage, a quantitative investment strategy Mr Meriwether pioneered when he led the hugely successful bond arbitrage group at Salomon Brothers in the 1980s.”
The way this works is that you identify arbitrage opportunities such that you make trades you’re overwhelmingly likely to make money on. But those opportunities only exist because the opportunities are very small. So to make them worth pursuing, you need to lever-up with huge amounts of debt. Which means that on the rare moments when the trades do go bad, everything falls apart: “The strategy typically has a high ‘blow-up’ risk because of the large amounts of leverage it uses to profit from often tiny pricing anomalies.”
As a friend puts it, this strategy is “literally the equivalent of putting a chip on 35 of the 36 roulette numbers and hoping for no zero/36.” But you’re doing it with borrowed money. I’m not a huge believer in human rationality, so I totally understand how this scam worked once. That he was able to get a second fund off the ground is pretty amazing.
Here’s how the “roulette” strategy looks to the hedge fund manager: At the beginning of each period, you take all your assets under management and distribute them evenly on the roulette numbers 1-35. Each of these numbers has a 1⁄37 chance of coming up. If the roulette ball lands on one of your numbers, you get 35 times the amount of money you put on that number, plus your initial bet back. If you’re managing a $35 fund, you’d have $1 on each number, so you’d end up with $36, a 1⁄35 or roughly 2.9% gain. On the other hand, if the ball lands on 0 or 36, you lose all the money. At the end of each period, you get paid 15% of the return on your fund. That means that if you win, you get paid 15% / 35 = 0.43% of assets under management. If you lose, you get nothing. So each period, your expected payout is 15% / 35 * (35 / 37) = 0.4% of assets under management.
Here’s how the “roulette” strategy looks to the client: In “winning” periods, your holdings appreciate by (100% − 15%) / 35 = 2.43%, after accounting for the hedge fund manager’s fee. In “losing” periods, your holdings decline by 100%. So your expected return is 85% / 35 * 35 / 37 − 100% * 2 / 37 = −115% / 37 = −3.1%. Not a good deal!
It’s a best practice for hedge fund managers like John Meriwether to play roulette, because they make money when the client wins, but don’t lose money when the client loses.
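Here is a small sketch of that arithmetic, using exactly the numbers from the roulette analogy above (35 of 37 slots covered, a 35:1 payout, a 15% performance fee; none of this models any real fund):

```python
WIN_PROB = 35 / 37         # bets cover 35 of the 37 slots
GROSS_WIN_RETURN = 1 / 35  # a $35 fund becomes $36 when any covered number hits
FEE = 0.15                 # manager takes 15% of positive returns, nothing on losses

# Manager's expected fee per period, as a fraction of assets under management.
manager_expected = FEE * GROSS_WIN_RETURN * WIN_PROB
print(f"manager: {manager_expected:.2%} per period, never negative")  # ~0.41%

# Client's expected return per period: a small after-fee gain when winning,
# total loss when the ball lands on 0 or 36.
client_expected = (1 - FEE) * GROSS_WIN_RETURN * WIN_PROB - 1.0 * (2 / 37)
print(f"client:  {client_expected:.1%} per period")  # ~-3.1%
```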
Risk arbitrage for good
What’s the analogous altruistic risk-arbitrage strategy? If I’m running a charity because I care about having a positive impact on the world, then I only care about raising more funds if my program is effective at improving things. If my fundraising strategy runs the risk of letting donors correctly conclude that my program doesn’t work, and consequently decline to fund it, then I don’t count that as a cost.
To most charities, this seems like an increase in risk. But from an altruistic perspective, you’re reallocating funding from the possible worlds where your charity doesn’t work, to the worlds where it does, and this is an unambiguous gain.
I’m going to start by working through a simple quantitative illustration of this principle. (Skip it if the principle already seems trivially true.) Then I’ll give a few examples of how someone might implement this kind of strategy.
Value of revealing information
Let’s say that after evaluating your program as well as you can, you think it has a 50% chance of not working, and a 50% chance of saving a life for each $1,000 of funding. So your expected cost per life saved is $2,000. There’s a philanthropist with a million dollars considering your program. Their next best option has a cost per life saved of $4,000.
Because the philanthropist knows that they have imperfect information and you might be misleading them, they discount your effectiveness estimates by another 50%, so that from their perspective both programs look equally good. They split their donation 50-50. By your estimate, they have saved $500,000 / $2,000 + $500,000 / $4,000 = 375 lives.
If you reveal more detailed information about your program, this could cause them to reallocate money to your program, if your case is persuasive. They could also reallocate money to the other program, if they correctly spot problems in your plan that you’d missed. To keep things simple, let’s say that if you reveal information about your program, there’s a 75% chance that they correctly judge which program works and reallocate all their money to that one, and a 25% chance that they reallocate all their money to the worse program.
You already think there’s a 50% chance your program doesn’t work at all. In that scenario, if you reveal information, there’s a 75% chance they fund the other organization fully, saving 250 lives, and a 25% chance that they decide to fund yours, saving no lives.
Then there’s a 50% chance your program saves lives for $1,000. In that scenario, if you reveal information, there’s a 75% chance that they fund your organization fully, saving 1,000 lives, and a 25% chance that they reallocate funds to the other organization, saving 250 lives.
Thus, the expected number of lives saved, if you reveal the information, is 50% * (75% * 250 + 25% * 0) + 50% * (75% * 1,000 + 25% * 250) = 500. This is a substantial improvement!
You did not increase the expected funding level of your organization. Instead of a 100% chance of 50% funding, you got a 50% * 75% + 50% * 25% = 50% chance of 100% funding. But what you did was reallocate your chance of getting funded, from the possible worlds where your program doesn’t work, into the possible worlds where it does.
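Here is a sketch of that expected-value calculation, with every number taken from the example above (in particular, the 75% chance that the donor judges correctly once you reveal information is the stipulated assumption, not an empirical figure):

```python
BUDGET = 1_000_000
COST_PER_LIFE_YOURS = 1_000     # if your program works, which you estimate at 50%
EXPECTED_COST_PER_LIFE = 2_000  # your cost per life after the 50% discount
COST_PER_LIFE_OTHER = 4_000     # the philanthropist's next-best option
P_WORKS = 0.5
P_CORRECT = 0.75                # chance the donor judges correctly once you reveal info

# Status quo: the donor discounts your estimate and splits the budget 50-50.
lives_status_quo = (BUDGET / 2) / EXPECTED_COST_PER_LIFE + (BUDGET / 2) / COST_PER_LIFE_OTHER

# Revealing information: the donor fully funds whichever program they judge better.
lives_if_working = P_CORRECT * BUDGET / COST_PER_LIFE_YOURS + (1 - P_CORRECT) * BUDGET / COST_PER_LIFE_OTHER
lives_if_not = P_CORRECT * BUDGET / COST_PER_LIFE_OTHER + (1 - P_CORRECT) * 0
lives_reveal = P_WORKS * lives_if_working + (1 - P_WORKS) * lives_if_not

print(lives_status_quo, lives_reveal)                         # 375.0 500.0
print(P_WORKS * P_CORRECT + (1 - P_WORKS) * (1 - P_CORRECT))  # 0.5 chance you get funded
```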
Ways to reveal information
Instead of trying to optimize for appeal subject to honesty constraints, you might try writing a funding pitch to maximize the chance that someone already trying to fund something like your organization would recognize it as the organization they’re looking for. This pays off disproportionately when donors agree with your judgment, which is some evidence that your judgment is correct.
Relatedly, you can argue for your plans, exposing your premises clearly enough that if you’re making a mistake, donors should be able to spot it easily. This is likely to be more persuasive in the scenario where potential donors don’t find mistakes or evidence of poor performance or prospects, at the price of being less persuasive in the scenario where they do. It also opens you up to the upside risk of having someone correct an error in the planning stage, instead of having to try the thing before finding out it doesn’t work.
GiveWell is an excellent example of an organization that has written publicly about the reason for its actions. I gave an example above. (It also promotes charities that are willing to make themselves easier to evaluate.) I’m also in the middle of publishing a series of blog posts critiquing GiveWell based on the extensive information they’ve made publicly available. GiveWell even has a mistakes page, specifically highlighting its failings. This is the sort of thing you do when you want to succeed only in the worlds where you’re doing the right thing.
(Disclosure: I worked for GiveWell in the past. I don’t anymore and don’t expect to in the future, but am still on good terms with current and former GiveWell staff.)
The point of this argument isn’t so much to raise specific suggestions. Mainly, I’m hoping to promote the broader hypothesis to your attention: that this is a class of strategy that’s not a standard “best practice,” but works if you care about expected impact rather than conventional “success”.
I also hope this is an illustrative example of a broader principle: that things like honesty really are the best policy, that non-universalizable behavior really does tend to have nasty unintended consequences, that you should have a strong bias toward doing the right thing.
Utilitarian considerations shouldn’t be weighed against deontological scruples as though they were competing interests. While the articulable benefits of rule-breaking scale with the importance of the action, the unintended drawbacks are likely to similarly scale. We should override our moral inhibitions, not because it’s really important this time—not because the benefits are unusually large—but when we have some specific reason to believe that the costs are unusually small.
(Cross-posted from my personal blog.)
When this has been discussed before (e.g. here), an important paper was cited that showed that people respond even more strongly to challenge grants (people giving money unconditionally) than to matching grants. This avoids the ethical difficulty.
The difference between matching and challenge grants was not statistically significant, actually. More generally, that study’s evidence is suggestive at best; it was underpowered (couldn’t have distinguished a 30% increase in donations from noise) and didn’t correct for multiple (12 in the field, 10 in the lab) hypothesis tests. They also mis-described what a p-value means, which doesn’t directly invalidate their results but makes me pretty generally worried.
Part of your critique is mostly valid in cases where donors have a fixed donation budget and allocate it to the best cause they come across, taking a potential leverage factor into account. I wonder whether instead many donors (keep in mind that EAs are rare) donate on a whim, incentivized by the announcement of the matching, giving money they would not otherwise have donated anywhere with any particularly high probability.
I see another critique that applies to schemes that match “up to a specified level, say $500,000”, and I think you have not mentioned exactly this one explicitly. If that $500k level is expected to be reached in due time, then anyone whose donation was matched before the fund ran dry has in fact caused a total donation increase smaller than their personal contribution (in the most extreme case, zero): because of their donation, the fund ran dry a bit earlier, leaving room for one fewer person to donate within the scheme. The matching donor’s total contribution remains $500k either way, but one additional person was not incentivized to contribute (because you ‘dried out’ the matching fund earlier). So the matching in reality means your donation has had less impact rather than more, even if you and the other donors would not have had other opportunities to donate, i.e. even independently of what I see as one of the main critiques you mention.
Can you please let me know whether I am wrong? It seems to me that the first argument only makes sense if there is a limit set by the person doing the matching, and if there is no choice of charity.
I am asking because my current employer does gift matching, and I was trying to find out what EA says about this kind of program. Your post is quite interesting, but I am far from sure it actually applies to the donation matching they do, and I’d like to see whether that’s because your argument only works in some contexts that weren’t made explicit here.
However, I have a wide choice of charities I can give to, addressing very different problems. I would agree that thinking “I ought to give to this opera house because my gift will be matched” would be a poor choice; after all, the impact I could have by giving 2N to an opera house is far smaller than the impact I’d have by giving N to an EA charity. But if I can give to funds that actually help people, even if they are not charities evaluated by EA organizations, it seems to me that giving them $2N may be better than giving $N to a classic EA charity (assuming they are at least half as cost-effective as the EA charity).
Actually, even if the employer already knows that they want to give $N to charity this year, it seems to me that influencing their choice and ensuring they give to a more impactful charity is worth considering (I would not be surprised if that were actually a reason why they have this policy: to ensure that their giving is aligned with that of their employees).
Furthermore, since my employer has not put a company-wide limit on the matching, the offer does not seem dishonest: if all employees gave the maximum matched amount of $10,000/year, the company would, by its policy, actually give hundreds of millions of dollars per year. So it seems that in this case, donation matching is less dishonest than in the case where there is a limit and the donor already has a fixed amount in mind.