Matching-donation fundraisers can be harmfully dishonest

Anna Salamon, executive director of CFAR (named with permission), recently wrote to me asking for my thoughts on fundraisers using matching donations. (Anna, together with co-writer Steve Rayhawk, has previously written on community norms that promote truth over falsehood.) My response made some general points that I wish were more widely understood:

  • Pitching matching donations as leverage (e.g. “double your impact”) misrepresents the situation by overassigning credit for funds raised.

  • This sort of dishonesty isn’t just bad for your soul, but can actually harm the larger world—not just by eroding trust, but by causing people to misallocate their charity budgets.

  • “Best practices” for a charity tend to promote this kind of dishonesty, because they’re precisely those practices that work no matter what your charity is doing.

  • If your charity is impact-oriented—if you care about outcomes rather than institutional success—then you should be able to do substantially better than “best practices”.

So I’m putting an edited version of my response here. (UPDATE: Per Denkenberger’s comment below, see Jeff Kaufman’s earlier partly overlapping discussion of matching donations.)

Matched donation fundraisers are typically dishonest

In the typical matched donation fundraiser, a large donor pledges to match the donations from everyone else, up to a specified level, such as $500,000. The charity can then claim to other donors that this is an unusually good time to give, because for each dollar they give to the charity, the charity will receive an additional dollar from the matching donor. There are two levels on which such matched donation offers tend to be dishonest:

  1. The match is often illusory.

  2. Even when the match is real, it only motivates donors by overassigning credit.

GiveWell explains the problem of illusory matching fairly well:

We know that donors love donation matching. We know that if we could offer donation matching on gifts to our top charities this giving season, our money moved would rise. And we know that we could offer donation matching if we thought it was the right thing to do: there are donors planning six-figure gifts to our top charities this year who would almost certainly be willing to structure their gifts as “matches” if we asked. [...]
But we’ve decided not to do this because we would feel dishonest. We’d be advertising that you can “double your gift,” but the truth would be that we just restructured a gift from a six-figure donor that was going to happen anyway. We’ve discussed [...] finding a donor who would give to our top charities only on condition that others did – but not surprisingly, everyone we could think of who would be open to making a large gift to our top charities would be open to this whether or not we could match them up with smaller donors. Ultimately, the only match we can offer is illusory matching.

But the main problem with matching donation fundraisers is that even when they aren’t lying about the matching donor’s counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.

I’ll illustrate this with a toy example. Let’s say that a charity—call it Good Works—has two potential donors, Alice and Bob, who each have $1 to give, and don’t know each other. Alice decides to double her impact by pledging to match the next $1 of donations. If this works, and someone gives because of her match offer, then she’ll have caused $2 to go to Good Works. Bob sees the match offer and reasons similarly: if he gives $1, this causes another $1 to go to Good Works, so his impact is doubled—he’ll have caused Good Works to receive $2.

But if Alice and Bob each assess their impact as $2 of donations, then the total assessed impact is $4, even though Good Works only receives $2. This is what I mean when I say that credit is overassigned—if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.
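
To see the double-count concretely, here is a minimal sketch of the bookkeeping in this example; the figures are just the $1 gifts from Alice and Bob above.

```python
# Toy bookkeeping for the Alice/Bob example: each gives $1 to Good Works.
donations = {"Alice": 1, "Bob": 1}
total_received = sum(donations.values())            # Good Works actually gets $2

# Each donor credits themselves with their own dollar plus the matched dollar.
assessed_impact = {name: 2 for name in donations}
total_assessed = sum(assessed_impact.values())      # $4 of claimed impact

print(total_received, total_assessed)               # 2 4: credit exceeds funds raised
```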

If Alice is responsible for $2 of donations, then she has to reason that she’s overridden Bob’s agency, and that Bob isn’t responsible for his own action. If Bob agrees that he gets zero credit, then there’s no problem. But if Bob reasons symmetrically to Alice, then each of them can coherently think that they moved more than $1 only if they also believe that their agency has been eroded by the match agreement: that each has forfeited some share of their agency or optimization power by letting themselves be enticed by the other’s match offer.

I think this is what GiveWell means when discussing what it considers a non-illusory form of matching, “influence matching”:

Influence matching is something I think impact-maximizing donors ought to be concerned about. In the short run, influence matching makes it true that your $1 donation results in $2 donated to the charity in question. But it also means that you’ve let the matching funder influence your giving – perhaps pulling you away from the most impactful charity (in your judgment) to a less impactful one – just by the way they structured their gift. By giving, you are rewarding this behavior by the matching funder, and you may be encouraging them to take future unconditional gifts and turn them into conditional gifts, because of the ability to sway other donors.
Perhaps, rather than giving your $1 to the charity the matching funder is pushing, you should fight back by structuring your own influence matching – making a conditional commitment to the highest-impact charity you can find, in order to pull other dollars in toward it.

But is this just a nitpick by overly scrupulous moralists, or does it actually cause some harm?

Overassignment of credit obscures opportunity cost

I claim that the moral discomfort some, such as GiveWell, feel about matching donation fundraisers is related to an actual harm caused by dishonesty: it causes people motivated by it to make worse decisions. I’ll first lay out a simple model in which this kind of coordination looks good despite the overassignment of credit. Then I’ll explain how it can instead cause harm.

Coordinating to shift from consumption to giving

Let’s go back to the example of Alice and Bob. Alice cares about her personal consumption, and about Good Works, but not about Bob’s personal consumption. She’d rather use $1 to buy ice cream than give it to Good Works, but if she can thereby redirect $1 from Bob’s personal consumption to Good Works as well, she thinks it’s worth it. Bob’s preferences are the mirror image of Alice’s.

Each of them prefers the world where Good Works gets $2 to the world where they buy ice cream. But if neither thinks they can affect the other’s action, then they each prefer to buy ice cream rather than giving $1 to Good Works. Thus, when Alice offers and Bob accepts a match, they move into a world-state they both prefer. This is true regardless of how “moral credit” is assigned.
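
To make the payoff structure concrete, here is a minimal sketch with made-up utility numbers (the 1.0 value for one’s own ice cream and the 0.6 per dollar reaching Good Works are my own illustrative assumptions, not from the example); it just enumerates Alice’s payoffs under the four combinations of choices.

```python
# Hypothetical utilities: own ice cream is worth 1.0; each dollar reaching
# Good Works is worth 0.6 to each donor. These numbers are illustrative only.
ICE_CREAM, PER_DOLLAR = 1.0, 0.6

def alice_utility(alice_gives: bool, bob_gives: bool) -> float:
    dollars_to_charity = int(alice_gives) + int(bob_gives)
    return (0.0 if alice_gives else ICE_CREAM) + PER_DOLLAR * dollars_to_charity

for alice in (True, False):
    for bob in (True, False):
        print(f"Alice gives: {alice!s:5}  Bob gives: {bob!s:5}  "
              f"Alice's utility: {alice_utility(alice, bob):.1f}")

# Alice prefers (give, give) at 1.2 to (ice cream, ice cream) at 1.0, but for
# any fixed choice of Bob's she does better by buying ice cream -- so without
# a way to coordinate, both buy ice cream, the outcome they each like less.
```

Bob’s payoffs are the mirror image, so under these assumptions the match offer works as a coordination device: it moves them from (ice cream, ice cream) to (give, give), which both prefer.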

Harms from double-counting

I suspect that in practice donations trade off against other donations more often than they trade off against consumption. This can lead to real harms from double-counting impact.

Let’s consider two new strangers, Carl and Denise, who each have a fixed charity budget of $1. Carl and Denise are effective altruists, and want to maximize total utility with their charity budgets.

Charity A creates 3 utils per dollar, and charity B creates 2 utils per dollar. By default, Carl and Denise will each give to charity A, creating 6 utils.

Charity B approaches Carl with the idea that he make a match offer. Carl jumps at the opportunity to cause $2 to be given to charity B, creating 4 utils, one more than the 3 he’d have created before. Denise finds out about the match offer, and switches her donation to charity B on the same basis. But the total amount of money moved to charity B is not the “doubled” $2 + $2 = $4, but just $2, resulting in 4 utils. This is less than before!
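
A quick check of the arithmetic in this example:

```python
# Utils per dollar for the two charities in the Carl/Denise example.
UTILS_PER_DOLLAR = {"A": 3, "B": 2}

# Default: Carl and Denise each give their $1 budget to charity A.
default_utils = 2 * UTILS_PER_DOLLAR["A"]             # 6 utils

# With the match: Carl's $1 match plus Denise's $1 both go to charity B instead.
dollars_to_b = 1 + 1                                   # only $2 actually moves
match_utils = dollars_to_b * UTILS_PER_DOLLAR["B"]     # 4 utils

# Each of them privately credits themselves with moving $2 (4 utils), for a
# combined claimed impact of 8 utils -- but the world gets 4, down from 6.
print(default_utils, match_utils)                      # 6 4
```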

In general, the more Carl and Denise care about the same things, the more we should expect that the situation is like this, and not like the prior example with Alice and Bob.

Honest and open coordination

In the above toy example, the harm is directly caused by double-counting. I think this is a generalizable principle—strategies that get people excited about things by overassigning credit or underassigning costs will lead to well-intentioned donors misallocating their resources. So we should instead look for coordination mechanisms that work by clarifying, rather than obscuring, the incentives of the participants.

I’ll give two examples of how this might work:

  • Threshold coordination to fund projects that are only viable once some minimal funding threshold has been passed.

  • Giving pledges in which potential philanthropists match each other’s commitment to give a larger share of their wealth or income to charity than they otherwise might have done.

Threshold coordination

There’s a version of “matching” that doesn’t depend on something like overassigning credit. Let’s say there’s some program that only makes sense if $X gets spent on it, but your charity budget is $0.1X. You don’t really want to dump your money into a money pit for no reason, and it’s not super likely that your $0.1X makes the difference between funding and not funding the thing; but if you found nine other people like you, you’d totally go for it.

This is the Kickstarter model: no one pays unless there’s enough money pledged to produce the thing people want. This model only makes sense if there really are natural thresholds. One natural threshold for a charity would be the level of long-run funding below which the charity would have to shut down. I can also imagine using a Kickstarter-style campaign for special programs. If, after prioritizing appropriately, a charity doesn’t have enough money to fund project X, but suspects some donors might be especially excited about it, a conditional pledge campaign could make a lot of sense.
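
Here is a minimal sketch of how such a conditional-pledge (threshold) mechanism works; the campaign, donors, and dollar amounts are made up for illustration.

```python
# Kickstarter-style threshold coordination: pledges are only collected if the
# total reaches the level at which the project is viable.
def run_threshold_campaign(threshold: int, pledges: dict[str, int]) -> dict[str, int]:
    if sum(pledges.values()) >= threshold:
        return pledges      # threshold met: every pledge is collected
    return {}               # threshold missed: nobody pays anything

# A project that only makes sense at $1,000, and donors with $100 budgets.
ten_donors = {f"donor_{i}": 100 for i in range(10)}
print(run_threshold_campaign(1_000, ten_donors))     # all ten pledges collected

nine_donors = {f"donor_{i}": 100 for i in range(9)}
print(run_threshold_campaign(1_000, nine_donors))    # {} -- no one pays
```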

GiveWell discusses this as the other non-illusory form of matching:

Coordination matching. A charity needs to raise a specific amount for a specific purpose. A large funder (the “matcher”) is happy to contribute part of the amount needed as long as the specific purpose is achieved; therefore, the matcher makes the gift conditional on other gifts.

Thresholds can be helpful and motivating even without conditionality. In its 2015 winter fundraiser, MIRI described how aggressive its program would be at different levels of funding:

Target 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.
Target 2 — $450k: Maintaining MIRI’s growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.
Target 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors’ support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.
Target 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.

This seems like a pretty good thing for a charity to do regardless of whether it provides a coordination mechanism—creating motivation by revealing relevant information seems clearly good. I was more excited about spreading the word about that MIRI fundraiser than I have been about ones shortly before or after.

One might object that predictions are hard, so claimed thresholds for different programs can be misleading. I agree, and have personal experience with this as a CFAR donor. But I don’t think it’s dishonest to make mistaken predictions, especially if you indicate your uncertainty—and most especially if you follow up afterwards by checking what happened against what you predicted, and making a serious effort to calibrate your future predictions, taking past misalignment into account.

Giving pledges

In the first toy model, Alice and Bob successfully coordinated towards an outcome better aligned with their preferences. I don’t think it’s a coincidence that this example involved shifting money from consumption to giving. This makes the opportunity cost argument less relevant, because Bob’s next-best option is not valued very highly by Alice, and vice versa.

To some extent, the benefit of this coordination is obscured by linking it to a particular charity. The benefit is in Alice and Bob agreeing to allocate their resources in a more public-spirited way, not in Alice’s influence over which charity Bob gives to. I don’t see any particular reason to mix these considerations. Why not just coordinate about the first thing, and let each person use their own judgment about which charity is best?

In real life, this looks like the Giving What We Can pledge, in which participants make a public pledge to give 10% of their income to effective charities. (There’s also a time-bounded trial pledge.) This is explicitly about shifting money from consumption to giving. If you wanted to use matching mechanisms, you might ask whether there’s anyone who’s on the fence about taking the pledge, but would do it if that would move someone else to do so. Then pair them up, and have them take the pledge together.

Some other related pledges:

  • The Giving Pledge, for billionaires pledging to give away at least half of their wealth.

  • Founders Pledge, in which startup founders pledge to give 2% of the proceeds from selling their startup to charity.

  • Raising for Effective Giving, in which people (the focus is on professional poker players) pledge 2% of their income to charity; the organization also promotes effective charities.

Best practices tend towards dishonesty

I think the problem with—and prevalence of—matching donations is part of a broader phenomenon. When the activity of extracting money from donors is abstracted away from the other core activities of an organization, like assessing and running programs, best practices tend towards distorting the truth. You end up with money-extraction strategies that work regardless of what the organization is doing, and those aren’t going to be honest strategies.

The messaging advice that works for any organization is, necessarily, advice that works for organizations with terrible programs. Making it easy to evaluate your programs on the merits seems unlikely to satisfy this requirement. So standard practice will be to obscure your impact.

But donors still want a better deal. So what do you do? The only “better deal” left is one that anyone could offer, a fully generic one that doesn’t depend on the details of your program. Offering “leverage”—a 2-for-1 sale—is a perfect example of this. Both sides of a matching drive get to think that they’re buying the other side’s participation “for free”. Of course, they’re not. Strategies that appear to buy influence “for free” only appear to work by hiding the ball.

Risk arbitrage for evil and for good

Genuinely impact-oriented organizations have the opportunity to implement a different class of strategies that compete less directly with the typical charity. In particular, if you’re uncertain how effective your program is, you mainly care about raising money in the scenarios where your program is effective. This means that some fundraising and communication strategies that increase your organization’s financial risk carry much less downside risk from an impact perspective. In particular, in the scenarios where your program isn’t effective, you shouldn’t treat failing to raise funds as a cost at all.

I’ll explain risk arbitrage in the normal case of finance, where it’s a best practice to cheat clients. Then I’ll explain how it applies to philanthropy, and can be used for good.

Hedge fund roulette

Matthew Yglesias uses roulette as a metaphor for hedge funds’ strategies, in order to explain why hedge fund managers have an incentive to pursue risk even if it doesn’t benefit their clients:

Good news for investors who like to lose all their money, “John Meriwether, the hedge fund manager and arbitrageur behind Long-Term Capital Management, is in the process of setting up a new hedge fund — his third.” What’s that, you ask, didn’t his first fund lose all its money? Why, yes. And didn’t the second fund fold because it lost a ton of money? Yes, quite so. So how will this new one be different? It won’t! It’s “expected to use the same strategy as both LTCM and JWM to make money: so-called relative value arbitrage, a quantitative investment strategy Mr Meriwether pioneered when he led the hugely successful bond arbitrage group at Salomon Brothers in the 1980s.”
The way this works is that you identify arbitrage opportunities such that you make trades you’re overwhelmingly likely to make money on. But those opportunities only exist because the opportunities are very small. So to make them worth pursuing, you need to lever up with huge amounts of debt. Which means that on the rare moments when the trades do go bad, everything falls apart: “The strategy typically has a high ‘blow-up’ risk because of the large amounts of leverage it uses to profit from often tiny pricing anomalies.”
As a friend puts it, this strategy is “literally the equivalent of putting a chip on 35 of the 36 roulette numbers and hoping for no zero/36.” But you’re doing it with borrowed money. I’m not a huge believer in human rationality, so I totally understand how this scam worked once. That he was able to get a second fund off the ground is pretty amazing.

Here’s how the “roulette” strategy looks to the hedge fund manager: At the beginning of each period, you take all your assets under management and distribute them evenly on the roulette numbers 1-35. Each of these numbers has a 1/37 chance of coming up. If the roulette ball lands on one of your numbers, you get 35 times the amount of money you put on that number, plus your initial bet back. If you’re managing a $35 fund, you’d have $1 on each number, so you’d end up with $36, a 1/35, or about 2.9%, gain. On the other hand, if the ball lands on 0 or 36, you lose all the money. At the end of each period, you get paid 15% of the return on your fund. That means that if you win, you get paid 15% / 35 = 0.43% of assets under management. If you lose, you get nothing. So each period, your expected payout is 15% / 35 * (35 / 37) = 0.4% of assets under management.

Here’s how the “roulette” strategy looks to the client: In “winning” periods, your holdings appreciate by (100% − 15%) / 35 = 2.43%, after accounting for the hedge fund manager’s fee. In “losing” periods, your holdings decline by 100%. So your expected return is 85% / 35 * 35 / 37 − 100% * 2 / 37 = −115% / 37 = −3.1%. Not a good deal!

It’s a best practice for hedge fund managers like John Meriwether to play roulette, because they make money when the client wins, but don’t lose money when the client loses.
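
Here is the same arithmetic in a few lines of Python, for anyone who wants to check it:

```python
# The roulette model above: 37 pockets, 35 of which pay off for the fund.
P_WIN = 35 / 37
FEE = 0.15                      # the manager keeps 15% of any positive return
GROSS_WIN_RETURN = 1 / 35       # $35 staked returns $36 on a win

# Manager's expected payout per period, as a fraction of assets under management.
manager_ev = P_WIN * FEE * GROSS_WIN_RETURN                 # about +0.4%

# Client's expected return per period: a small after-fee gain when the ball
# lands on 1-35, and a total loss when it lands on 0 or 36.
client_win_return = (1 - FEE) * GROSS_WIN_RETURN            # about +2.43%
client_ev = P_WIN * client_win_return - (1 - P_WIN) * 1.0   # about -3.1%

print(f"manager: {manager_ev:+.2%}   client: {client_ev:+.2%}")
# manager: +0.41%   client: -3.11%
```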

Risk arbitrage for good

What’s the analogous altruistic risk-arbitrage strategy? If I’m running a charity because I care about having a positive impact on the world, then I only care about raising more funds if my program is effective at improving things. If my fundraising strategy has a risk of letting donors correctly conclude that my program doesn’t work, and consequently decline to fund it, then I don’t count that as a cost.

To most charities, this seems like an increase in risk. But from an altruistic perspective, you’re reallocating funding from the possible worlds where your charity doesn’t work, to the worlds where it does, and this is an unambiguous gain.

I’m going to start by working through a simple quantitative illustration of this principle. (Skip it if the principle already seems trivially true.) Then I’ll give a few examples of how someone might implement this kind of strategy.

Value of revealing information

Let’s say that after evaluating your program as well as you can, you think it has a 50% chance of not working, and a 50% chance of saving a life for each $1,000 of funding. So your expected cost per life saved is $2,000. There’s a philanthropist with a million dollars considering your program. Their next best option has a cost per life saved of $4,000.

Because the philanthropist knows that they have imperfect information and you might be misleading them, they discount your effectiveness estimates by another 50%, so that from their perspective both programs look equally good. They split their donation 50-50. By your estimate, they have saved $500,000 / $2,000 + $500,000 / $4,000 = 375 lives.

If you reveal more detailed information about your program, this could cause them to reallocate money to your program, if your case is persuasive. They could also reallocate money to the other program, if they correctly spot problems in your plan that you’d missed. To keep things simple, let’s say that if you reveal information about your program, there’s a 75% chance that they correctly judge which program works and reallocate all their money to that one, and a 25% chance that they reallocate all their money to the worse program.

You already think there’s a 50% chance your program doesn’t work at all. In that scenario, if you reveal information, there’s a 75% chance they fund the other organization fully, saving 250 lives, and a 25% chance that they decide to fund yours, saving no lives.

Then there’s a 50% chance your program saves lives for $1,000. In that scenario, if you reveal information, there’s a 75% chance that they fund your organization fully, saving 1,000 lives, and a 25% chance that they reallocate funds to the other organization, saving 250 lives.

Thus, the expected number of lives saved, if you reveal the information, is 50% * (75% * 250 + 25% * 0) + 50% * (75% * 1,000 + 25% * 250) = 500. This is a substantial improvement!

You did not increase the expected funding level of your organization. Instead of a 100% chance of 50% funding, you got a 50% * 75% + 50% * 25% = 50% chance of 100% funding. But what you did was reallocate your chance of getting funded, from the possible worlds where your program doesn’t work, into the possible worlds where it does.
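
For anyone who wants to check the arithmetic, here is the same calculation as a short script:

```python
# Expected lives saved in the toy model above, with and without revealing information.
BUDGET = 1_000_000
OTHER_COST_PER_LIFE = 4_000          # the philanthropist's next best option
P_WORKS = 0.5                        # chance your program works at $1,000 per life
YOUR_COST_IF_WORKS = 1_000
P_CORRECT = 0.75                     # chance the donor judges correctly once informed

def lives_saved(to_you: float, to_other: float, yours_works: bool) -> float:
    saved = to_other / OTHER_COST_PER_LIFE
    if yours_works:
        saved += to_you / YOUR_COST_IF_WORKS
    return saved

# Baseline: the donor discounts your estimate and splits the budget 50-50.
baseline = (P_WORKS * lives_saved(BUDGET / 2, BUDGET / 2, True)
            + (1 - P_WORKS) * lives_saved(BUDGET / 2, BUDGET / 2, False))

# Revealing information: the donor fully funds whichever program they judge
# better, and judges correctly with probability 0.75.
reveal = (P_WORKS * (P_CORRECT * lives_saved(BUDGET, 0, True)
                     + (1 - P_CORRECT) * lives_saved(0, BUDGET, True))
          + (1 - P_WORKS) * (P_CORRECT * lives_saved(0, BUDGET, False)
                             + (1 - P_CORRECT) * lives_saved(BUDGET, 0, False)))

print(baseline, reveal)   # 375.0 500.0
```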

Ways to reveal information

Instead of trying to optimize for appeal subject to honesty constraints, you might try writing a funding pitch to maximize the chance that someone already trying to fund something like your organization would recognize it as the organization they’re looking for. This pays off disproportionately when donors agree with your judgment, which is some evidence that your judgment is correct.

Relatedly, you might argue for your plans, exposing your premises clearly enough that, if you’re making a mistake, donors can spot it easily. This is likely to be more persuasive in the scenario where potential donors don’t find mistakes or evidence of poor performance or prospects, at the price of being less persuasive in the scenario where they do. It also opens you up to the upside risk of having someone correct an error in the planning stage, instead of having to try the thing before finding out it doesn’t work.

GiveWell is an excellent example of an organization that has written publicly about the reasons for its actions. I gave an example above. (It also promotes charities that are willing to make themselves easier to evaluate.) I’m also in the middle of publishing a series of blog posts critiquing GiveWell based on the extensive information they’ve made publicly available. GiveWell even has a mistakes page, specifically highlighting its failings. This is the sort of thing you do when you want to succeed only in the worlds where you’re doing the right thing.

(Disclosure: I worked for GiveWell in the past. I don’t anymore and don’t expect to in the future, but am still on good terms with current and former GiveWell staff.)

The point of this argument isn’t so much to raise specific suggestions. Mainly, I’m hoping to promote the broader hypothesis to your attention: that this is a class of strategy that’s not a standard “best practice,” but works if you care about expected impact rather than conventional “success”.

I also hope this is an illustrative example of a broader principle: that things like honesty really are the best policy, that non-universalizable behavior really does tend to have nasty unintended consequences, that you should have a strong bias toward doing the right thing.

Utilitarian considerations shouldn’t be weighed against deontological scruples as though they were competing interests. While the articulable benefits of rule-breaking scale with the importance of the action, the unintended drawbacks are likely to similarly scale. We should override our moral inhibitions, not because it’s really important this time—not because the benefits are unusually large—but when we have some specific reason to believe that the costs are unusually small.

(Cross-posted from my personal blog.)