A Red-Team Against the Impact of Small Donations

In a comment on Benjamin Todd’s article in favor of small donors, NunoSempere writes:

This article is kind of too “feel good” for my tastes. I’d also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.

I am naturally an angsty person, and I don’t carry much reputational risk, so this seemed like a natural fit.

I agree with NunoSempere that Benjamin’s epistemics might be suffering from the nobility of his message. It’s a feel-good encouragement to give, complete with a sympathetic photo of a very poor person who might benefit from your generosity. Precisely because that message is so good and important, it lends itself to a different style of writing and thinking than “let’s try very hard to figure out what’s true.”

Additionally, I see Benjamin’s post as a reaction to some popular myths. This is great, but we shouldn’t mistake “some arguments against X are wrong” for “X is correct”.

So as not to bury the lede: I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.

Funnily enough, although this is framed as a “red-team” post, I think Benjamin mostly agrees with that advice. You can take this as evidence that the advice is robust to worldview diversification, or as evidence that I’m really bad at red-teaming and falling prey to justification drift.

In terms of epistemic status: I take my own arguments here seriously, but I don’t see them as definitive. This post is meant to counterbalance Benjamin’s, so you should read his post first, or at least read it afterwards as a counterbalance to this one.

1. Our default view should be that high-impact funding capacity is already filled.

Consider Benjamin’s explanation for why donating to LTFF is so valuable:

I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.

I absolutely agree that those issues are very neglected, but only among the general population. They’re not at all neglected within EA. Specifically, the question we should be asking isn’t “do people care enough about this”, but “how far will my marginal dollar go?”

To answer that latter question, it’s not enough to highlight the importance of the issue; you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,

  2. Such that more funding would enable them to do more or better work,

  3. And this funding can’t be met by existing large EA philanthropists.

It’s not clear to me that any of these points are true. They might be, but Benjamin doesn’t take the time to argue for them very rigorously. Lacking strong evidence, my default assumption is that the funding capacity of extremely high-impact organizations well aligned with EA ideology will already be filled by large donors.

Benjamin does admirably clarify that there are specific programs he has in mind:

there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic.

At face value, CEPI seems great. But at the meta-level, I still have to ask: if CEPI is a good use of funds, why doesn’t OpenPhil just fund it?

In general, my default view for any EA cause is always going to be:

  • If this isn’t funded by OpenPhil, why should I think it’s a good idea?

  • If this is funded by OpenPhil, why should I contribute more money?

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy’s Law states, “no matter who you are, most of the smartest people work for someone else.”

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I personally had an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there’s an existing popular venue for crowdsourcing ideas, I’m even less willing to believe that large EA foundations have simply missed a good opportunity.

Benjamin might argue that OpenPhil is just taking its time to evaluate CEPI, and we should fill its capacity with small donations in the meantime. That might be true, but it would still greatly lower the expected impact of giving to CEPI. In this view, you’re accelerating CEPI’s agenda by however long it takes OpenPhil to evaluate them, but not actually funding work that wouldn’t happen otherwise. And of course, if it’s taking OpenPhil time to evaluate CEPI, I don’t feel that confident that my five minutes of thinking about it should be decisive anyway.

When I say “our default view”, I don’t mean that this is the only valid perspective. I mean it’s a good place to start, and we should then think about specific cases where it might not be true.

2. Donor coordination is difficult, especially with other donors thinking seriously about donor coordination.

Assuming that EA is a tightly knit, high-trust environment, there seems to be a way to avoid this whole debate. Don’t try too hard to reason from first principles; just ask the relevant parties. Does OpenPhil think they’re filling the available capacity? Do charities feel funding-constrained despite support from large foundations?

The problem is that under Philanthropic Coordination Theory, there are altruistic reasons to lie, or at least not to be entirely transparent. As GiveWell writes in its primer on the subject:

Alice and Bob are both considering supporting a charity whose room for more funding is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob’s plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap.

Large foundations are Bob in this situation, and small donors are Alice. Assuming GiveWell wants to maintain the incentive for small donors to give, they have to hide their plans.
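
To see the incentive problem concretely, here’s a toy version of that game. The payoff numbers are invented for illustration; GiveWell’s primer doesn’t specify any:

```python
# Toy model of the Alice/Bob funding-gap game from GiveWell's primer.
# All numbers are illustrative assumptions, not from the primer itself.

GAP = 100  # the charity's room for more funding, in $

def alice_payoff(alice_gives: int, bob_gives: int) -> float:
    """Alice values every dollar of the gap that gets filled, but her
    next-best use of money is assumed to be worth 0.9 on the dollar."""
    filled = min(alice_gives + bob_gives, GAP)
    next_best = 0.9 * (GAP - alice_gives)  # remaining budget, spent elsewhere
    return filled + next_best

# If Alice knows Bob will fill the gap, giving nothing dominates:
print(alice_payoff(alice_gives=0, bob_gives=GAP))    # 190.0
print(alice_payoff(alice_gives=GAP, bob_gives=GAP))  # 100.0
```

Which is exactly why Bob, the large foundation, has a reason not to announce his plans.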

But why would GiveWell even want to maintain the incentive? Why not just fill the entire capacity themselves? One simple answer is that GiveWell wants to keep more money for other causes. A better answer is that they don’t want to breed dependence on a single large donor. As OpenPhil writes:

We typically avoid situations in which we provide >50% of an organization’s funding, so as to avoid creating a situation in which an organization’s total funding is “fragile” as a result of being overly dependent on us.

The optimistic upshot of this comment is that small donors are essentially matched 1:1. If GiveWell has already provided 50% of AMF’s funding, then by giving AMF another $100, you “unlock” another $100 that GiveWell can provide without exceeding their threshold.
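
A minimal sketch of that arithmetic, assuming the cap works as a hard ceiling of 50% of total funding (my reading, not necessarily OpenPhil’s exact policy):

```python
def givewell_headroom(outside_funding: float) -> float:
    """Max GiveWell can give while staying at or below 50% of total funding.
    If G <= 0.5 * (G + outside), then G <= outside."""
    return outside_funding

before = givewell_headroom(1_000_000)       # headroom against $1M of outside money
after = givewell_headroom(1_000_000 + 100)  # your $100 raises the ceiling
print(after - before)  # 100 -- each outside dollar "unlocks" one more
```

Under that reading, the 1:1 match falls straight out of the algebra.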

But the pessimistic upshot is that, if charities have limited capacity, that capacity will be filled either by GiveWell or by other small donors anyway. In the extreme version of this view, a donation to AMF doesn’t really buy more bednets; it’s essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.

Is that so bad? Isn’t donating to GiveWell good? That’s the argument I’ll address in the next section. [1]

3. Benjamin’s views on funging don’t make sense.

Okay, so maybe a donation to AMF is really a donation to GiveWell, but isn’t that fine? After all, it just frees GiveWell to use the money on the next most valuable cause, which is still pretty good.

This seems to be the view Benjamin holds. As he writes, if you donate $1,000 to an OpenPhil-backed charity, “then that means that Open Philanthropy has an additional $1,000 which they can grant somewhere else within their longtermist worldview bucket.” The upshot is that the counterfactual impact of your donation is equivalent to the impact of OpenPhil’s next-best cause, which is probably a bit lower, but still really good.

The nuances here depend a bit on your model of how OpenPhil operates. There seem to be a few reasonable views:

  1. OpenPhil will fund the most impactful things up to $Y/year.

  2. OpenPhil will fund anything with an expected cost-effectiveness above X QALYs/$.

  3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.

In the first view, Benjamin is right. OpenPhil’s funding is freed up, and they can give it to something else. But I don’t really believe this view. By Benjamin’s own estimate, there’s around $46 billion committed to EA causes, and he goes on to say: “I estimate the community is only donating about 1% of available capital per year right now, which seems too low, even for a relatively patient philanthropist.” If large donors were already spending a fixed annual budget on the best available opportunities, their own spending rate wouldn’t look “too low” to them; the binding constraint is evidently something other than a budget cap.

What about the second view? In that case, you’re not freeing up any money, since OpenPhil just stops donating once the available capacity is filled; everything above the cost-effectiveness bar was getting funded either way, so your donation doesn’t redirect money to a next-best cause.

The third view seems most plausible to me, and is equally pessimistic. As Benjamin writes further on:

available funding has grown pretty quickly, and the amount of grantmaking capacity and research has not yet caught up. I expect large donors to start deploying a lot more funds over the coming years. This might be starting with the recent increase in funding for GiveWell.

But what exactly is “grantmaking capacity and research”? It would make sense that GiveWell has not had time to evaluate every possible cause and institution, and so is missing some opportunities. It would not make sense that GiveWell is unable to give more money to AMF because of a research bottleneck.

That implies that you might be justified in giving to a cause that OpenPhil simply hasn’t noticed (note the concerns in section 1), but not justified in giving more money to a cause OpenPhil already supports. If Benjamin’s view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don’t “free up” more funding in an impact-relevant way.
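
To make the funging logic concrete, here’s a toy model of what a $1,000 donation to an OpenPhil-backed charity counterfactually buys under each of the three views above. The impact-per-dollar figure is a pure assumption for illustration, not anyone’s estimate:

```python
# Counterfactual impact of a small donation under three models of OpenPhil.
# The impact-per-dollar figure is an illustrative assumption.

DONATION = 1_000
NEXT_BEST = 8.0  # assumed impact per $ of OpenPhil's next-best opportunity

def view_1_budget_cap() -> float:
    # OpenPhil spends a fixed $Y/year; your $1,000 frees up $1,000 of
    # their budget, which flows to the next-best cause.
    return DONATION * NEXT_BEST

def view_2_threshold() -> float:
    # OpenPhil funds everything above a cost-effectiveness bar, so the
    # charity's capacity was getting filled either way; your donation
    # just reduces OpenPhil's total spending.
    return 0.0

def view_3_evaluation_capacity() -> float:
    # Evaluation time, not funding, is the binding constraint, so a
    # marginal dollar to an already-evaluated cause frees up nothing.
    return 0.0

for view in (view_1_budget_cap, view_2_threshold, view_3_evaluation_capacity):
    print(view.__name__, view())
```

Only the first view vindicates the “freed-up funding” story, and it’s the view that the 1%-per-year spending figure undermines.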

4. Practical recommendations

Where does this all leave us? Surprisingly, more or less back where we started. Benjamin already noted in his post that “there’s an opportunity to do even more good than earning to give”.

First of all, think hard about causes that are high impact but that large EA foundations are unable to fund. As Scott Alexander wrote:

It’s not exactly true that EA “no longer needs more money”—there are still some edge cases where it’s helpful; a very lossy summary might be “things it would be too weird and awkward to ask Moskovitz + Tuna to spend money on”.

This is not exhaustive, but a short list of large-foundation limitations includes:

  • PR risk: It’s not worth funding a sperm bank for Nobel Prize winners if it might later get you labeled a racist. See also the Copenhagen Interpretation of Ethics: it might not be worth funding a highly imperfect intervention, even if it’s net good.

    • More generally, it might not be worth funding an intervention that has a 90% chance of going well, but a 10% chance of going really poorly.

  • Small grants: When he launched Emergent Ventures, Tyler Cowen explained that “the high fixed costs of processing any request discriminate against very small proposals”. E.g., it’s not even worth OpenPhil’s time to consider, evaluate and dispense a $500 grant.
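
A quick back-of-the-envelope on why those fixed costs bite; the cost figures here are my assumptions, not OpenPhil’s actual numbers:

```python
# When is a grant worth a foundation's processing time?
# The cost figures are assumptions for illustration.

EVAL_HOURS = 5     # assumed staff time to consider, evaluate, and dispense
HOURLY_COST = 100  # assumed fully-loaded cost of grantmaker time, $/hour
OVERHEAD = EVAL_HOURS * HOURLY_COST  # $500 of fixed cost per request

def worth_processing(grant_size: float) -> bool:
    # Crude rule: the grant should at least exceed its own processing cost.
    return grant_size > OVERHEAD

print(worth_processing(500))      # False -- overhead eats the whole grant
print(worth_processing(100_000))  # True
```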

To be clear, I don’t think these are particular failings of OpenPhil or EA Funds. Actually, I think EA foundations do better on these axes than pretty much every other foundation. But there are still opportunities for small individual donors to exploit.

More positively, what are the opportunities I think you should pursue?

  • Fund individuals: As Dan Luu writes, some work depends entirely on who’s doing it. If you know a specific person whose work you think is likely to be high-impact, and if some of that knowledge is not institutionally legible, you should consider just funding them yourself.

  • Fund weird things: A decent litmus test is “would it be really embarrassing for my parents, friends or employer to find out about this?” and if the answer is yes, more strongly consider making the grant.

    • Of course, the weird things are still subject to more conventional cost-effectiveness estimates.

  • Fund yourself: Instead of earning-to-give, earn-to-retire, and then do direct work yourself with the freedom to ignore what’s “fundable” or laudable.

    • You might worry that “unfundable” work is unlikely to be high-impact, but again, you should think specifically about what work large foundations can’t fund.

Outside of funding, try to:

  • Be more ambitious: There’s some tradeoff curve between cost-effectiveness and scale. When EA was more funding constrained, a $1M grant with 10x ROI looked better than a $1B grant with 5x ROI, but now the reverse is true (see the arithmetic sketch after this list).

  • Be more entrepreneurial: Similarly, there’s a tradeoff between making marginal improvements to a high-impact org and starting a new org with potentially lower impact. When EA was more talent constrained, working at existing EA orgs was higher impact. A lot of people would argue that it’s still very high impact, but relatively speaking, the value of starting a brand-new org is higher.

    • This doesn’t mean starting Generic Longtermist Research Firm X, it means trying to do work outside the scope of current organizations.
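
Here’s the arithmetic behind the ambition claim, using the post’s own illustrative grant numbers:

```python
# The ambition tradeoff, using the illustrative numbers above.
small_grant, small_roi = 1_000_000, 10  # $1M at 10x
big_grant, big_roi = 1_000_000_000, 5   # $1B at 5x

# Funding-constrained regime: per-dollar returns decide, and 10x beats 5x.
print(small_roi > big_roi)      # True

# Funding-abundant regime: total impact decides.
print(small_grant * small_roi)  # 10_000_000     ($10M of impact)
print(big_grant * big_roi)      # 5_000_000_000  ($5B of impact)
```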

But as I mentioned at the outset, that’s all fairly conventional, and advice that Benjamin would probably agree with. So given that my views differ, where are the really interesting recommendations?

The answer is that I believe in something I’ll call “high-variance angel philanthropy”. But it’s a tricky idea, so I’ll leave it for another post.


  1. Is this whole section an infohazard? If thinking too hard about Philanthropic Coordination Theory risks leading to weird adversarial game theory, isn’t it better for us to be a little naive? OpenPhil and GiveWell have already discussed it, so I don’t personally feel bad about “spilling the beans”. In any case, OpenPhil’s report details a number of open questions here, and I think the benefits of discussing solutions publicly outweigh the harms of increasing awareness. More importantly, I just don’t think this view is hard to come up with on your own. I would rather make it public, and thus publicly refutable, than risk a situation where a bunch of edgelords privately think donations are useless due to crowding out but don’t have a forum for subjecting those views to public scrutiny.