Clarifications on diminishing returns and risk aversion in giving

In April, when we released my interview with SBF, I attempted to quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like a description of my views than of his.

Those few paragraphs have received substantial attention because Matt Yglesias pointed out where the reasoning could go wrong, and wasn’t impressed, taking me to have presented an analytic error as “sound EA doctrine”.

So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:

  • Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.

  • Returns become sublinear more quickly when you’re working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.

  • This sublinearity becomes especially pronounced when you’re considering giving on the scale of billions rather than millions of dollars.

  • There are other major practical considerations that point in favour of risk-aversion as well.

(SBF appears to think the effects above are smaller than Matt or I do, but it’s hard to know exactly what he believes, so I’ll set that aside here.)

———

The offending paragraphs in the original post were:

“If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.

But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.

This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.”

The point from the conversation that I wanted to highlight — and which is clearly true — is that for an individual who is going to spend the money on themselves, the fact that one quickly runs out of useful ways to spend the money to improve one’s well-being makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.

On the other hand, if you plan to spend the money to help others, such as by distributing it to the world’s poorest people, then the good done by disbursing the first dollar and the good done by disbursing the billionth are much closer together than if you were spending them on yourself. That greatly strengthens the case for accepting a risk of receiving nothing in exchange for a larger amount on average, relative to the personal case.

But: the impact of the first dollar and the billionth dollar aren’t identical, and in fact could be very different, so calling the approach ‘totally rational’ was somewhere between an oversimplification and an error.
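For readers who like to see the arithmetic, here is a minimal sketch of the personal case. I’ve used log utility purely as a stand-in for diminishing personal returns to money; the utility function and figures are my illustrative assumptions, not anything from the interview:

```python
import math

# Illustrative only: log utility as a crude stand-in for how quickly
# extra personal spending stops improving one's well-being.
def log_utility(dollars):
    return math.log(dollars + 1)  # +1 so that utility(0) = 0

certain = 1_000_000
gamble_payout = 15_000_000
p = 0.10

# Expected *dollars* favour the gamble...
ev_certain = certain            # $1.0m
ev_gamble = p * gamble_payout   # $1.5m

# ...but expected *utility* favours the sure thing, because 90% of
# the time the gamble leaves you with nothing.
eu_certain = log_utility(certain)
eu_gamble = p * log_utility(gamble_payout) + (1 - p) * log_utility(0)

print(ev_gamble > ev_certain)   # the gamble has higher expected dollars
print(eu_certain > eu_gamble)   # the sure thing has higher expected utility
```

With a less sharply concave utility function the gap narrows, which is the crux of the whole disagreement: how concave is the returns curve really?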

———

Before we get to that though, we should flag a practical consideration that is as important as, or maybe more important than, getting the shape of the returns curve precisely right.

As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.

While in the hypothetical your downside is meant to be capped at zero, in reality, ‘swinging for the fences’ with all your existing funds can mean going far below zero in impact.

The fact that many risky actions can result in an outcome far worse than what would have happened if you had simply done nothing is a reason for much additional caution, one that we wrote about in a 2018 piece titled ‘Ways people trying to do good accidentally make things worse, and how to avoid them’. I regret that I failed to ask any questions that highlighted this critical point in the interview.

(This post won’t address the many other serious issues raised by the risk-taking at FTX, which, according to news reports, have gone far beyond accepting the possibility of not earning much profit, and which can’t be done justice here.

If those reports are accurate, the risk-taking at FTX was not just a coin flip that came up tails: it was immoral and perhaps criminal in itself, owing to the misappropriation of other people’s money for risky investments. This has resulted in incalculable harm to customers, to investors, and to trust in broader society, and has set back all the causes some of FTX’s staff said they wanted to help.)

———

To return to the question of declining returns and risk aversion — just as one slice of pizza is delicious but a tenth may not be enjoyable to eat at all, people trying to use philanthropy to do good do face ‘declining marginal returns’ as they incrementally try to give away more and more money.

How fast that happens is a difficult empirical question.

But if one is funding the fairly niche and neglected problems SBF said he cared most about, it’s fair to say that any foundation would find it difficult to disburse $15 billion to projects it was incredibly excited about.

That’s because a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold, depending on how broad a net they tried to cast. That ‘glut’ of funding would result in some more mediocre projects getting the green light.

Assuming someone funds projects starting with the ones they believe will have the most impact per dollar, and works down from there, the last grant made from such a large pot of money will be clearly worse, probably with less than half the expected social impact per dollar of the first.
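One way to make this concrete is a toy model in which the total impact of giving away x dollars is x to the power a, for some exponent a below 1. The exponents below are purely illustrative assumptions, not estimates of any real funding landscape:

```python
# Toy model of sublinear returns to philanthropy: total impact of
# giving x dollars is x ** a, with a < 1. The values of a are
# illustrative assumptions only.
def expected_impact(certain, gamble, p, a):
    """Return (impact of the sure amount, expected impact of the gamble)."""
    return certain ** a, p * (gamble ** a)

certain, gamble, p = 1e9, 15e9, 0.10

for a in (0.80, 0.85, 0.90):
    sure, risky = expected_impact(certain, gamble, p, a)
    better = "certain $1b" if sure > risky else "10% chance of $15b"
    print(f"a = {a}: prefer the {better}")
```

Small changes in the assumed curvature flip which option looks better (around a = 0.85 it is close to a tie in this model), which is why reasonable people can land on either side of this question.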

So between $1 billion with certainty versus a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.[1]

Notice that by contrast, if I were weighing up a guaranteed $1 million against a 10% chance of $15 million, the situation would be very different. For the sectors I’d be most likely to want to fund, $15 million from me, spread out over a period of years, would represent less than a 1% increase, and so wouldn’t overwhelm their capacity to sensibly grow, leading the marginal returns to decline more slowly. So in that case, setting aside my personal interests, I would opt for the 10% chance of $15 million.
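The same style of toy model illustrates the small-donor case: if the field already has a large funding base, an extra $15 million sits on a nearly flat part of the returns curve. The funding base, exponent, and figures below are all illustrative assumptions:

```python
# Toy comparison for a small donor: the field already receives a
# large funding base F, and total impact of total funding t is
# t ** a. Both F and a are made-up illustrative values.
def marginal_impact(x, base=2e9, a=0.85):
    """Extra impact from adding x dollars on top of the existing base."""
    return (base + x) ** a - base ** a

certain = marginal_impact(1e6)           # a sure $1m
gamble = 0.10 * marginal_impact(15e6)    # a 10% chance of $15m

# At this scale the curve is nearly flat, so the gamble's 1.5x
# expected dollars translate into roughly 1.5x expected impact.
print(gamble / certain)
```

In this model the gamble delivers close to 50% more expected impact than the sure thing, which is why the small-donor verdict flips relative to the billion-dollar case.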

———

Another massive real-world consideration we haven’t mentioned yet which pushes in favour of risk aversion is the following: how much you are in a position to donate is likely to be strongly correlated with how much other donors are able to donate.

In practice, risk-taking around philanthropy mostly centres on investing in businesses. But businesses tend to do well and poorly together, in cycles, depending on broad economic conditions. So if your bets don’t pay off — say, because of a recession — there’s a good chance other donors will have less to give as well. As a result, you can’t just take the existing giving of other donors for granted.

This is one reason for even small donors to have a reasonable degree of risk aversion. If they all adopt a risk-neutral strategy they may all get hammered at once and have to massively reduce their giving simultaneously, adding up to a big negative impact in aggregate.
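A minimal simulation, with made-up numbers throughout, shows the effect: donors whose investment returns share a common market factor produce a far more volatile aggregate funding pool than independent donors would, even though average giving is the same:

```python
import random
import statistics

# Toy simulation (all parameters are illustrative): each of 100 donors
# gives an amount tied to their investment returns. In the 'correlated'
# case, returns share a common market factor, so bad years hit every
# donor at once.
def total_giving_by_year(correlated, n_donors=100, n_years=2000, seed=0):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_years):
        market = rng.gauss(0, 1)
        year_total = 0.0
        for _ in range(n_donors):
            common = market if correlated else rng.gauss(0, 1)
            # Each donor: baseline giving of 1.0, with half the risk
            # shared across donors and half idiosyncratic.
            year_total += 1.0 + 0.5 * common + 0.5 * rng.gauss(0, 1)
        totals.append(year_total)
    return totals

independent = total_giving_by_year(correlated=False)
correlated = total_giving_by_year(correlated=True)

# Mean giving is about the same either way, but the funding pool
# swings far more from year to year when returns move together.
ratio = statistics.stdev(correlated) / statistics.stdev(independent)
print(f"total giving is roughly {ratio:.1f}x more volatile when correlated")
```

The aggregate pool, not any one donor’s portfolio, is what grantees depend on, which is why correlation pushes even small donors towards some risk aversion.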

This is a huge can of worms that has been written about by Christiano and Tomasik as far back as 2013, and more recently by my colleague Benjamin Todd.

———

This post has only scratched the surface of the analysis one could do on this question, and attempted to show how tricky it can be. For instance, we haven’t even considered:

  • Uncertainty about how many other donors might join or drop out of funding similar work in future.

  • Indirect impacts from people funding similar work on adjacent projects.

  • Uncertainty about which problems you’ll want to fund solutions to in future.

I regret having swept those and other complications under the rug for the sake of simplicity. Doing so may well have confused some listeners to the show, and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, an approach that would in fact be severely misguided.

(If you’d like to hear my thoughts on FTX more generally as opposed to this technical question you can listen to some comments I put out on The 80,000 Hours Podcast feed.)