Thanks for the great post (and for your great writing in general)! It mostly makes a ton of sense to me, though I am a bit confused on this point:
“If Benjamin’s view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don’t “free up” more funding in an impact-relevant way.”
EA foundations might be research bottlenecked now, but funding bottlenecked in the future. So if I donate $1 that displaces a donation that OpenPhil would have made, then OpenPhil has $1 more to donate to an effective cause in the future, when funding, not research, is the bottleneck.
So essentially, a $1 donation by me now is an exercise in patient philanthropy, with OpenPhil acting as the intermediary.
Does this fit within your framework, or is there something I’m missing?
I don’t think this “changes the answer” as far as your recommendation goes—we should fund more individuals, selves, and weirdos.
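To make the patient-philanthropy framing concrete, here's a minimal sketch of the comparison. The growth rate, deployment horizon, and future-efficacy multiplier are all hypothetical assumptions for illustration, not numbers from the post:

```python
# Hypothetical comparison: deploy $1 now vs. funging OpenPhil, which
# invests the displaced dollar and deploys it later when funding is
# the bottleneck. All parameters are illustrative assumptions.

def give_now(dollars: float, efficacy_now: float = 1.0) -> float:
    """Impact of deploying dollars immediately at today's efficacy."""
    return dollars * efficacy_now

def give_via_funging(dollars: float, r: float = 0.07,
                     years: int = 20, efficacy_later: float = 1.5) -> float:
    """Impact if the displaced dollar compounds at rate r, then is
    deployed in a funding-bottlenecked future at higher efficacy."""
    return dollars * (1 + r) ** years * efficacy_later

print(give_now(1.0))          # 1.0
print(give_via_funging(1.0))  # ~5.8 under these assumptions
```

Under these (made-up) numbers the funged dollar does more good, but the conclusion is entirely driven by the assumed rate, horizon, and how much you trust the intermediary, which is what the rest of this thread is about.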
Hey, thanks. That’s a good point.
I think it depends partially on how confident you are that Dustin Moskovitz will give away all his money, and how altruistic you think he is. Moskovitz seems great; I think he's pledged to give away "more than half" his wealth in his lifetime (though I can't currently find a good citation, and it might be much higher). My sense is that some other extremely generous billionaires (Gates/Buffett) also made pledges, and it doesn't currently seem like they're on track. Or maybe they do give away all their money, but it's just held by the foundation, not actually doled out to causes. And then you have to think about how foundations drift over time, and whether you think OpenPhil 2121 will have values you still agree with.
So maybe you can think of this roughly as: “I’m going to give Dustin Moskovitz more money, and trust that he’ll do the right thing with it eventually”. I’m not sure how persuasive that feels to people.
(Practically, a lot of this hinges on how good the next best alternatives actually are. If smart weirdos you know personally are only 1% as effective as AMF, it’s probably still not worth it even if the funding is more directly impactful. Alternatively, GiveDirectly is ~10% as good as GiveWell top charities, and even then I think it’s a somewhat hard sell that all my arguments here add up to a 10x reduction in efficacy. But it’s not obviously unreasonable either.)
That's helpful, thank you! I think the model is more "I'm going to give OpenPhil more money". It only becomes "I'm going to give Dustin more money" if it's true that Dustin adjusts his donations to OpenPhil every year based on how much OpenPhil disburses, such that funging OpenPhil = funging Dustin.
But in any case I’d say most EAs are probably optimistic that these organizations and individuals will continue to be altruistic and will continue to have values we agree with.
And in any case, I strongly agree that we should be more entrepreneurial.
Strong upvote. I think the "GiveDirectly of longtermism" is investing* the money and deploying it to CEPI-like (but more impactful) opportunities later on.
* Donors should invest it in ways that return ≥15% annually (and plausibly 30-100% on smaller amounts, with current crypto arbitrage opportunities). If you don’t know how to do this yourself, funging with a large EA donor may achieve this.
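For a rough sense of scale, here's what those rates compound to over a decade; the 7% figure is an assumed market-ish baseline for comparison, not a number from this thread:

```python
# Ten-year growth multiples at the rates discussed above.
# 7% is a rough market baseline; 15% and 50% reflect the footnote's range.
for rate in (0.07, 0.15, 0.50):
    print(f"{rate:.0%} annually -> {(1 + rate) ** 10:.1f}x over 10 years")
# 7% annually -> 2.0x over 10 years
# 15% annually -> 4.0x over 10 years
# 50% annually -> 57.7x over 10 years
```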
The claim that large EA donors are likely to return ≥15% annually, and plausibly 30-100%, is incredibly optimistic. Why would we expect large EA donors to get so much higher returns on investment than everyone else, and why would such profitable opportunities still be funding-constrained? This is not a case where EA is aiming for something different from others; everyone is trying to maximize their monetary ROI with their investments.
Markets are made efficient by really smart people with deep expertise. Many EAs fit that description, and have historically achieved such returns doing trades/investments with a solid argument and without taking crazy risks.
Examples include: crypto arbitrage opportunities like these (without exposure to crypto markets), the Covid short, early crypto investments (high-risk, but returns were often >100x, implying very favorable risk-adjusted returns), prediction markets, and meat alternatives.
Overall, most EA funders outperformed the market over the last 10 years, and they typically had pretty good arguments for their trades.
But I get your skepticism and also find it hard to believe (and would also be skeptical of such claims without further justification).
Also note that returns will get a lot lower once more capital is allocated in this way. It's easy to make such returns on $100 million, but much harder at larger scale.
But the more you think everyone else is doing that, the more important it is to give now, right? Just as an absurd example, say the $46b of EA-related funds grows 100% YoY for 10 years; then we wake up in 2031 with ~$46 trillion. If anything remotely like that is actually true, we'll feel pretty dumb for not giving to CEPI now.
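A quick sanity check on that doubling arithmetic:

```python
# $46b doubling annually for 10 years (2**10 = 1024).
print(46e9 * 2 ** 10 / 1e12)  # ~47.1, i.e. roughly $46-47 trillion
```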
Yeah, I agree. (Also, I think it’s a lot harder / near-impossible to sustain such high returns on a $100b portfolio than on a $1b portfolio.)