Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks.
Risky bets aren’t themselves objectionable in the way that fraud is, but just to address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to a 50/50 double-or-nothing bet (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of the wealth that’s predictably committed to EA causes, you should be much more scared of risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
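To make the log-of-wealth point concrete, here’s a minimal sketch (my own illustration, with arbitrary numbers: a pool of 100 units and stakes of 1 and 90): if impact is proportional to log(wealth), the expected impact of a 50/50 double-or-nothing bet is roughly zero when the stake is a tiny slice of the pool, and sharply negative when the stake is most of it.

```python
import math

def expected_log_gain(wealth, stake):
    """Expected change in log(wealth) from a 50/50 double-or-nothing bet of `stake`.

    Under the "impact scales with the logarithm of wealth" approximation,
    this is the expected change in impact from taking the bet.
    """
    win = math.log(wealth + stake)
    lose = math.log(wealth - stake)
    return 0.5 * win + 0.5 * lose - math.log(wealth)

# Small stake: ~indifferent (slightly negative, never positive).
print(expected_log_gain(wealth=100.0, stake=1.0))   # ~ -0.00005
# Betting most of the pool: sharply negative.
print(expected_log_gain(wealth=100.0, stake=90.0))  # ~ -0.83
```

Pushing the stake to the full pool sends the expected change to minus infinity under this toy model, which is separate from (and on top of) the reputational downside mentioned above.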
I think marginal returns probably don’t diminish nearly as quickly as the logarithm for neartermist cause areas, but maybe that’s true for longtermist ones (where FTX/Alameda and associates were disproportionately donating), although my impression is that there’s no consensus on this; e.g. 80,000 Hours has been arguing that donations are still very valuable.
(I agree that the downside, i.e. damage to the EA community and to trust in EAs, is worse than “nothing” relative to the funds being gambled, but that doesn’t really affect the spirit of the argument. It’s very easy to underappreciate the downside in practice, though.)
I’d actually guess that marginal returns in longtermism diminish faster than logarithmically, given how much funders have historically struggled to find good funding opportunities.
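As an illustration of what “faster than logarithmic” means, here’s a sketch under an assumed isoelastic form (my choice of functional form, not something anyone in the thread specified): if marginal impact per dollar falls like wealth**(-eta), then eta = 1 gives logarithmic total impact, where each doubling of the funding pool adds the same amount of impact, while eta > 1 means each successive doubling adds less.

```python
import math

def impact_gain(w0, w1, eta):
    """Impact gained by growing the funding pool from w0 to w1,
    assuming marginal impact per dollar is proportional to wealth**(-eta).
    """
    if eta == 1:
        return math.log(w1 / w0)  # logarithmic total impact
    return (w1 ** (1 - eta) - w0 ** (1 - eta)) / (1 - eta)

# eta = 1: every doubling of the pool adds the same impact.
print(impact_gain(1, 2, eta=1), impact_gain(2, 4, eta=1))  # 0.693, 0.693
# eta = 2 (faster-than-logarithmic diminishing): each doubling adds less.
print(impact_gain(1, 2, eta=2), impact_gain(2, 4, eta=2))  # 0.5, 0.25
```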
Global poverty probably has slower diminishing marginal returns, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes.
Re 80,000 Hours: I don’t know exactly what they’ve argued, but I think “very valuable” is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money don’t mean that money becomes unimportant compared to people, or anything like that.
(I didn’t vote on your comment.)
Here’s Ben Todd’s post on the topic from last November: “Despite billions of extra funding, small donors can still have a significant impact”. I’d especially recommend this part from section 1:
My sense is that the bar within longtermism has come down a little bit compared to a few years ago – back then we weren’t providing much funding for things like PhD programmes, which strike me as somewhat less effective than funding core organisations (though still well worth it).
On the other hand, since longtermism is so new, there is also a lot more potential to generate and discover highly effective opportunities as the capacity of the community grows. It wouldn’t surprise me if the bar stays similar in the coming years.
Again, in a worst case scenario, there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic – that could easily be topped up by $1bn (ideally restricted to work to develop vaccines for novel pathogens). (See more ideas.) These kinds of scalable opportunities are likely 10-100x less effective than the top longtermist opportunities we’re able to find today, but still very good (and if you put reasonable credence in longtermism, plausibly still more effective than GiveWell recommended charities).
I also expect research will uncover better scalable longtermist donation opportunities in the coming years, which means that investing to give when those opportunities arise is a more attractive option (compared to donors focused on global health).
If longtermism attracts supporters ahead of our expectations, the bar may fall further. But again, society spends less on reducing existential risk than it does on ice cream, so we could spend orders of magnitude more on longtermist aligned issues, and it would still be a minor global priority.
(Extra info on diminishing returns in longtermism: Returns probably diminish faster in longtermism than in neartermism. But longtermists also care more about the all time total amount of resources invested in an issue than how much is invested each year. This means what matters for diminishing returns are changes in how much you expect to be spent in longtermism aligned ways in the future. This means that additional funding only drives down expected returns if it’s ahead of what you already expected to be spent. So we care more about ‘positive surprises’ than changes in the total of committed funds.)
So he thought the marginal cost-effectiveness hadn’t changed much even while funding within longtermism had dramatically increased over those years. Though I suppose it’s possible that marginal returns diminish quickly within each year even as funding grows quickly over time, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.
Personally, I’d guess funding students’ university programs is much less cost-effective on the margin: given the distribution of research talent, students with a decent shot of contributing should already be fully funded; the best researchers will already be fully funded without many non-research duties (like being a teaching assistant); and other promising researchers can get internships at AI labs, both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses.
I also got the impression that the Future Fund’s bar was much lower, but I think this was after Ben Todd’s post.