Global poverty probably has marginal returns that diminish more slowly, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes.
Re 80,000 Hours: I don’t know exactly what they’ve argued, but I think “very valuable” is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money don’t mean that money becomes unimportant compared to people, or anything like that.
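To make that compatibility claim concrete, here’s a minimal sketch (my own illustration, not something 80,000 Hours has published): if total impact from funding a cause is roughly logarithmic,

$$V(x) = k \ln(x),$$

then the marginal impact of an extra dollar is

$$V'(x) = \frac{k}{x},$$

which keeps shrinking as funding $x$ grows but never reaches zero. Doubling the funding in a cause halves the value of the next dollar, yet that next dollar can still be “very valuable” in absolute terms, so the two claims aren’t in tension.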
Here’s Ben Todd’s post on the topic from last November:
Despite billions of extra funding, small donors can still have a significant impact
I’d especially recommend this part from section 1:
My sense is that the bar within longtermism has come down a little bit compared to a few years ago – back then we weren’t providing much funding for things like PhD programmes, which strike me as somewhat less effective than funding core organisations (though still well worth it).
On the other hand, since longtermism is so new, there is also a lot more potential to generate and discover highly effective opportunities as the capacity of the community grows. It wouldn’t surprise me if the bar stays similar in the coming years.
Again, in a worst case scenario, there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic – that could easily be topped up by $1bn (ideally restricted to work to develop vaccines for novel pathogens). (See more ideas.) These kinds of scalable opportunities are likely 10-100x less effective than the top longtermist opportunities we’re able to find today, but still very good (and if you put reasonable credence in longtermism, plausibly still more effective than GiveWell recommended charities).
I also expect research will uncover better scalable longtermist donation opportunities in the coming years, which means that investing to give when those opportunities arise is a more attractive option (compared to donors focused on global health).
If longtermism attracts supporters ahead of our expectations, the bar may fall further. But again, society spends less on reducing existential risk than it does on ice cream, so we could spend orders of magnitude more on longtermist aligned issues, and it would still be a minor global priority.
(Extra info on diminishing returns in longtermism: Returns probably diminish faster in longtermism than in neartermism. But longtermists also care more about the all time total amount of resources invested in an issue than how much is invested each year. This means what matters for diminishing returns are changes in how much you expect to be spent in longtermism aligned ways in the future. This means that additional funding only drives down expected returns if it’s ahead of what you already expected to be spent. So we care more about ‘positive surprises’ than changes in the total of committed funds.)
So he thought the marginal cost-effectiveness hadn’t changed much even while funding within longtermism had dramatically increased over those years. I suppose it’s possible, though, that marginal returns diminish quickly within each year even as funding grows quickly over time, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.
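One toy way to see how that could work (my own sketch, with made-up symbols, not from Todd’s post): suppose marginal cost-effectiveness within year $t$ is

$$m_t(x) = \frac{c}{1 + x/K_t},$$

where $x$ is funding deployed that year and $K_t$ is the community’s capacity to absorb it. Within any single year, $m_t$ falls steeply as $x$ grows; but if annual spending $x_t$ and capacity $K_t$ grow at roughly the same rate, then $m_t(x_t)$ stays about constant from year to year, matching flat marginal cost-effectiveness despite rapidly growing funding.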
Personally, I’d guess funding students’ university programs is much less cost-effective on the margin: given the distribution of research talent, students with a decent shot of contributing should already be fully funded; the best researchers will already be fully funded without many non-research duties (like being a teaching assistant); and other promising researchers can get internships at AI labs, both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses.
I also got the impression that the Future Fund’s bar was much lower, but I think this was after Ben Todd’s post.
(I didn’t vote on your comment.)