I’ve heard grantmakers in the LT space say that “everything that is above the bar is getting funded and what we need are more talented people filling roles or new orgs to start up.” So it seems that any marginal donation going to the EA Funds LTFF is currently being underutilized/not used at all and doesn’t even funge well. So that makes me lean more neartermist, even if you accept that LT interventions are ultimately more impactful.
Apologies if that is addressed in the video above—don’t have time to view it now, but from your description it looks like it is more geared around general effectiveness and not on-the-current-margin giving right now.
Curious if you have any thoughts on that.
Thanks for sharing!
On the one hand, I think that bar may still be higher than that of neartermist interventions. I have estimated here that the marginal cost-effectiveness of longtermism and catastrophic risk prevention is 9 (= 3.95/0.431) times as high as that of global health and development.
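To spell out the arithmetic behind that factor of 9, it is just the quotient of the two marginal cost-effectiveness estimates from the linked analysis (assuming both are expressed in the same units, which therefore cancel in the ratio):
$$\frac{3.95}{0.431} \approx 9.16 \approx 9$$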
On the other hand, I think one can still say longtermist interventions are more constrained by labour than neartermist ones. Fields like global health and development have been around for a while, so it makes sense that funding is more likely to be undersupplied relative to ideas. In contrast, fields like AI safety still have very few people involved[1], so the ideas space is very nascent, and more people are needed to explore it.
Benjamin Todd’s post Let’s stop saying ‘funding overhang’ clarifies this matter.
If funding is currently oversupplied relative to labour in the longtermist space, one can prioritise interventions which focus on ensuring more people will be able to contribute to the area in the future. The LTFF makes many (most?) grants with this goal. Some examples from this quarter (taken from here):
“The Alignable Structures workshop in Philadelphia”.
“Financial support for: Finishing Master’s (AI upskilling), independent research and career exploration”.
“6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety”.
“Funding from 15th September 2022 until 31st January 2023 for doing alignment research and upskilling”.
[1] According to 80,000 Hours’ page: “Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this”.