Thanks for sharing!

I've heard grantmakers in the LT space say that "everything that is above the bar is getting funded and what we need are more talented people filling roles or new orgs to start up".
On the one hand, I think that bar may still be higher than that of neartermist interventions. I have estimated here that the marginal cost-effectiveness of longtermism and catastrophic risk prevention is 9 (= 3.95/0.431) times as high as that of global health and development.
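Spelling out the arithmetic, and assuming the two figures are the respective marginal cost-effectiveness point estimates expressed in the same units, the factor of 9 is just their ratio:

$$\frac{3.95}{0.431} \approx 9.$$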
On the other hand, I think one can still say the bar of longtermist interventions is more constrained by labour than that of neartermist ones. Fields like global health and development have been around for a while, so it makes sense that funding is more likely to be undersupplied relative to ideas. In contrast, fields like AI safety still have very few people involved[1], so the ideas space is still very nascent, and more people are needed to explore it.
Benjamin Todd's post Let's stop saying "funding overhang" clarifies this matter.
So it seems that any marginal donation going to the EA Funds LTFF is currently being underutilized/not used at all and doesn't even funge well. So that makes me lean more neartermist, even if you accept that LT interventions are ultimately more impactful.
If funding is currently oversupplied relative to labour in the longtermist space, one can prioritise interventions which focus on ensuring more people will be able to contribute to the area in the future. The LTFF makes many (most?) grants with this goal. Some examples from this quarter (taken from here):
"The Alignable Structures workshop in Philadelphia".
"Financial support for: Finishing Master's (AI upskilling), independent research and career exploration".
"6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety".
"Funding from 15th September 2022 until 31st January 2023 for doing alignment research and upskilling".
[1] According to 80,000 Hours' page:
"Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this."