Thanks for the initiative!
I have donated 10 % of my net income since I started working, and will be donating to the Long-Term Future Fund from EA Funds:
I believe longtermist interventions are more effective than neartermist ones for the reasons described by Hilary Greaves here (I highly recommend watching the whole talk!). In short, I think:
The expected cost-effectiveness of interventions (including neartermist ones) is driven by their longterm effects.
One can better increase the longterm effects by explicitly focussing on them.
So longtermist interventions tend to be better.
I like the grantmaking approach of EA Funds, as it accounts for:
Expected value.
Marginal impact and room for more funding.
Counterfactual impact.
Track record.
Information value.
Direct and indirect effects.
+1, I also found that Greaves talk very valuable and that it influenced my thinking a lot!
I've heard grantmakers in the LT space say that "everything that is above the bar is getting funded and what we need are more talented people filling roles or new orgs to start up." So it seems that any marginal donation going to the EA Funds LTFF is currently being underutilized/not used at all and doesn't even funge well. That makes me lean more neartermist, even if you accept that LT interventions are ultimately more impactful.
Apologies if that is addressed in the video above (I don't have time to view it now), but from your description it looks like it is more geared towards general effectiveness than towards giving on the current margin.
Curious if you have any thoughts on that.
Thanks for sharing!
On the one hand, I think that bar may still be higher than that of neartermist interventions. I have estimated here that the marginal cost-effectiveness of longtermism and catastrophic risk prevention is 9 (= 3.95/0.431) times as high as that of global health and development.
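To spell out the arithmetic (writing LTCR as shorthand for the marginal cost-effectiveness of longtermism and catastrophic risk prevention, and GHD for that of global health and development, both taken from the analysis linked above; the units cancel in the ratio):

$$\frac{\text{LTCR}}{\text{GHD}} = \frac{3.95}{0.431} = 9.16\ldots \approx 9.$$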
On the other hand, I think one can still say longtermist interventions are more constrained by labour than neartermist ones. Fields like global health and development have been around for a while, so it makes sense that funding is more likely to be undersupplied relative to ideas. In contrast, fields like AI safety still have very few people involved[1], so the idea space is very nascent, and more people are needed to explore it.
Benjamin Todd's post Let's stop saying 'funding overhang' clarifies this matter.
If funding is currently oversupplied relative to labour in the longtermist space, one can prioritise interventions which focus on ensuring more people will be able to contribute to the area in the future. The LTFF makes many (most?) grants with this goal. Some examples from this quarter (taken from here):
"The Alignable Structures workshop in Philadelphia".
"Financial support for: Finishing Master's (AI upskilling), independent research and career exploration".
"6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety".
"Funding from 15th September 2022 until 31st January 2023 for doing alignment research and upskilling".
According to 80,000 Hours' page:
"Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this."