Longtermist giving has fewer donation opportunities
Could you clarify what you mean by donation opportunities? The way I think about it, it makes sense for small donors to donate to whichever organisation/fund they think is most cost-effective at the margin (for large donors, diminishing marginal returns are important, so it makes sense to follow a portfolio approach).
can absorb less extra funding and deploy it effectively
Personally, I would say interventions around biosecurity and pandemic preparedness, and civilisational resilience, can absorb and deploy funding much more cost-effectively than GiveWell's top charities, whose sign of impact is unclear to me.
has very long feedback loops
There is a sense in which feedback loops are short. Some examples:
Outcome: decreasing extinction risk from climate change. Goal: decrease emissions. Nearterm proxy: adoption of a policy to decrease emissions.
Outcome: decreasing extinction risk from nuclear war. Goal: decrease the number of nuclear weapons. Nearterm proxy: agreements to limit nuclear arsenals.
Outcome: decreasing extinction risk from engineered pandemics. Goal: increase pandemic preparedness. Nearterm proxy: ability to rapidly scale up the production of vaccines.
In the case of AI safety, it also seems to have become subject to the forces of mainstream capitalism in a way that makes it less funding constrained and considerably less tractable (e.g. can even Open Philanthropy, with >$10bn to spend, really slow the pace of capabilities research?).
You can think about it in another way. Can projects funded by Open Philanthropy (or others) meaningfully increase the tiny number of people working on AI safety (90 % confidence interval, 200 to 1 k), or improve their ability to do so?
I think a better way to think about this is to look at Open Philanthropy, a specialist giving org which maintains worldview diversity because there is really genuine debate about how to allocate funding between different cause areas.
I agree there is lots of uncertainty, but I do not think one should assume Open Philanthropy has figured out the best way to handle it. I believe individuals could use Open Phil's views as a starting point, but then should (to a certain extent) try to look into the arguments, and update their views (and donations) accordingly. Personally, I used to distribute all my donations evenly across the 4 EA Funds (25 % each), but am now directing 100 % of my donations to the Long-Term Future Fund. This does not mean I think neartermist interventions should receive 0 donations. Even if I thought the optimal fraction of longtermist donations in the portfolio was only 1 pp higher, since my donations are much smaller than 1 % of overall donations, it makes sense for me to direct all my donations to longtermist projects.
There is an important distinction between what small donors and Open Philanthropy should do:
Small donations have a negligible impact on the marginal cost-effectiveness of the organisations/funds receiving them. So it makes sense for me to donate to the organisation/fund which I view as most cost-effective at the margin.
Large donations can shift the marginal cost-effectiveness quite a lot. So it makes much more sense for Open Philanthropy to follow a portfolio approach.
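The small-donor vs large-donor distinction above can be made concrete with a toy model. Everything here is my own illustrative assumption (the square-root impact curves and all the numbers stand in for generic diminishing returns; they are not from GiveWell or Open Phil):

```python
import numpy as np

# Toy model: two causes, A and B, with square-root impact curves,
# i.e. diminishing marginal returns to funding. base_* are existing
# funding levels; mult_* make cause A more cost-effective at the margin.
def impact(extra_a, extra_b, base_a=100.0, base_b=100.0,
           mult_a=1.2, mult_b=1.0):
    """Total impact if a donor adds extra_a to cause A and extra_b to B."""
    return mult_a * np.sqrt(base_a + extra_a) + mult_b * np.sqrt(base_b + extra_b)

def best_split(budget, grid=10_001):
    """Fraction of the budget given to cause A that maximises total impact."""
    fracs = np.linspace(0.0, 1.0, grid)
    totals = impact(fracs * budget, (1.0 - fracs) * budget)
    return fracs[np.argmax(totals)]

# A small donation barely moves marginal cost-effectiveness, so the
# optimum is a corner solution: give everything to the better cause.
print(best_split(budget=1.0))      # ~1.0, i.e. 100 % to cause A

# A large donation flattens cause A's marginal returns, so the optimum
# is an interior split that equalises marginal impact across causes.
print(best_split(budget=1_000.0))
```

This is the standard equalise-marginal-returns logic: the small donor's optimum sits at a corner even though the optimal sector-wide portfolio is mixed, which is the point being made about individual donations vs Open Philanthropy's allocation.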
I find them a lot more authoritative than comments from individuals in EA who don't do grantmaking, even highly respected ones like Ben.
I do not know to what extent the fraction of Open Phil's donations going to each of their worldviews was defined by many people or only a few. As far as I know, Open Phil has not clarified that. I believe it would be good if Open Phil described more of their process (besides what they outlined 6 or 7 years ago in the worldview diversification post).
You also note 80k's list of the most pressing problems, but you should note here that 80k has one of the most maximalist longtermist positions in EA.
I think 80,000 Hours is not at the very end of the longtermist spectrum. For example, the 80,000 Hours Podcast has had many episodes related to neartermist causes, and the 80,000 Hours Job Board has many positions on these too.
much more cost-effectively than GiveWell's top charities, whose sign of impact is unclear to me
This is a very bold claim, made quite casually! Especially in light of:
There is a sense in which feedback loops are short.
I would evaluate these options through the GiveWell criteria: evidence of effectiveness, cost-effectiveness, and room for more funding.
For the GiveWell charities, they score very highly on each metric. For example, they are each supported by multiple randomised controlled trials. By contrast, the indicators you mention are weak proxy indicators (I think you should also have added "counterfactual" to each one: a new arms control treaty isn't an achievement for a donor unless it would likely not have happened without extra funding).
If I could challenge you, I think this looks like motivated reasoning, in that I think these are probably "decent proxy indicators if you've already decided to donate solely within longtermism". But I think it's very tough to maintain that longtermist giving opportunities stack up next to neartermist ones, if compared on the same metrics.
To summarise: global health giving opportunities offer exceptionally strong evidence of effectiveness; rigorous cost-effectiveness analyses; and room for more funding, updated annually with high levels of transparency.
Longtermist giving opportunities (as mentioned here) - some (weak) proxy indicators show progress, some don't (e.g. I'm not aware of any counterfactual nuclear arms control treaties in the past 10 years); therefore cost-effectiveness is speculative, because there is little evidence of effectiveness; individual projects are likely to have room for more funding, but the sector as a whole has much less room for more funding (e.g. you could deploy billions via GiveDirectly, but Open Phil only managed to deploy <$1bn to longtermist causes last year).
As effective giving organisations should (at least in theory) be agnostic about cause area and focussed on "effectiveness", I would be surprised if any raised the majority of their donations for longtermist causes, which have significant challenges around evidence of effectiveness/tractability.
This [it is unclear whether GiveWell's top charities are good/bad] is a very bold claim, made quite casually!
Sorry. I had given some context in the post (not sure you noticed it). You can find more in section 4 of Mogensen 2019 (discussed here):
In this section, I argue that an agent whose utility function is a positive linear transform of impartial good will not prefer donating to Against Malaria Foundation over Make-A-Wish Foundation if she responds to cluelessness with imprecision and satisfies the maximality rule, provided that she shares our evidence. Section 4.1 emphasizes the depth of our uncertainty concerning the indirect effects of donating to Against Malaria Foundation. Section 4.2 reflects on the lessons to be drawn in applying the maximality rule to a choice between these organizations.
You say that:
I would evaluate these options through the GiveWell criteria: evidence of effectiveness, cost-effectiveness, and room for more funding.
I believe those criteria are great, and wish effective giving organisations applied them to their own operations, for example by doing retrospective and prospective cost-effectiveness analyses.
For the GiveWell charities, they score very highly on each metric.
My concern is that GiveWell's metrics (roughly, lives saved per dollar[1]) may well not capture most of the expected effects of GiveWell's interventions. For example:
I think GiveWell's top charities may be anything from very harmful to very beneficial once the effects on terrestrial arthropods (e.g. insects[1]) are accounted for.
From Mogensen 2019:
In comparing Make-A-Wish Foundation unfavourably to Against Malaria Foundation, Singer (2015) observes that "saving a life is better than making a wish come true." (6) Arguably, there is a qualifier missing from this statement: "all else being equal." Saving a child's life need not be better than fulfilling a child's wish if the indirect effects of saving the child's life [e.g. on animals] are worse than those of fulfilling the wish.
Feel free to check section 4.1 for many positive and negative consequences of increasing and decreasing population size.
By contrast, the indicators you mention are weak proxy indicators
Note I am not saying the relationships are simple or linear, just that they very much matter. Without nuclear weapons, there would be no risk of nuclear war. In any case, I agree the correlation between longtermist outcomes (e.g. lower extinction risk) and the measurable outputs of longtermist interventions (e.g. fewer nuclear weapons) will tend to be lower than the correlation between neartermist outcomes (e.g. deaths averted) and the measurable outputs of neartermist interventions (e.g. distributed bednets). However, my concern is that the correlation between neartermist outcomes (e.g. deaths averted) and the ultimately relevant outcome (welfare across all space and time, not just the very nearterm welfare of humans) is quite poor.
If I could challenge you, I think this looks like motivated reasoning, in that I think these are probably "decent proxy indicators if you've already decided to donate solely within longtermism".
Fair! For what it's worth, I was not committed to donating solely to longtermist interventions from the outset. I used to split my donations evenly across all 4 EA Funds, and wrote articles about donating to GiveWell's top charities in the online newspaper of my former university.
Longtermist giving opportunities (as mentioned here) - some (weak) proxy indicators show progress, some don't (e.g. I'm not aware of any counterfactual nuclear arms control treaties in the past 10 years); therefore cost-effectiveness is speculative, because there is little evidence of effectiveness; individual projects are likely to have room for more funding, but the sector as a whole has much less room for more funding (e.g. you could deploy billions via GiveDirectly, but Open Phil only managed to deploy <$1bn to longtermist causes last year).
I agree longtermist organisations should do more to assess their cost-effectiveness and room for more funding. One factor is that longtermist organisations tend to be smaller (though not the Nuclear Threat Initiative), so they have fewer resources to do such analyses (although arguably still enough).
In reality, according to GiveWell's moral weights, the value of saving lives increases until an age of around 10 (if I recall correctly), and then starts decreasing. Economic benefits are also taken into account.