Finally, I think you acknowledge but probably underweight the importance of giving more weight to recent performance. For many organisations, the ‘revenue curve’ of donations will start out low but then grow rapidly. So the relevant thing for me is the direction of travel of Ayuda Efectiva, not its performance as an average of its first three years. You can see the value of looking at the direction of travel if you look at the performance of Effektiv Spenden and, to some extent, Giving What We Can (although GWWC had significant ‘unfair advantages’ in its early years). In each case, their performance has improved substantially over time.
Thanks for noting that, Jack!
Note that the factual non-marginal multipliers I present for the whole period (e.g. 2019 to 2021 for Ayuda Efectiva) are not the mean of the factual non-marginal multipliers for the individual years of that period. I calculate the factual non-marginal multiplier for a given period as the ratio between donations received to be directed towards effective organisations and costs, so years with a greater volume of donations and costs have a larger weight. This explains why Ayuda Efectiva’s multiplier for 2019 to 2021 (1.34) is not too different from that for 2021 (1.72).
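To make the weighting concrete, here is a minimal sketch with purely illustrative (made-up) yearly figures, not Ayuda Efectiva’s actual numbers, showing how the period multiplier differs from the simple mean of the yearly multipliers:

```python
# Illustrative (made-up) yearly figures for a hypothetical effective giving organisation.
donations = {2019: 50_000, 2020: 150_000, 2021: 600_000}  # donations directed to effective orgs
costs = {2019: 60_000, 2020: 120_000, 2021: 350_000}      # operating costs

# Yearly factual non-marginal multipliers (donations / costs).
yearly = {year: donations[year] / costs[year] for year in donations}

# Period multiplier: ratio of total donations to total costs, so years with
# a larger volume of donations and costs carry a larger weight.
period = sum(donations.values()) / sum(costs.values())

# Simple mean of the yearly multipliers, for comparison (weights all years equally).
mean_of_yearly = sum(yearly.values()) / len(yearly)

print(yearly)          # {2019: 0.83, 2020: 1.25, 2021: 1.71} (rounded)
print(period)          # ~1.51, pulled towards the later, larger years
print(mean_of_yearly)  # ~1.27, lower because early years count as much as later ones
```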
It would be nice if effective giving organisations forecasted their future costs, donations received, and multipliers, thus doing a prospective cost-effectiveness analysis.
Thanks for this, Vasco—always a useful exercise to look at cost-effectiveness, especially in an area like effective giving, where money moved is quite easily measured.
Some thoughts on this, which I’ll split into different comments for ease of discussion:
This seems very unlikely to be true in practice, but also I’m not sure it should be true in an ideal world either. Effective giving organisations should vary according to many factors—target market, costs of operating in various jurisdictions, competition being higher in some jurisdictions than others, the effectiveness of the team and strategy, etc.
For example, it would be naive to assume that an effective giving org targeting Ultra High Net Worth Individuals (e.g. LongView, Effective Giving, Founder’s Pledge) would have the same ROI as one targeting grassroots givers (e.g. One for the World). Some types of outreach/donor will have much higher ROI than others.
The reason I think it isn’t even ‘ideal’ for all organisations to have the same ROI is that there is value to having a variety of approaches, because:
Certain types of outreach are crowded (e.g. it seems silly repeatedly to set up a ‘new Founder’s Pledge’, or a new ’10% pledge’ organisation)
Certain types are too specialist or need special expertise that isn’t available to every organisation (e.g. it’s very hard to gain access to Ultra High Net Worth givers)
If we all did one type of fundraising, it would decrease the diversity of our funding base and increase our risk
I’d also submit that the relative impact of effective-giving organizations nearer the “grassroots” level will likely be underestimated by looking solely at money moved. For example, grassroots effective-giving campaigns provide people with accessible ways to take action, which itself can spur greater commitment and downstream positive actions that aren’t captured well by a money-moved analysis alone. In contrast, money moved likely does a better job capturing the bulk of the impact from UHNW outreach.
Agreed, Jason! On the other hand, I would expect effective giving organisations to be tracking such indirect impacts if they represented an important part of their theory of change and overall impact. My impression is that the 4 organisations I analysed are not assessing such indirect impacts much. They were also not covered in GWWC’s last impact analysis (see here).
They seem rather difficult to capture and evaluate at a high level of specificity. It’s unclear if attempting to better measure and quantify that portion of ROI was the best use of these orgs’ resources a year ago, or even now in a tighter funding picture.
Per the first linked source: “In the 2020 EA Survey, 21% of respondents reported that Giving What We Can was important for them getting involved in EA.” Doubtless the percentage would be higher for all effective-givingish organizations (especially if GiveWell were included, my own entry point). Even concluding that 2.5 percent of 21 percent of EA activity should be “credited” to grassroots effective giving would be pretty significant additional impact for the fairly low spend involved.
I think there is a tension between:
People getting involved with EA is a major driver of our impact.
We do not measure how much we are responsible for people getting involved with EA.
These imply a major driver of impact is not being measured, which seems strange (especially for the larger effective giving organisations). Note that I am not suggesting investing tons of resources into quantifying indirect impact. Just asking a few questions once a year (e.g. did you apply to any job thanks to becoming aware of EA via our effective giving organisation?) would take little time, and provide useful information.
I agree GWWC’s indirect impact has been quite important:
On the other hand, I would say “getting involved in EA” is a little too vague. I think effective giving can itself be considered a form of involvement in EA, so, from what you quoted alone, it is unclear whether there have been indirect impacts besides donations.
This is not obvious to me, because I think GWWC and GiveWell have much stronger ties to EA than the mean effective giving organisation. I also expect effective giving to disproportionately select for people who will end up engaged with neartermist interventions, which I think have very unclear impact.
Donor time/attention is a precious commodity to fundraisers, so I wouldn’t expect organizations to have expended much of it on this topic without a specific business justification. It’s plausible to me that the funders thought (and may still think) that each org’s easily-quantifiable output was sufficient to fill room for more funding, and that the orgs didn’t (and don’t) think more precise measurement of indirect impact would materially change org strategy (e.g., because those impacts are attainable by the org only as a byproduct of doing the org’s standard work).
Thanks, Jack! It is always good to receive feedback on such exercises too!
I agree with all the points you make. As I said:
However, although it is fine for the (all things considered) factual non-marginal multipliers to be different, the (all things considered) counterfactual marginal multipliers should be the same. If the marginal cost-effectiveness of donating to X is higher than that of donating to Y, one should donate more to X at the margin (which does not mean one should donate 0 to Y).
I don’t think this position is “extreme” but it is certainly highly debatable. Longtermist giving has fewer donation opportunities; can absorb less extra funding and deploy it effectively; and has very long feedback loops, which are hard to measure and have untested theories of change. In the case of AI safety, it also seems to have become subject to the forces of mainstream capitalism in a way that makes it less funding constrained and considerably less tractable (e.g. can even Open Philanthropy, with >$10bn to spend, really slow the pace of capabilities research?).
I think a better way to think about this is to look at Open Philanthropy, a specialist giving org which maintains worldview diversity because there is really genuine debate about how to allocate funding between different cause areas. I find them a lot more authoritative than comments from individuals in EA who don’t do grantmaking, even highly respected ones like Ben.
(You also note 80k’s list of the most pressing problems, but you should note here that 80k has one of the most maximalist longtermist positions in EA.)
Thanks for the comment, Jack!
Could you clarify what you mean by donation opportunities? The way I think about it, it makes sense for small donors to donate to whatever organisation/fund they think is most cost-effective at the margin (for large donors, diminishing marginal returns are important, so it makes sense to follow a portfolio approach).
Personally, I would say interventions around biosecurity and pandemic preparedness and civilisation resilience can absorb and deploy funding much more cost-effectively than GiveWell’s top charities, whose sign of impact is unclear to me.
There is a sense in which feedback loops are short. Some examples:
Outcome: decreasing extinction risk from climate change. Goal: decreasing emissions. Nearterm proxy: adoption of a policy to decrease emissions.
Outcome: decreasing extinction risk from nuclear war. Goal: decrease number of nuclear weapons. Nearterm proxy: agreements to limit nuclear arsenals.
Outcome: decreasing extinction risk from engineered pandemics. Goal: increase pandemic preparedness. Nearterm proxy: ability to rapidly scale up the production of vaccines.
You can think about it in another way. Can projects funded by Open Philanthropy (or others) meaningfully increase the tiny number of people working on AI Safety (90 % confidence interval, 200 to 1 k), or improve their ability to do so?
I agree there is lots of uncertainty, but I do not think one should assume Open Philanthropy has figured out the best way to handle it. I believe individuals could use Open Phil’s views as a starting point, but then should (to a certain extent) try to look into the arguments, and update their views (and donations) accordingly. Personally, I used to distribute all my donations evenly across the 4 EA Funds (25 % each), but am now directing 100 % of my donations to the Long-Term Future Fund. This does not mean I think neartermist interventions should receive 0 donations. Even if I thought the optimal fraction of longtermist donations in the portfolio was only 1 pp higher, since my donations are much smaller than 1 % of the overall donations, it makes sense for me to direct all my donations to the longtermist projects.
There is an important distinction between what small donors and Open Philanthropy should do:
Small donations have a negligible impact on the marginal cost-effectiveness of the organisations/funds receiving them. So it makes sense for me to donate to the organisation/fund which I view as most cost-effective at the margin.
Large donations can shift the marginal cost-effectiveness quite a lot. So it makes much more sense for Open Philanthropy to follow a portfolio approach.
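As a rough illustration of the argument above about small versus large donors, here is a minimal sketch with made-up totals (none of these figures come from the post):

```python
# Made-up figures illustrating why a small donor can sensibly direct 100 % of
# their donations to the option they consider most cost-effective at the margin.
total_donations = 100_000_000          # assumed overall donations across the community ($)
current_longtermist_fraction = 0.30    # assumed current share going to longtermist projects
optimal_longtermist_fraction = 0.31    # assumed optimum, just 1 pp higher

# Funding gap to reach the assumed optimal allocation.
gap = (optimal_longtermist_fraction - current_longtermist_fraction) * total_donations
print(round(gap))  # ~1_000_000

# A small donor's budget is far below the gap, so directing all of it to the
# underweighted area still leaves the portfolio below the assumed optimum.
my_donations = 10_000
print(my_donations < gap)  # True -> give 100 % to the marginally better option

# A large donor could overshoot the gap on their own, so diminishing marginal
# returns bind and a portfolio approach makes more sense for them.
large_donor_budget = 50_000_000
print(large_donor_budget > gap)  # True -> splitting across areas makes sense
```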
I do not know to what extent the fraction of Open Phil’s donations going to each of their worldviews was defined by lots of people, or only a few. As far as I know, Open Phil has not clarified that. I believe it would be good if Open Phil could describe more of their process (besides what they outlined 6 or 7 years ago in the worldview diversification post).
I think 80,000 Hours is not at the very end of the longtermist spectrum. For example, the 80,000 Hours Podcast has had many episodes related to neartermist causes, and the 80,000 Hours Job Board has many positions on these too.
Thanks for engaging so positively here.
A couple of quick reactions:
This is a very bold claim, made quite casually! Especially in light of:
I would evaluate these options through the GiveWell criteria—evidence of effectiveness, cost-effectiveness and room for more funding.
The GiveWell charities score very highly on each metric. For example, they are each supported by multiple randomised controlled trials. By contrast, the indicators you mention are weak proxy indicators (I think you should also have added ‘counterfactual’ to each one—a new arms control treaty isn’t an achievement for a donor unless it would likely not have happened without extra funding).
If I could challenge you, I think this looks like motivated reasoning, in that I think these are probably ‘decent proxy indicators if you’ve already decided to donate solely within longtermism’. But I think it’s very tough to maintain that longtermist giving opportunities stack up next to neartermist ones, if compared on the same metrics.
To summarise: global health giving opportunities—exceptionally strong evidence of effectiveness; rigorous cost-effectiveness analyses; room for more funding updated annually with high levels of transparency.
Longtermist giving opportunities (as mentioned here) - some (weak) proxy indicators show progress, some don’t (e.g. I’m not aware of any counterfactual nuclear arms control treaties in the past 10 years); therefore speculative cost-effectiveness, because little evidence of effectiveness; individual projects likely to have room for more funding, but as a sector much less room for more funding (e.g. you could deploy billions via GiveDirectly, but Open Phil only managed to deploy <$1bn to longtermist causes last year).
As effective giving organisations should (at least in theory) be agnostic about cause area and focussed on ‘effectiveness’, I would be surprised if any raised the majority of their donations for longtermist causes, which have significant challenges around evidence of effectiveness/tractability.
Likewise!
Sorry. I had given some context in the post (not sure you noticed it). You can find more in section 4 of Mogensen 2019 (discussed here):
You say that:
I believe those criteria are great, and wish effective giving organisations applied them to their own operations. For example, doing retrospective and prospective cost-effectiveness analyses.
My concern is that GiveWell’s metrics (roughly, lives saved per dollar[1]) may well not capture most of the expected effects of GiveWell’s interventions. For example:
From Mogensen 2019:
Feel free to check section 4.1 for many positive and negative consequences of increasing and decreasing population size.
Note I am not saying the relationships are simple or linear, just that they very much matter. Without nuclear weapons, there would be no risk of nuclear war. In any case, I agree the correlation between longtermist outcomes (e.g. lower extinction risk) and the measurable outputs of longtermist interventions (e.g. fewer nuclear weapons) will tend to be lower than the correlation between neartermist outcomes (e.g. deaths averted) and the measurable outputs of neartermist interventions (e.g. distributed bednets). However, my concern is that the correlation between neartermist outcomes (e.g. deaths averted) and the ultimately relevant outcomes (welfare across all space and time, not just the very nearterm welfare of humans) is quite poor.
Fair! For what it’s worth, I was not committed to donating solely to longtermist interventions from the onset. I used to split my donations evenly across all 4 EA Funds, and wrote articles about donating to GiveWell’s top charities in the online newspaper of my former university.
I agree longtermist organisations should do more to assess their cost-effectiveness and room for more funding. One factor is that longtermist organisations tend to be smaller (the Nuclear Threat Initiative being an exception), so they have fewer resources to do such analyses (although arguably still enough).
In reality, according to GiveWell’s moral weights, the value of saving lives increases until an age of around 10 (if I recall correctly), and then starts decreasing. Economic benefits are also taken into account.
Do you mean z > 1?
Thanks, David! Corrected.