Thanks for this, Vasco—always a useful exercise to look at cost-effectiveness, especially in an area like effective giving, where money moved is quite easily measured.
Some thoughts on this, which I’ll split into different comments for ease of discussion:
“Nevertheless, the counterfactual marginal multipliers adjusted for cost-effectiveness and indirect impacts should ideally be equal. In other words, donating to any effective giving organisation should be similarly effective taking into account all effects.”
This seems very unlikely to be true in practice, but I’m also not sure it should be true in an ideal world either. Effective giving organisations should vary according to many factors—target market, costs of operating in various jurisdictions, competition being higher in some jurisdictions than others, the effectiveness of the team and strategy, etc.
For example, it would be naive to assume that an effective giving org targeting Ultra High Net Worth Individuals (e.g. Longview, Effective Giving, Founders Pledge) would have the same ROI as one targeting grassroots givers (e.g. One for the World). Some types of outreach/donor will have much higher ROI than others.
The reason I think it isn’t even ‘ideal’ for all organisations to have the same ROI is that there is value to having a variety of approaches, because:
Certain types of outreach are crowded (e.g. it seems silly to repeatedly set up a ‘new Founders Pledge’ or a new ’10% pledge’ organisation)
Certain types are too specialised or need expertise that isn’t available to every organisation (e.g. it’s very hard to gain access to Ultra High Net Worth givers)
If we all did one type of fundraising, it would decrease the diversity of our funding base and increase our risk
I’d also submit that the relative impact of effective-giving organizations nearer the “grassroots” level will likely be underestimated by looking solely at money moved. For example, grassroots effective-giving campaigns provide people with accessible ways to take action, which itself can spur greater commitment and downstream positive actions that aren’t captured well by a money-moved analysis alone. In contrast, money moved likely does a better job capturing the bulk of the impact from UHNW outreach.
Agreed, Jason! On the other hand, I would expect effective giving organisations to be tracking such indirect impacts if they represented an important part of their theory of change and overall impact. My impression is that the 4 organisations I analysed are not doing much to assess such indirect impacts. They were also not covered in GWWC’s last impact analysis (see here).
They seem rather difficult to capture and evaluate at a high level of specificity. It’s unclear if attempting to better measure and quantify that portion of ROI was the best use of these orgs’ resources a year ago, or even now in a tighter funding picture.
Per the first linked source: “In the 2020 EA Survey, 21% of respondents reported that Giving What We Can was important for them getting involved in EA.” Doubtless the percentage would be higher for all effective-givingish organizations (especially if GiveWell were included, my own entry point). Even concluding that 2.5 percent of 21 percent of EA activity should be “credited” to grassroots effective giving would be pretty significant additional impact for the fairly low spend involved.
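To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch. The 21% is the survey figure quoted above and the 2.5% credit share is the hypothetical from the previous sentence; the dollar figures for total EA activity and grassroots effective-giving spend are purely illustrative placeholders, not estimates from the post:

```python
# Back-of-the-envelope sketch of the "2.5 percent of 21 percent" claim.
# The 21% survey figure and the hypothetical 2.5% credit share come from the
# comment above; the dollar figures are illustrative placeholders only.

survey_share = 0.21          # respondents saying GWWC was important for getting involved in EA
credited_fraction = 0.025    # hypothetical share of that activity credited to grassroots effective giving
credit = credited_fraction * survey_share
print(f"Credited share of EA activity: {credit:.2%}")   # ≈0.5%

total_ea_activity_value = 1_000_000_000   # placeholder: value of all EA activity per year
grassroots_spend = 2_000_000              # placeholder: annual grassroots effective-giving spend

indirect_value = credit * total_ea_activity_value
print(f"Implied indirect impact: ${indirect_value:,.0f} on ${grassroots_spend:,.0f} of spend "
      f"(≈{indirect_value / grassroots_spend:.1f}x)")
```

Even with these made-up numbers, crediting roughly half a percent of EA activity to grassroots effective giving would add materially to the multiplier on a fairly small spend, which is the shape of the point being made.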
They seem rather difficult to capture and evaluate at a high level of specificity. It’s unclear if attempting to better measure and quantify that portion of ROI was the best use of these orgs’ resources a year ago, or even now in a tighter funding picture.
I think there is a tension between:
People getting involved with EA is a major driver of our impact.
We do not measure how much we are responsible for people getting involved with EA.
These imply a major driver of impact is not being measured, which seems strange (especially for the larger effective giving organisations). Note that I am not suggesting investing tons of resources into quantifying indirect impact. Just asking a few questions once a year (e.g. did you apply to any job thanks to becoming aware of EA via our effective giving organisation?) would take little time and provide useful information.
Per the first linked source: “In the 2020 EA Survey, 21% of respondents reported that Giving What We Can was important for them getting involved in EA.”
I agree GWWC’s indirect impact has been quite important:
I believe it would be important to study (by descending order of importance):
On the other hand, I would say “getting involved in EA” is a little too vague. I think effective giving can itself be considered being involved in EA, so, from what you quoted alone, it is unclear whether there have been indirect impacts besides donations.
Doubtless the percentage would be higher for all effective-givingish organizations (especially if GiveWell were included, my own entry point).
This is not obvious to me, because I think GWWC and GiveWell have much stronger ties to EA than the mean effective giving organisation. I also expect effective giving to disproportionately select for people who will end up engaged with neartermist interventions, which I think have very unclear impact.
Donor time/attention is a precious commodity to fundraisers, so I wouldn’t expect organizations to have expended much of it on this topic without a specific business justification. It’s plausible to me that the funders thought (and may still think) that each org’s easily-quantifiable output was sufficient to fill room for more funding, and that the orgs didn’t (and don’t) think more precise measurement of indirect impact would materially change org strategy (e.g., because those impacts are attainable by the org only as a byproduct of doing the org’s standard work).
Thanks, Jack! It is always good to receive feedback on such exercises too!
I agree with all the points you make. As I said:
However, the results [for the factual non-marginal multipliers] might differ accounting for future donations (received after 2021, but caused until then), counterfactuals, diminishing marginal returns, cost-effectiveness of caused donations, and indirect impacts of effective giving. Furthermore, the organisations were at different levels of maturity. Consequently, my estimates for the factual non-marginal multipliers [ROIs] are not directly comparable, and I do not know which of the 4 organisations are more effective at the margin.
However, although it is fine for the (all things considered) factual non-marginal multipliers to be different, the (all things considered) counterfactual marginal multipliers should be the same. If the marginal cost-effectiveness of donating to X is higher than that of donating to Y, one should donate more to X at the margin (which does not mean one should donate 0 to Y).
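To illustrate what equal counterfactual marginal multipliers at the optimal allocation would look like, here is a toy sketch with invented diminishing-returns curves for two hypothetical orgs X and Y; nothing below is an estimate for any real effective giving organisation:

```python
# Toy illustration: split a marginal budget between two orgs so that total
# counterfactual, cost-effectiveness-adjusted money moved is maximised.
# The square-root curves are invented purely for illustration.

import numpy as np

budget = 1.0  # normalised marginal budget to split between org X and org Y

def adjusted_money_moved_x(d):
    return 10 * np.sqrt(d)   # diminishing returns for X (made up)

def adjusted_money_moved_y(d):
    return 6 * np.sqrt(d)    # diminishing returns for Y (made up)

# Grid search over possible splits.
dx = np.linspace(0.001, budget - 0.001, 100_000)
total = adjusted_money_moved_x(dx) + adjusted_money_moved_y(budget - dx)
best_dx = dx[np.argmax(total)]
best_dy = budget - best_dx

# Marginal multipliers (numerical derivatives) at the optimum.
eps = 1e-6
marg_x = (adjusted_money_moved_x(best_dx + eps) - adjusted_money_moved_x(best_dx)) / eps
marg_y = (adjusted_money_moved_y(best_dy + eps) - adjusted_money_moved_y(best_dy)) / eps

print(f"Optimal split: {best_dx:.2f} to X, {best_dy:.2f} to Y")                      # ≈0.74 / 0.26
print(f"Marginal multipliers at the optimum: X ≈ {marg_x:.2f}, Y ≈ {marg_y:.2f}")    # both ≈5.8
```

At the optimum the marginal multipliers are equal (≈5.8 in this toy example) even though X receives roughly three times as much as Y, which is the sense in which a higher marginal multiplier for X means donating more to X at the margin rather than donating 0 to Y.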