Agreed, Jason! On the other hand, I would expect effective giving organisations to be tracking such indirect impacts if they represented an important part of their theory of change and overall impact. My impression is that the 4 organisations I analysed are not doing much to assess such indirect impacts. They were also not covered in GWWC's last impact analysis (see here).
They seem rather difficult to capture and evaluate at a high level of specificity. It's unclear if attempting to better measure and quantify that portion of ROI was the best use of these orgs' resources a year ago, or even now in a tighter funding picture.
Per the first linked source: "In the 2020 EA Survey, 21% of respondents reported that Giving What We Can was important for them getting involved in EA." Doubtless the percentage would be higher for all effective-givingish organizations (especially if GiveWell were included, my own entry point). Even concluding that 2.5 percent of 21 percent of EA activity should be "credited" to grassroots effective giving would be pretty significant additional impact for the fairly low spend involved.
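To make that back-of-envelope figure concrete, here is a minimal sketch, taking the 21% survey share at face value and treating the 2.5% attribution as a purely illustrative assumption:

```python
# Back-of-envelope calculation for the figures mentioned above.
# 0.21 is from the 2020 EA Survey (share saying GWWC was important for them getting involved in EA);
# 0.025 is the hypothetical fraction of that activity credited to grassroots effective giving.
survey_share = 0.21
credit_fraction = 0.025

credited_share = survey_share * credit_fraction
print(f"Implied share of EA activity credited to effective giving: {credited_share:.2%}")
# Prints roughly 0.5%, which could still represent meaningful additional impact
# relative to the fairly low spend involved.
```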
They seem rather difficult to capture and evaluate at a high level of specificity. It's unclear if attempting to better measure and quantify that portion of ROI was the best use of these orgs' resources a year ago, or even now in a tighter funding picture.
I think there is a tension between:
- People getting involved with EA is a major driver of our impact.
- We do not measure how much we are responsible for people getting involved with EA.
These imply a major driver of impact is not being measured, which seems strange (especially for the larger effective giving organisations). Note that I am not suggesting investing tons of resources into quantifying indirect impact. Just asking a few questions once a year (e.g. did you apply for any job thanks to becoming aware of EA via our effective giving organisation?) would take little time, and provide useful information.
Per the first linked source: "In the 2020 EA Survey, 21% of respondents reported that Giving What We Can was important for them getting involved in EA."
I agree GWWC's indirect impact has been quite important:
I believe it would be important to study (by descending order of importance):
On the other hand, I would say "getting involved in EA" is a little too vague. I think effective giving can itself be considered being involved in EA, so, from what you quoted alone, it is unclear whether there have been indirect impacts besides donations.
Doubtless the percentage would be higher for all effective-givingish organizations (especially if GiveWell were included, my own entry point).
This is not obvious to me, because I think GWWC and GiveWell have much stronger ties to EA than the mean effective giving organisation. I also expect effective giving to disproportionately select for people who will end up engaged with neartermist interventions, which I think have very unclear impact.
Donor time/attention is a precious commodity to fundraisers, so I wouldn't expect organizations to have expended much of it on this topic without a specific business justification. It's plausible to me that the funders thought (and may still think) that each org's easily-quantifiable output was sufficient to fill room for more funding, and that the orgs didn't (and don't) think more precise measurement of indirect impact would materially change org strategy (e.g., because those impacts are attainable by the org only as a byproduct of doing the org's standard work).