[Question] Impact of Charity Evaluations on Evaluated Charities’ Effectiveness

In connection with ongoing outreach work in EA Israel, an upcoming Charity Effectiveness Prize, and our local charity-evaluation work, I’ve become interested in how the evaluation process affects the charities under evaluation.

I can (sort of) quantify and understand the direct impact of the recommendation on the top charities. I can also (sort of) imagine the kind of impact the recommendation process has on popularizing cost-effectiveness (although I’d love to read a detailed report on the topic).

What I’d like to understand better at the moment are the more general questions around how additional evidence leads to higher performance:

  1. How do self-reflecting charities adapt to new evidence? GiveDirectly, for example, runs many RCTs on direct cash transfers. I’d be very interested in examples of GiveDirectly and other charities making strategic or practical changes in response to new evidence, and in examples of randomized trials or other analyses being used to make high-level decisions. Are there examples of nonprofits that started over completely when their interventions proved ineffective? I’d also be interested to learn what charities recommended by evaluation organizations take away from the evaluation process itself, and whether this helps them improve (or causes harm).

  2. How are charities that aren’t empirically grounded affected by undergoing charity evaluation? Most charities, unfortunately, do not run RCTs or otherwise invest in gathering evidence about the impact of their work. Generally speaking, how do such charities respond to evidence or claims of (in)effectiveness? How reasonable is it to expect charities without a strong history of self-evaluation to improve as a result of a later analysis? Are there examples where organizations like GiveWell and ACE successfully helped improve the performance of the charities they investigated?