Impact Evaluation in EA
Given EA’s history and values, I’d have expected impact evaluation to be a distinguishing feature of the movement. In fact, impact evaluation seems fairly rare in the EA space.
There are some things specific actors could do for EA to get more of the benefits of impact evaluation. For example, organisations that don’t already carry out evaluations of their impact could start doing so, and a well-suited individual could start an organisation to carry out impact evaluations and analysis of the EA movement.
Overall I’m unsure to what extent more focus on impact evaluation would be an improvement. On the one hand, establishing impact is challenging for many EA activities and impact evaluation can be burdensome. On the other hand, an organisation’s historic impact seems very action-relevant to its future activities and current levels of impact evaluation seem low.
What Is Impact Evaluation?
Over the last year I’ve been speaking to EA orgs about their impact evaluation, by which I mean assessing whether and how much an organisation’s activities achieve their intended social impact. This includes setting up a theory of change (ToC), choosing metrics and methods of evaluation, and carrying out evaluations. Impact evaluation can be done internally, by a funder, or by another external evaluator.
Why Is Impact Evaluation Important?
Whereas companies have a clear metric to understand their success (profit), the social impact of a nonprofit is much harder to see. As a result, if a nonprofit wants to have a good sense of its success then it’s likely to need to make an explicit assessment of its impact. For this reason impact evaluation is often viewed as a core part of strategy and operations for nonprofits, particularly in the Global Health and Development (GHD) space.
Concretely, the main benefits of impact evaluation to an organisation are:
Confidence that targeted impacts are achieved
Decision-making being more closely tied to desired social impact (including organisational alignment)
Identifying changes to activities to increase impact and mitigate harms
Sharing progress with stakeholders, and highlighting successes publicly
I’d have expected impact evaluation to be quite common in EA
Given the history of EA, I’d have expected impact evaluation to be quite common in the EA movement. One early EA message was something like “even programs that sound good and are implemented by people with the best intentions can be ineffectual or even harmful, and you might not realise this until you rigorously look into the outcomes”.
EA also has values and norms that align with impact evaluation. The movement has a focus on rigour, quantification, and impact. It has a reputation for making use of cost-effectiveness analysis and placing weight on ‘getting to the truth of the matter’, and there is a culture of actually caring about good outcomes.
Impact evaluation is fairly rare in EA
My impression is that impact evaluation is actually quite rare in the EA space, in the following ways:
As far as I can tell, many organisations in EA (perhaps somewhere between 30% and 70%, outside of GHD work), including the larger ones:
Don’t have explicit theories of change
Don’t carry out or publish assessments of their impact
Don’t have explicit internal functions focussing on impact evaluation
There aren’t established and sophisticated ways in the movement to evaluate the impact of many common EA activities, such as movement-building and policy change
To be clear, there definitely are evaluation efforts in EA, such as:
Many orgs do publish impact reports with meaningful information and detailed metrics
Funders carry out work that might be considered impact evaluation to support grants
There are multiple charity evaluators (GiveWell, GWWC, Founders Pledge, ACE, EA Funds, Giving Green, potentially others) focussed on impact evaluation for funders
Culturally, organisations seem motivated by impact, and it is common to carry out cost-effectiveness calculations and reason in social impact terms when making decisions
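For readers unfamiliar with the practice, a back-of-the-envelope cost-effectiveness calculation of the kind mentioned above might look like the following sketch. All names and figures here are hypothetical, chosen purely for illustration:

```python
def cost_effectiveness(total_cost, units_of_impact, counterfactual_share):
    """Cost per counterfactual unit of impact.

    counterfactual_share: estimated fraction of the impact (0 to 1)
    that would not have happened without the programme.
    """
    counterfactual_units = units_of_impact * counterfactual_share
    if counterfactual_units == 0:
        raise ValueError("no counterfactual impact to attribute costs to")
    return total_cost / counterfactual_units

# Hypothetical example: a programme costing $100,000 that placed 40
# people into relevant roles, where we estimate half of the placements
# were counterfactual.
print(cost_effectiveness(100_000, 40, 0.5))  # → 5000.0 dollars per placement
```

In practice these estimates usually come with wide uncertainty ranges, and the counterfactual share is often the most contested input, but even a rough version can make trade-offs between activities explicit.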
But overall I’m still surprised by how little impact evaluation there is in EA.
Potentially justified reasons for this
This might be the optimal level of impact evaluation. Some potentially well-justified reasons for the current level are:
It’s challenging for many EA activities
The impact of many EA activities is indirect, hard to see, and/or a long way in the future, making evaluation challenging. This increases the burden of impact evaluation work and reduces the pay-off. Some organisations I spoke to had attempted this work in the past but found it too challenging for the reward, so reasonably shelved it for the time being. (This is also the most obvious explanation for why impact evaluation is much more common for GHD organisations than for e.g. policy, AI safety, and movement-building organisations.)
Orgs are busy
Many organisations I spoke to expressed a desire to have better impact evaluation and/or a theory of change, but it was on a long list of things to do
Orgs might have strong priors about activities
Organisations might have strong views that work is worth carrying out and expect that the evidence from impact evaluation would be weak, so that the results of an evaluation would be unlikely to change their focus
Some low- to medium-cost opportunities
Here are some actions that different actors could take if they wanted to do or encourage more impact evaluation:
Strengthen internal evaluations – organisations above a certain size (e.g. >10 employees) could carry out a minimal level of impact evaluation, including:
Creating an explicit theory of change, with a description of what they do, what they hope it will lead to, and how they evaluate their success
Running internal impact evaluations to assess their social impact (and potentially publishing the results)
Having staff dedicated to impact evaluation or with it included in their duties
Leadership teams being invested in the results of evaluations
Carry out external evaluations – individuals who want to work on this problem and are well-suited to this type of work could:
Carry out impact evaluations for EA organisations (or provide advice on evaluations carried out internally)
They could focus on a specific cause area or sub-section of EA, since evaluation methods differ between them
Carry out analysis of the EA movement as a whole, including historic impact and potential risks and opportunities
Will MacAskill points out here that EA should potentially have an org focussed on identifying potential risks of harm. To me, a more natural idea would be an evaluation org focussed more generally on assessing the progress of the EA movement, since if done properly this would include potential harms.
This analysis could be distributed to EA org leaders, or published online. For analysis made public, it would be important to get the buy-in of central EA orgs and be considerate of PR risks.
Collect and centralise resources and improve methodology
An individual could:
Collect public impact evaluations and methodological resources in e.g. a wiki
Develop evaluation methods for common EA cause areas or interventions (this is perhaps best done while simultaneously carrying out actual evaluations)
CEA could refresh and deepen the public impact page
Overall, I’m fairly uncertain to what extent more focus on impact evaluation in EA would be an improvement.
On the one hand, impact evaluation is challenging for many EA activities and can carry a high resource burden, without a guarantee that it will affect decisions in significant ways.
On the other hand, the historic impact of activities seems very action-relevant to future activities, and current levels of evaluation seem low.
If someone were particularly excited about and well-suited to work on impact evaluation in EA, then that seems more unambiguously positive, and I could imagine them being really useful to many organisations.
Thanks to Stephen Clare, Ben Clifford and Devon Fritz for providing comments on a draft of this post.
I’m basing this on conversations I’ve had with ~20 orgs, and on quickly checking the websites of ~10 other prominent EA orgs for a public ToC or impact report. Organisations in the second set may carry out this work but not make it public.
“Someone could set up an organisation or a team that’s explicitly taking on the task of assessing, monitoring and mitigating ways in which EA faces major risks, and could thereby fail to provide value to the world, or even cause harm.”