Nice post!
I do agree there is a potential gap for more impact evaluation in the EA space. It is commonplace for non-EA NGOs/organisations to be required to set aside a certain percentage of their programme budget for monitoring & evaluation purposes, so it feels like something similar could be achieved fairly easily for EA organisations.
A potential option, though one that would need far more exploration, is a central EA organisation funded by 5% of all OP/GW/EA Funds grants. So if Open Phil gives a $1m grant, $50k is allocated to the central EA impact evaluation organisation, which then adds the recipient org to its list of organisations to work with and carries out an independent evaluation at some agreed point (depending on grant objectives etc.).
One thing I would stress in particular links to your point about the difficulty of doing M&E on several of the largest EA cause areas (esp. in the GCR space), which have very long (or potentially non-existent) feedback loops and unclear metrics to track. Rather than just accepting that it's too difficult to do impact evaluation, the focus should be on the process of decision-making and reasoning in those organisations, which can act as a 'best alternative' proxy. This can then be evaluated through independent assessment of items such as theories of change and the use of decision methods like Bayesian belief networks to map the route to impact/change.
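To make the Bayesian belief network idea slightly more concrete, here is a minimal sketch (plain Python, with entirely made-up node names and probabilities) of how an evaluator might encode a simple three-step theory of change, research quality → policy uptake → impact, and compute the implied probability of impact by enumeration:

```python
# Minimal sketch of a Bayesian belief network over a hypothetical theory of change.
# Chain: good_research -> policy_uptake -> impact. All numbers are illustrative only.

from itertools import product

p_good_research = 0.7                       # P(research is high quality)
p_uptake_given = {True: 0.4, False: 0.1}    # P(policy uptake | research quality)
p_impact_given = {True: 0.5, False: 0.05}   # P(impact | policy uptake)

# Sum P(impact) over all states of the upstream variables.
p_impact = 0.0
for research, uptake in product([True, False], repeat=2):
    p_r = p_good_research if research else 1 - p_good_research
    p_u = p_uptake_given[research] if uptake else 1 - p_uptake_given[research]
    p_impact += p_r * p_u * p_impact_given[uptake]

print(f"P(impact) under these assumptions: {p_impact:.3f}")
```

The point of a structure like this isn't the headline number; it's that the evaluator can independently scrutinise each assumed link (e.g. "does high-quality research really raise the chance of policy uptake from 10% to 40%?") even when the final impact won't be observable for decades.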