Interesting discussion. I agree incentives can be tricky and I have seen my fair share of bad evaluations and evaluation organisations with questionable practices. Some thoughts from me as an evaluator who has worked in a few different country contexts:
I don’t think M&E is either fully internal or completely external. A lot of the time, M&E orgs are hired to provide expert support and work alongside an organisation to develop an evaluation. M&E can be complex, and it can really help orgs to have experts guide them through the process and clarify their thinking. And, as you say, when we have internal buy-in we are more likely to see the findings taken up and actioned. When we see M&E only as an outside judgement commissioned by a funder, with no input from the org being evaluated, we make M&E out to be antagonistic or adversarial, which can be an unhelpful dynamic. I have seen orgs who have been unhappy with an external evaluation because they feel the evaluators made judgements when they didn’t fully understand the operating context (and how can they with often only a fly by visit to project locations) or did not properly take into account the values of the organisation or the community but rather only listened to the funder. This can be very disempowering and may not lead to positive changes.
I think many organisations do want to learn and improve but fear harsh judgement, which is quite a natural, human response. It helps to bring partners/orgs on board early and to establish a pre-evaluation plan (see here) that sets out, before the evaluation, what your standards of evidence are and what actions you will take in response to certain findings. This also gives the organisation ownership over the evaluation results. It is important to frame your evaluation so that feedback can be taken on board in a culturally appropriate way. The last thing you want is for an organisation to feel harshly judged with absolutely no input or right of reply.
We speak as if M&E is clear cut, but M&E assessments often don’t come out fully positive or negative. A lot of evaluations occupy a messy middle. There are often some good things, some not-so-good things, and some things which we think are good or bad but for which we don’t have conclusive evidence. Sensemaking can be subjective, as it often comes down to how you weigh different values, or the standards you set for what good and bad look like. These can differ between the funder, the org, the community and the evaluators. For example, if you find an education project is cost-effectively increasing test scores, but only for female students and not struggling male students, what do you say? Is it good? Is it bad? Is this difference practically significant? Should the program be changed? What if changing this makes it less cost-effective? This comes down to how you weigh different values and standards of performance.
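To make that subgroup dilemma concrete, here is a rough sketch of the arithmetic behind it. Every number is invented for illustration; the naive equal-cost split between subgroups is itself one of the contestable judgement calls being discussed:

```python
# Hypothetical education project: headline cost-effectiveness looks fine,
# but the gains are concentrated in one subgroup. All figures are invented.
program_cost = 50_000  # total spend in USD (hypothetical)

# Average test-score gain in standard deviations, by subgroup (hypothetical)
subgroups = {
    "female students": {"n": 500, "gain_sd": 0.30},
    "male students":   {"n": 500, "gain_sd": 0.02},
}

total_n = sum(g["n"] for g in subgroups.values())
# Pooled average gain across all students
overall_gain = sum(g["n"] * g["gain_sd"] for g in subgroups.values()) / total_n

print(f"Overall: {overall_gain:.2f} SD average gain, "
      f"${program_cost / (overall_gain * total_n):.2f} per student-SD")

for name, g in subgroups.items():
    # Naive assumption: cost is split in proportion to headcount
    share_cost = program_cost * g["n"] / total_n
    cost_per_sd = share_cost / (g["gain_sd"] * g["n"])
    print(f"{name}: {g['gain_sd']:.2f} SD, ${cost_per_sd:.2f} per student-SD")
```

The pooled figure (0.16 SD at $312.50 per student-SD) can look acceptable while one subgroup sits near $167 and the other near $2,500 per student-SD; the numbers alone don't tell you whether to change the program.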
I agree with Dan and think integrity is a very important internal driver. While I agree with Nick that acting this way can be more difficult for local staff, given the connections and relationships, both professional and personal, that they have to navigate, I don’t think integrity as an incentive is hard for them to compute; it is just harder for them to action. I don’t think the response should be that all evaluations should be done by non-local/international firms. This is highly disempowering, would drain local capacity, and again puts decision-making back in the hands of people often from high-income and supposedly ‘more objective’ contexts rather than building a strong local ecosystem of accountability, rehashing problematic colonial power dynamics.
These kinds of dilemmas exist everywhere. Evaluation is always a tricky tightrope walk where you are trying to balance the rigour of evidence, the weight of different values, and the broader political and relational ecosystem so that what you say is actually used and put into practice to improve programs and impact.
Wow thanks so much again for the great insights. So good to have experienced development practitioners here!
To give some background I came to EA partly because I saw how useless most NGOs are here where I live, and the EA framework answers many of the questions as to why, and some of the questions as to how to fix the problem. If I was the one doing M&E and had a magic wand, I would probably decide to shut down over 80% of NGOs and programs that I assessed.
Also, we have had a bunch of USAID- and other-funded M&E pass through many of our health centers, and they have almost never found our biggest problems or suggested good solutions, with one exception: a specifically focused financial management assessment which was actually really helpful.
I won’t respond to everything but will just make a few comments :)
Your M&E might just be better

First, the level of M&E you do might be so much better than what I have seen that some of the issues I talk about might not apply so much.
“For example, if you find an education project is cost-effectively increasing test scores, but only for female students and not struggling male students, what do you say?”
That you have even done the kind of analysis that allows you to ask this kind of great question would put you above nearly any M&E I have ever seen here in Northern Uganda. Even the concept of calculated “cost-effectiveness” as we know it is rarely (if ever) considered here. I can’t think of anyone who has assessed either the bigger health centers we operate or OneDay Health who has included this in an assessment.
I’m not sure how you would answer that question, but the fact that you have even reached that point means that, in my eyes, you are already winning to some degree. Also, this analysis is so fantastic; thanks for sharing, I hadn’t seen it before! My only comment is that I don’t think the analysis generated “mixed” results: they seem very clear to me :D!
External assessors for data collection, local assessors for analysis and change?

For an assessment like this one of Miraclefeet, I favour external assessors gathering the basic data; then perhaps local assessors could take over the analysis. Data collection needs to be squeaky clean, otherwise everything else falls down. This particular assessment should be fairly straightforward, starting by gathering these data:
1. Have the clubfoot procedures actually been done as stated? This needs a random sample of all patients allegedly worked on (say 100 randomly selected from a list of 5,000 patients provided by Miraclefeet); then each of those patients should be physically followed up in their home and checked. This isn’t difficult, and anything else is open to bias.
2. What has the “average intervention” achieved? Then those same 100 patients should be assessed for impact—what is their objective level of functionality and subjective improvement in wellbeing/quality of life after the procedure.
Once these two pieces of data are gathered, the organisational analysis and discussion you speak of can start, and that might be more productive on a local-to-local level, provided the local expertise is available.
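The audit draw in step 1 is the one piece that needs to be mechanically watertight. A minimal sketch in Python, where the patient list, its ID format, and the seed are all hypothetical; a fixed, pre-registered seed lets a third party reproduce exactly which 100 patients were selected:

```python
import random

def draw_audit_sample(patient_ids, sample_size=100, seed=None):
    """Draw a simple random sample of patient records for in-person
    follow-up. A fixed seed makes the draw reproducible, so the org,
    funder, and evaluators can all verify the same selection."""
    if sample_size > len(patient_ids):
        raise ValueError("sample larger than patient list")
    rng = random.Random(seed)
    return rng.sample(patient_ids, sample_size)

# Hypothetical org-provided list of 5,000 patient record IDs
patient_list = [f"patient-{i:04d}" for i in range(5000)]
audit_sample = draw_audit_sample(patient_list, sample_size=100, seed=42)
print(len(audit_sample))  # 100 records to physically follow up
```

The point of publishing the seed (or drawing it jointly) is that no party can quietly steer the sample toward their best-performing sites.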
Integrity there, but comes second?

I know integrity is an important driver, like you say, and I love your comment that it is easy to compute and hard to action. In my experience integrity is usually there, but it often falls behind the other “positive skew” motivating factors. I also agree that M&E shouldn’t always be done by external firms, partly for the reasons you state. An added reason is that external firms often hire lots of local people to do much of the work anyway, so the same issues I outlined remain.
A small disagreement?

“I have seen orgs who have been unhappy with an external evaluation because they feel the evaluators made judgements when they didn’t fully understand the operating context (and how can they with often only a fly by visit to project locations) or did not properly take into account the values of the organisation or the community but rather only listened to the funder.”
In my experience this response might be a red flag: a sign that the org might be dodging and weaving after failing to perform. I believe almost all organisations should do pre-specified actions A, B and C which produce impacts X, Y and Z. If those actions aren’t happening and the impact isn’t produced, then that needs to be fixed, or maybe the work needs to stop. External evaluators’ job isn’t to understand the context (how could they possibly do that? It’s not realistic. I’ve been in Uganda for 10 years and in many ways I still don’t understand the local context); that is our job, the practitioners’. Their job is to see what the org is doing and whether the impact is happening.
As a side note, I’m a little disappointed that we don’t have more engagement on this discussion. The “M&E question” is so important, but perhaps it’s not sexy and probably isn’t accessible to many.