Nice one, this instinctively makes good sense to me and I’m super excited to see how this progresses.
I’m especially interested in the follow-up evaluation. I do have doubts about the claims of “technical assistance” programs: theoretically they can have a great path to impact, but unfortunately the few I have seen first hand (n=very small, and they were less specific and focused than MiracleFeet) aren’t nearly as effective as they claim to be.
I LOVE this paragraph “Because MiracleFeet provides assistance for treatment, but not the treatment itself, it’s tougher to draw a straight line from its work to impact. We were also unsure about existing access to treatment in the three countries where MiracleFeet would be working; we had been told that treatment is scarce in the Philippines, and essentially nonexistent in Chad and Côte d’Ivoire, but we didn’t have independent confirmation of that.”
The great thing is that you will find out with your evaluation! It’s great that you are doing the before-and-after studies, and I agree it will make financial sense given the amount that you will learn. I suppose in the Philippines, if only 10% of the kids would have been treated without MiracleFeet, you will assess whether you see around a 900% increase in treatments in the region where MiracleFeet works? Chad and Côte d’Ivoire will be especially good case studies due to the apparent complete lack of treatment there, as you will be able to attribute almost every successful treatment to MiracleFeet. I’m a little surprised you couldn’t confirm treatment was completely non-existent; I would have thought just calling a few of the biggest hospitals there through a few of your global health contacts would do the trick. I suppose if you are doing a big before-and-after study though, you will answer that in due course anyway...
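To make that attribution arithmetic concrete, here’s a minimal sketch (every figure is invented for illustration, not from GiveWell or MiracleFeet):

```python
# Hypothetical before/after attribution arithmetic. All numbers are
# assumptions for illustration only.
annual_clubfoot_cases = 1000      # assumed yearly clubfoot births in the region
baseline_coverage = 0.10          # assumed: 10% treated without MiracleFeet
endline_coverage = 1.00           # assumed: near-complete coverage with MiracleFeet

baseline_treated = annual_clubfoot_cases * baseline_coverage   # 100
endline_treated = annual_clubfoot_cases * endline_coverage     # 1000

# Relative increase: (1000 - 100) / 100 = 9.0, i.e. a 900% increase
relative_increase = (endline_treated - baseline_treated) / baseline_treated

# Treatments attributable to MiracleFeet, assuming nothing else changed
attributable = endline_treated - baseline_treated              # 900
print(f"{relative_increase:.0%} increase; {attributable:.0f} attributable")
```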
I’m not sure how you plan to do external monitoring, but I hope you are doing it yourself rather than hiring external M&E orgs. Here in Uganda, anyway, those orgs are heavily incentivised to give positive reports, and there is often collusion/corruption between the org itself and the evaluators. The incentives are all wrong, as the M&E provider and the org it evaluates both benefit hugely from positive rather than negative evaluations: the org through future funding and the M&E provider through future work.
Because of this I’ve seen a couple of hopeless programs here get bizarrely good evaluation reports. Even something that might seem super simple, like following up on clubfoot treatment success rates or counting successful treatments, could easily be forged. Here in Uganda I’d almost go as far as to say you’d struggle to get an objective and accurate report through a local M&E provider, although many will disagree with me here.
I think there are a number of surgeries and procedures that could potentially be extremely cost-effective to fund, though perhaps not at this kind of massive scale. You seem to suggest that 1 in 800 isn’t super common, but in terms of major congenital or genetic disabilities that are curable, that’s about as common as it gets. I would compare it with Down syndrome, which has a similar prevalence to clubfoot; imagine the impact if we could cure that!
This is super exciting—I really hope it works and am super stoked you are funding it!
[Disclaimer: I’m the Chief Economist of IDinsight, an M+E provider that has worked with GiveWell and many others. I have a LOT of experience with evaluators being pressured to sugarcoat results, or lack thereof.]
Strong disagree on this conclusion that M+E providers are inherently biased.
Yes, there are situations where M+E providers have incentives that can lead to bias. For instance, if an NGO hires an M+E provider to do an external evaluation of itself, the NGO is therefore the ‘client’ of the researchers. This can be problematic, since the NGO will need to approve deliverables before payments are made. I’ve been involved in these situations and it is tricky.
But in general, arrangements can be made to align incentives with the truth. For instance, if a funder (like GiveWell) hires an M+E provider to do an evaluation of one of its grantees, the incentives of the M+E provider are aligned with the funder, who hopefully would like to know the unvarnished truth. We’ve done numerous evaluations for GiveWell (most notably the New Incentives RCT) and have never felt any incentive to skew results one way or another.
From an organizational perspective, a well-run evaluation organization has much stronger long-term incentives to have a reputation for being honest, transparent, and truth-seeking, rather than getting repeat business from any particular client.
Thanks so much Dan, honoured to get a reply from someone with so much experience on the topic who is doing such important work. There’s also a decent chance that IDinsight has higher standards than many other orgs.
I agree with a decent amount of this. I agree that an NGO hiring its own M&E provider directly is “problematic” and “tricky”, only I would use stronger language ;). Personally I think it’s a waste of resources for an NGO to hire an external M&E provider: the incentives are so skewed that I don’t think there’s a lot of added value compared to just internal M&E. Yes, incentives are of course all wrong there too, but at least the knowledge and understanding of the operation will be better than the external provider’s, and the uptake for reform by the org might be better if it is driven from the inside as well.
I also agree that it is far better if a funder commissions the M&E provider. At the management level, the incentives of the funder and the M&E organisation are likely to be aligned. I’m sure you don’t feel any incentive at your level to skew a result, but despite that, from the evidence I have seen, the positive skew is very hard to remove due to unfortunately skewed incentives at the local level.
“We’ve done numerous evaluations for GiveWell (most notably the New Incentives RCT) and have never felt any incentive to skew results one way or another.” I’m sure you don’t at that top level, but at the local, on-the-ground assessor level it is very difficult to avoid positive skew.
From both my personal experience and theory (see incentives below), I think it is likely there will be some degree of positive skew even among the best M&E orgs. This might not mean the orgs shouldn’t exist, but we should be doing much more to mitigate the on-the-ground positive bias.
Unfortunately there are strong, almost unidirectional incentives for M&E providers at the local level to skew an M&E assessment positively. I can’t think of clear incentives towards a negative assessment; maybe someone else can?
Incentives for external M&E positive bias (there’s a lot of overlap between these)
Belief that a positive assessment will bring more work. Although this belief may not be true, the most obvious reasoning I have seen for skewing M&E positive often goes something like...
Positive assessment = more funding for NGO locally = more M&E work in future for me
Negative assessment = less funding for NGO locally = less M&E work in future for me
The equation which you hope your employees will abide by might be (correct me if I’m wrong):
Honest assessment = Correct funding for NGO = increased trust in assessment org = more work for assessment org = more work for all our staff including me
But I think that chain of reasoning is VERY difficult to compute for the local people doing the assessing. There’s also an aspect of prisoner’s dilemma here, where you rely on everyone assessing in your org being on board with this long-term view in order for it to work for you personally.
Long-term relationship maintenance. Often the pool of educated people in NGO jobs and M&E jobs isn’t big, with a lot of overlap, especially in a low-income, low-education situation. Here in Northern Uganda, educated people are likely to know each other. People are incentivised against negative assessments, because they may make things relationally harder with their friends and community in future. This is understandable!
Short- and long-term job security. I know one absolute legend here who worked in M&E within an org, and straight up whistle-blew against corruption to a funder. Not only was he thrown out of that organisation, but no other NGO here would hire him for years because they feared he might do the same to them; he was even told that directly in two interviews! Eventually he left this part of Uganda to get a job somewhere else where he wasn’t known. Fear for long-term job security and employability is a strong incentive against reporting a negative assessment, especially reporting extremely negative aspects like corruption or an org not actually having done work it claimed it did.
Incentives for external M&E negative bias (I genuinely find it hard to think of good ones)
Perhaps if an organisation really is amazing, the assessor could overemphasise some negative aspects to provide a sense of balance.
You don’t like the organisation or the people in the organisation so you give a disingenuous negative response.
I won’t get into ways of mitigating these biases now (my comment is too long already haha), but I think the natural lean towards positive skew in M&E is quite heavy.
Would love to hear specific rebuttals to this if you have time, but all good if you don’t!
Thanks again
Nick.
Interesting discussion. I agree incentives can be tricky and I have seen my fair share of bad evaluations and evaluation organisations with questionable practices. Some thoughts from me as an evaluator who has worked in a few different country contexts:
I think M&E is not just internal or completely external. A lot of the time, M&E orgs are hired to provide expert support and work alongside an organisation to develop an evaluation. M&E can be complex, and it can really help orgs to have experts guide them through this process and clarify their thinking. And as you say, when we have internal buy-in we are more likely to see the findings taken up and actioned. When we only see M&E as an outside judgement commissioned by a funder with no input from the org being evaluated, we make M&E out as antagonistic or adversarial, which can be an unhelpful dynamic. I have seen orgs who have been unhappy with an external evaluation because they feel the evaluators made judgements when they didn’t fully understand the operating context (and how can they, with often only a fly-by visit to project locations), or did not properly take into account the values of the organisation or the community but rather only listened to the funder. This can be very disempowering and may not lead to positive changes.
I think many organisations do want to learn and improve but fear harsh judgement, which is quite a natural, human response. I think it helps to bring partners/orgs on board early and to establish a pre-evaluation plan (see here) highlighting, before the evaluation, what your standards for evidence are and what actions you will take in response to certain findings. This also gives the organisation ownership over the evaluation results. I think it is important to frame your evaluation so that feedback can be taken on board in a culturally appropriate way. The last thing you want is for an organisation to feel harshly judged with absolutely no input or right of reply.
We speak as if M&E is clear-cut, but M&E assessments often don’t come out fully positive or negative. A lot of evaluations occupy a messy middle. There are often some good things, some not-so-good things, and some things which we think are good or bad but for which we don’t have conclusive evidence. Sensemaking can be subjective, as it often comes down to how you weigh different values or the standards you set for what good or bad look like. These can differ between the funder, the org, the community, and the evaluators. For example, if you find an education project is cost-effectively increasing test scores, but only for female students and not struggling male students, what do you say? Is it good? Is it bad? Is this difference practically significant? Should the program be changed? What if changing this makes it less cost-effective? This comes down to how you weigh different values and standards of performance.
I agree with Dan and think integrity is a very important internal driver. While I agree with Nick that acting this way can be more difficult for local staff, given the connections and relationships, both professional and personal, that they have to navigate, I don’t think integrity as an incentive is hard for them to compute; it is just harder for them to action. I don’t think the response should be that all evaluations should be done by non-local/international firms. This is highly disempowering, would drain local capacity, and again puts decision-making back in the hands of people often from high-income and supposedly ‘more objective’ contexts, rather than building a strong local ecosystem of accountability, re-hashing problematic colonial power dynamics.
These kinds of dilemmas exist everywhere. Evaluation is always a tricky tightrope walk where you are trying to balance the rigour of evidence, the weight of different values, and the broader political and relational ecosystem so that what you say is actually used and put into practice to improve programs and impact.
Wow thanks so much again for the great insights. So good to have experienced development practitioners here!
To give some background I came to EA partly because I saw how useless most NGOs are here where I live, and the EA framework answers many of the questions as to why, and some of the questions as to how to fix the problem. If I was the one doing M&E and had a magic wand, I would probably decide to shut down over 80% of NGOs and programs that I assessed.
Also, we have had a bunch of USAID and other funded M&E pass through many of our health centers, and they have almost never found our biggest problems or suggested good solutions, with one exception: a specifically focused financial management assessment which was actually really helpful.
I won’t respond to everything but will just make a few comments :)
Your M&E might just be better
First, the level of M&E you do might be so much better than what I have seen that some of the issues I talk about might not apply so much.
“For example if you find an education project is cost-effectively increasing test scores, but only for female students and not struggling male students what do you say?”
That you have even done the kind of analysis that allows you to ask this kind of great question would put you above nearly any M&E that I have ever seen here in Northern Uganda. Even the concept of calculated “cost-effectiveness” as we know it is rarely (if ever) considered here. I can’t think of anyone who has assessed either the bigger health centers we operate or OneDay Health who has included this in an assessment.
I’m not sure how you would answer that question, but the fact that you have even reached that point means that in my eyes you are already winning to some degree. Also, this analysis is fantastic, thanks for sharing; I haven’t seen it before! My only comment is that I don’t think the analysis generated “mixed” results; they seem very clear to me :D!
External assessors for data collection, local assessors for analysis and change?
For an assessment like this one of MiracleFeet, I favour external assessors to gather the basic data; then perhaps local assessors could take over on the analysis? Data collection needs to be squeaky clean, otherwise everything else falls down. This particular program should be fairly straightforward to assess by first gathering these data:
1. Have the clubfoot procedures actually been done as stated? This needs a random sample of all patients allegedly worked on (say 100 randomly selected from a list of 5,000 patients provided by MiracleFeet); each one of those should then be physically followed up in their home and checked (see the sampling sketch below). This isn’t difficult, and anything else is open to bias.
2. What has the “average intervention” achieved? Those same 100 patients should then be assessed for impact: their objective level of functionality and their subjective improvement in wellbeing/quality of life after the procedure.
Once these two pieces of data are gathered, the organisational analysis and discussion you speak of can start, and that might be more productive on a local-to-local level, provided the local expertise is available.
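As a minimal sketch of the sampling in step 1 (the patient count, ID format, and seed are all invented for illustration, not MiracleFeet data):

```python
import random

# Draw a simple random sample of reported patients for in-person follow-up.
# A fixed, published seed lets a second party independently re-draw and
# verify the same sample.
rng = random.Random(42)

# e.g. 5,000 patients reported as treated (placeholder IDs)
patient_ids = [f"patient_{i:04d}" for i in range(1, 5001)]

# 100 selected at random for home-visit verification, per step 1 above
audit_sample = rng.sample(patient_ids, k=100)

# Field staff would then verify each sampled case in person: did treatment
# actually happen (step 1), and what are the functionality and wellbeing
# outcomes (step 2)?
print(sorted(audit_sample)[:5])
```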
Integrity is there but comes second?
I know integrity is an important driver like you say, and I love your comment that it is easy to compute and hard to action. In my experience integrity is usually there, but often falls behind the other “positive skew” motivating factors. Also I agree that M&E shouldn’t always be done by external firms partly because of the reasons you state. An added reason is that external firms often hire lots of local people to do much of the work anyway, so the same issues I outlined remain.
A small disagreement?
“I have seen orgs who have been unhappy with an external evaluation because they feel the evaluators made judgements when they didn’t fully understand the operating context (and how can they with often only a fly by visit to project locations) or did not properly take into account the values of the organisation or the community but rather only listened to the funder.”
In my experience this response might be a red flag, a sign that the org might be dodging and weaving after failing to perform. I believe almost all organisations should do pre-specified actions A, B, and C which provide impacts X, Y, and Z. If those actions aren’t happening and the impact isn’t produced, then that needs to be fixed or maybe the work needs to stop. External evaluators’ job isn’t to understand the context (how could they possibly do that? It’s not realistic. I’ve been in Uganda for 10 years and in many ways I still don’t understand the local context); that is our job as practitioners. Their job is to see what the org is doing and whether the impact is happening.
As a side note, I’m a little disappointed that we don’t have more engagement on this discussion. The “M&E question” is so important, but perhaps it’s not sexy and probably isn’t accessible to many.
Hi Nick, thanks for the thoughtful response. I think you make a lot of good points, and I agree that there are numerous incentives that can lead an M+E provider to bias results positively. That’s why there is a ton of bad M+E out there.
One main reaction: for an employee who works in an M+E org, there is arguably no worse situation than being pressured to skew your results positively, or even worse, taking on projects where you know a certain result is expected by your clients. It makes you feel your work is meaningless, and really sucks. And when you are put in this situation, you sure as hell don’t want to work for the same client again.
Yes, I hear you that for bean-counters in an organization (or those who get dividends in a for-profit org), there are strong incentives to make clients happy and get more contracts. But I think that the job-satisfaction incentive for rank-and-file employees skews the other way. And in my experience, it is this latter incentive toward truth-telling that has dominated in most cases.
Perhaps, like the rules for auditors established after accounting scandals, funders should adopt a policy requiring changes in the M&E provider at certain intervals, maybe with some random selection of the interval? Knowing that next year’s assessment may be done by a different firm may create a disincentive for gaming the system (and a pathway for easier detection of any gaming). That may only work for projects with longer-term M&E efforts, though.
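One rough way to picture that rotation policy (the firm names, interval range, and project horizon are all invented for illustration):

```python
import random

def rotation_plan(firms, total_years, seed=None):
    """Assign evaluators over a multi-year engagement, switching firms at
    randomized 1-3 year intervals so no firm can count on repeat work."""
    rng = random.Random(seed)
    plan, year, current = [], 0, None
    while year < total_years:
        # Never reappoint the incumbent, and keep the switch date unpredictable.
        current = rng.choice([f for f in firms if f != current])
        term = rng.randint(1, 3)
        plan.append((current, year, min(year + term, total_years)))
        year += term
    return plan

# e.g. a 10-year project rotated among three hypothetical firms
print(rotation_plan(["Firm A", "Firm B", "Firm C"], total_years=10, seed=7))
```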
Hi, Nick,
Thanks for your comments, your insight into this grant, and your support!
We do expect to get input from local hospital staff on existing treatment coverage through the baseline surveys. The monitoring grant will fund the creation of a sampling frame that includes both public and private health facilities, which we think will yield more complete data than contacting hospitals through our partners.
We agree that potential bias from external evaluators is a risk for the reasons you’ve mentioned. While we won’t be involved in the selection of evaluators, we plan to do the following to mitigate that risk:
Meet with representatives of all the firms and vet them at a high level so we can identify red (and green) flags.
Work closely with both MiracleFeet and the external evaluators throughout the process, so that we can ask questions about and provide input on their research strategies along the way.
We don’t think this will completely eliminate uncertainty about the quality of monitoring results, but we expect it will help. We also think there is some value to be gained from working with evaluators who have a strong familiarity with the local context.
I hope that’s helpful!
Best,
Miranda
Thanks for the reply, and most of this makes sense to me.
I’m not sure I understand how you won’t be involved in the selection of evaluators; who will do that exactly? Or maybe you mean you won’t select the on-the-ground evaluators, as in that will be done by the company, which makes sense.
“The monitoring grant will fund the creation of a sampling frame that includes both public and private health facilities, which we think will yield more complete data than contacting hospitals through our partners.” This could work (high risk), but it seems like a roundabout and inefficient way to do things. Following up on that data from multiple hospitals in West Africa, for example, could be a nightmare.
I would have thought that with this kind of massive funding and the relatively small number of people who get procedures (in the thousands), MiracleFeet could maintain a database of the contact details of every kid who gets help; this wouldn’t be hard and would make M&E so much easier for everyone. Hospitals might well collect substandard information which makes proper follow-up impossible, spoiling your M&E efforts.
If I was going to give one piece of advice on M&E, it would be that your evaluators should personally follow up a completely random sample of individuals who have been treated, both to check that the interventions actually happened and that the claimed improvement is real. There should be a list of names, home locations, and phone numbers of every single patient who received treatment; I think if that’s not there and individuals can’t be followed up for this kind of intervention, then meaningful M&E becomes close to impossible.
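As a minimal sketch of what each record in such a database might hold (the field names are illustrative assumptions, not from any actual MiracleFeet system):

```python
from dataclasses import dataclass

@dataclass
class TreatedPatient:
    """One row in the hypothetical treatment registry suggested above."""
    patient_id: str       # unique ID assigned at enrolment
    name: str
    home_location: str    # village/district detail sufficient to find the household
    phone_number: str     # caregiver's phone, for scheduling follow-up visits
    treatment_date: str   # ISO date, e.g. "2023-05-14"
    facility: str         # clinic where casting/bracing was provided

# Evaluators could then draw their random audit sample directly from these records.
```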
Hi, Nick,
Yes, to clarify, MiracleFeet is selecting the on-the-ground evaluator that will conduct the monitoring in each location, although GiveWell will lightly vet MiracleFeet’s choices and meet with each evaluator.
The primary purpose of the monitoring grant is to understand how many children are treated for clubfoot both with and without MiracleFeet’s support. So, although MiracleFeet has records of children treated through facilities it’s supported, we also want an assessment of baseline treatment coverage before MiracleFeet launches its program (or expands it, in the case of the Philippines). We do plan to incorporate some form of data audit as part of endline activities; we’ll work out the details of that at a later date.
Thanks again for your interest in this and for taking the time to ask questions!
Miranda