Thanks a lot for writing this up and sharing your evaluations and thinking!
I think there is lots of value in on-the-ground investigations, and I am glad for the data you collected to shine more light on the Cameroonian experience. That said, reading the post I wasn’t quite sure what to make of some of your claims and take-aways, and I’m a little concerned that your conclusions may be misrepresenting part of the situation. Could you share a bit more about your methodology for evaluating the cost-effectiveness of different organisations in Cameroon? What questions did these orgs answer when they entered your competition? What metrics and data sources did you rely on when evaluating their claims and efforts through your own research?
Most centrally, I would be interested to know: 1) Did you find no evidence of effects or did you find evidence for no effect[1]?; and 2) Which time horizon did you look at when measuring effects, and are you concerned that a limited time horizon might miss essential outcomes?
If you find the time, I’d be super grateful for some added information and your thoughts on the above!
[1] The two are not necessarily the same, and there is a danger of misrepresentation and misleading policy advice when equating them uncritically. This has been discussed in the field of evidence-based health and medicine, but I think it also applies to observational studies on development interventions like the ones you analyse: Ranganathan, Pramesh, & Buyse (2015), “Common pitfalls in statistical analysis: ‘No evidence of effect’ versus ‘evidence of no effect’”; Vounzoulaki (2020), “‘No evidence of effect’ versus ‘evidence of no effect’: how do they differ?”; Tarnow-Mordi & Healy (1999), “Distinguishing between ‘no evidence of effect’ and ‘evidence of no effect’ in randomised controlled trials and other comparisons”.
Great points, Sarah.
Thanks for your work, EffectiveHelp—Cameroon. I think it would be great if you could share the data underlying your analysis.
Our energetic writing may have carried us away; we had very limited tools and information. It would be more accurate to say the first: no evidence of effects.
In fact, we did not have the tools or data to look rigorously into all projects and their intended and unintended effects.
We had two layers:
In the first layer, we assumed projects do exactly what the organizations claim they do, and simply established a possible output per dollar (or rather, per CFA franc). If an organization had usable output or outcome data, we used that; if not, we sometimes drew on research on an equivalent program (e.g., the GBV example). Within each category, it is then easy to compare whose output per dollar is cheaper, still under some working assumptions.
From there, the differences between organizations were quite large, and we had some budget to do data collection for the six most promising (thanks to the EA Infrastructure Fund). There we simply tried to confirm the claimed effect by interviewing beneficiaries of the assistance. In two cases the claimed effect wasn’t visible at the time of data collection, which gave us two finalists: economic, because in the other project most beneficiaries did not remember participating, and health, because in the other project participants appeared to be worse off than before the project. In the human rights category there were two very similar projects as finalists, and one had slightly stronger effects.
We were clearly biased toward small budgets, so the overall winner had a big advantage because it was literally an intervention for one family. We think this may still be accurate: it is plausible that there are great opportunities to do good at small scale in developing countries, particularly through cash.
We also had limitations in comparing across sectors, but the three finalists got more or less the same: a badge, a framed award, some feedback they can use with potential donors, and a subscription to an online newsletter of funding opportunities. (We recognized that the human rights final was tighter, so we added the runner-up to the newsletter.) We decided to do more for the winner because we thought it was the only one meeting cost-effectiveness expectations and we couldn’t find much better in Cameroon, but that was outside the contest.
Going back to your question: if I had to guess, I would say these projects may well have effects that we did not get to see. I am unsure whether those effects are achieved cost-effectively, because they are buried in so much else.
Thanks for explaining! In this case, I think I come away far less convinced by your conclusions (and the confidence of your language) than you seem to be. I (truly!) find what you did admirable given the resources you seem to have had at your disposal and the difficult data situation you faced. And I think many of the observations you describe (e.g., about how orgs responded to your call; about donor incentives) are insightful and well worth discussing. But I also think that the output would be significantly more valuable had you added more nuance and caution to your findings, as well as a more detailed description of the underlying data & analysis methods.
But, as said before, I still appreciate the work you did and also the honesty in your answer here!
In my eyes, this is an unusually strong effort to judge these projects. It’s obviously far from perfect, but a better effort than most NGOs or competitions would make.