In our energetic writing, we may have gotten carried away; we had very limited tools and information. It would be more accurate to say the first: no evidence of effects.
In fact, we did not have the tools or data to look rigorously into all projects and their intended and unintended effects.
We had two layers:
In the first layer we assumed projects did exactly what the organizations claimed, and just established a possible output per dollar (well, in this case, per CFA franc). If they had usable output or outcome data, we used that; if not, we sometimes used research on an equivalent program (e.g. the GBV example). Within each category, it is then relatively easy to compare which project delivers its output more cheaply per dollar, still under some working assumptions.
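To give a purely illustrative example with made-up numbers (not from the actual contest): if one project claims to have reached 200 households on a budget of 2,000,000 FCFA (10,000 FCFA per household) and another claims 50 households on 1,500,000 FCFA (30,000 FCFA per household), the first looks cheaper per unit of output, conditional on both claims being accurate.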
From there, the differences between organizations were quite huge, and we had some budget to do data collection for the six most promising projects (thanks, EA Infrastructure Fund). There we just tried to confirm the claimed effect by interviewing beneficiaries of the assistance. In two cases the claimed effect wasn't visible at the time of data collection, and that gave us two finalists (economic, because in the other project most beneficiaries did not remember participating, and health, because in the other project participants appeared to be worse off than before the project). In the human rights category there were two very similar projects as finalists, and one had slightly stronger effects.
We were clearly biased toward small budgets, so the overall winner had a big advantage because it was literally an intervention for one family. We think this may still be accurate anyway, and it is plausible that there are great opportunities to do good at small scale in developing countries, particularly through cash.
We also had limitations in comparing across sectors, but the three finalists got more or less the same (a badge, a framed award, some feedback they can use with potential donors, and a subscription to an online newsletter of funding opportunities; we recognized the human rights final was tighter and added the runner-up to the newsletter as well). We decided to do more for the winner because we thought it was the only one meeting cost-effectiveness expectations and we can't find much better in Cameroon, but that was outside the contest.
Going back to your question: if I have to guess, these projects may well have effects that we did not get to see. I am unsure whether those effects are achieved in a cost-effective manner, because they are buried in so much else.
Thanks for explaining! In this case, I think I come away far less convinced by your conclusions (and the confidence of your language) than you seem to. I (truly!) find what you did admirable given the resources you seem to have had at your disposal and the difficult data situation you faced. And I think many of the observations you describe (e.g., about how orgs responded to your call; about donor incentives) are insightful and well worth discussing. But I also think that the output would be significantly more valuable had you added more nuance and caution to your findings, as well as a more detailed description of the underlying data & analysis methods.
But, as said before, I still appreciate the work you did and also the honesty in your answer here!
In my eyes, this is an unusually strong effort to judge these projects. It's obviously far from perfect, but a better effort than most NGOs or competitions would make.