Thanks for this! It’s exciting to see people try out new things and experiment with ways to improve the world.
Some thoughts on the information you present in this post, and what it seems to be missing:
1. Based on the information presented in this post alone, I’m not confident in the theory of change you present, specifically the (crucial) link between “training HWs” → “reduction in deaths”, or in your team’s plans to measure the impact of training on:
a. The actual change in HW behavior (e.g. they are now giving better advice and treatment in real life / in the field)
b. Whether the change in HW behavior actually saves lives (e.g. the improvement in advice/treatment leads to X lives saved)
[EDIT, Sep 13 2022: The paragraph after this one is wrong; see the excerpt below where Marshall links to the post about neonatal care, which includes studies of this kind of intervention directly measuring deaths averted.]
My analysis of existing research studies shows that training HWs to properly care for newborn babies is likely to be highly cost-effective, with an average cost of $59 per DALY averted ($100 per DALY averted is sometimes cited as a benchmark for highly effective interventions)....
I still think my points apply to Marshall’s team’s specific implementation of the intervention.
[End Edit]
The closest I found in this post was a discussion of evidence related to skill gains & knowledge retention. None of the following mention either (a) or (b).
Online neonatal care training leads to significant knowledge and skills gains among HWs.
The infection control training I’ve been working on has high completion rates, learning gains comparable to those seen in more resource-intensive in-person training, and very positive learner feedback.
Recent research shows that a short online training in blood pressure measurement improves HWs’ clinical skills.
The second study you cite doesn’t mention actual skills change, which seems more important than knowledge retention. The third study you cite is the one mentioned above, which is in a US context, and I don’t know how well this evidence transfers to an LMIC context.
However, you do mention a lack of evidence in a previous post, “New cause area: training health workers to prevent newborn deaths”:
I was tempted to title this piece “New cause area: health workforce development,” but the reality is that there’s much better data to quantify cost-effectiveness on a dollars-per-DALY basis in the niche of neonatal care training.[3] I believe that this much narrower cause area is just the tip of the iceberg – it’s easy to see and quantify but also indicative of a much bigger underlying and unaddressed problem. Fortunately, modest investments in neonatal care training might be cost-effective on their own while also generating transferable lessons applicable to the bigger problem.
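As an aside, for readers less familiar with the dollars-per-DALY framing used in both posts, here is a minimal back-of-the-envelope sketch of the arithmetic. The cost and DALY figures below are placeholders I made up to roughly reproduce the headline number, not data from either post or the underlying studies:

```python
# Illustrative arithmetic only: how a "$ per DALY averted" figure is formed and
# compared to a benchmark. Both inputs are made-up placeholders, not data from
# the post or the studies it cites.
program_cost_usd = 248_000    # hypothetical total cost of a training program
dalys_averted = 4_200         # hypothetical DALYs averted by that program

cost_per_daly = program_cost_usd / dalys_averted
benchmark_usd_per_daly = 100  # benchmark sometimes cited for highly effective interventions

print(f"Cost per DALY averted: ${cost_per_daly:.0f}")  # ~$59
print(f"Beats the benchmark: {cost_per_daly < benchmark_usd_per_daly}")
```

The crux of my comment is that the dalys_averted input (i.e. deaths actually averted by this specific implementation) is the part that needs to be measured; the division itself is trivial.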
2. Based on this post, it’s not clear to me how you / your team are prioritizing which countries or regions to operate in, which health problems to work on first, and why. The next steps you outlined in the previous post seemed very promising, and plausibly neonatal care could be a good place to start, but this is not mentioned in this post.
3. Because the funding amount is quite large ($4 million), it would be good to explain why you think this is the right amount to ask for right now. Based on my knowledge of early-stage global health startup NGOs, I’d expect that you could probably create a strong case for impact (e.g. run an RCT) for somewhere in the range of USD $100,000–$500,000 (very, very rough estimate; someone please correct me if this is way off!).
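To give a sense of what drives the cost of an RCT with a mortality endpoint, here is a standard two-proportion sample-size calculation. Every input below (baseline mortality, effect size, significance level, power) is an assumption I chose for illustration, not a proposed study design:

```python
# Rough illustration of why the sample size (and hence cost) of a mortality-endpoint
# RCT depends heavily on the baseline rate and the effect you hope to detect.
# All inputs are illustrative assumptions, not figures from the post or the cited studies.
from statistics import NormalDist

p_control = 0.027   # assumed baseline neonatal mortality (27 per 1,000 live births)
p_treated = 0.022   # assumed mortality if training works (~20% relative reduction)
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.96
z_beta = NormalDist().inv_cdf(power)           # ≈ 0.84

variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
n_per_arm = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2

print(f"~{n_per_arm:,.0f} births per arm")     # on the order of 15,000
```

Measuring intermediate outcomes instead (e.g. observed HW behavior) would need far fewer participants, which is part of why my cost range above is so rough.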
Thanks for this feedback! This is exactly why I posted, so before I provide any specific responses to your points, please know that I appreciate all of the questions and suggestions and I’m already thinking of how they could be addressed in a future version of this proposal.
1. I appreciate your point that the key step in the theory of change is not clear—and I think this is not due to a gap in the data itself but instead due to a gap in my presentation of the evidence. The key supporting evidence is linked out from this statement:
My analysis of existing research studies shows that training HWs to properly care for newborn babies is likely to be highly cost-effective, with an average cost of $59 per DALY averted ($100 per DALY averted is sometimes cited as a benchmark for highly effective interventions)....
The linked post cites six studies that show reductions in mortality due to HW training. While there are remaining reasons for skepticism, I think these six studies support this key step in the theory of change, at least for some types of training. Regarding your sub-points on point (1), I accept the feedback that we can and should provide more detail on the evaluation in a future version of this. The six studies provide pretty clear guidance on the type of data we would collect.
2. I agree that a roadmap of regions / countries / priority courses would be helpful to include and can add this to a future version. Thanks for the suggestion. We’d want to start with topics that have the strongest existing evidence base (such as neonatal care and management of childhood illness).
3. The dollar amount may seem high, but this is a technology development project. I think it will be very difficult to build a truly excellent learning platform that is tailored to this target audience without attracting top engineering talent, and that gets expensive. As I mentioned in the post, we’ve already done substantial piloting on a shoestring and I plan to continue to do that! I’ll think further about whether we can present a tiered approach, with additional pilots done with an MVP.
Thanks for responding!
I missed that link! Thanks for flagging. I think when I read that, it wasn’t clear to me that the linked post had explicit examples of reductions in mortality. I’ll edit my first comment so that people know it was included.
That makes sense, and I think it could make a big difference to e.g. potential funders reading this if that evidence were presented more clearly.
I think the most important thing is demonstrating that your specific implementation of the solution results in deaths averted, and this could be done at a lower cost. Once that evidence exists, it makes sense to scale up / professionalize the platform.