Good post Nick. I think the question mark about the timing of the experiment, considering cuts to many robustly good programmes, is a particularly good one.
I don’t think the Centre for Effective Aid Policy is a particularly accurate comparison, as I think there’s a significant difference between the likely effectiveness of a new org lobbying Western governments to give money to different causes (against sophisticated lobbyists for the status quo and government-defined “soft power” priorities) and orgs with established relationships providing technical recommendations to improve healthcare outcomes to LEDC governments that actually express interest in using them. I think the lack of positive findings in the wider literature links you provide is more interesting, although I suspect the outcomes are highly variable depending on level of government engagement, competence of organizations, magnitude of the problems they purport to solve, and whether the shifts they are promoting are even in the right direction. It would be interesting in that respect to see how GiveWell evaluated the individual organizations. I do agree that budgeting dashboards don’t necessarily seem like an area relatively highly paid outsiders are best placed to optimise.
I suspect the high cost reflects use of non-local staff, which of course has a mixture of advantages and disadvantages beyond the higher cost.
I’m sceptical of the value of RCTs between nations that have different healthcare policies, standards, and bureaucracies to start with (particularly as I don’t think there’s a secular global trend in the sort of outcomes TSUs are supposed to achieve, and collecting data on some of them feels like it would involve nearly as much effort as actually providing the recommendations). A lot of policy and government optimization work—effective or otherwise—is hard to RCT, especially at the national level. Which doesn’t mean there can’t be more transparency and non-RCT metrics.
Thanks for this fantastic comment! - yes I agree my comparison with the Centre for Effective Aid Policy is fairly weak. I was trying to find a real-life example of moving governments being very difficult, and I could have found a more analogous one. I’m not sure in this case that countries “asking” is necessarily a signal that shifts are more likely. I think there are lots of motives for governments asking for help here, including employing local friends with lucrative salaries, and hoping these relationships might bring in more donor money. But maybe I’m too cynical!
I agree the outcomes will vary based on a huge variety of things including the factors you mention. I think we need better indications though of which of these might lead to effective technical support. It’s tricky and needs more decent research.
If there were more non-local staff you would be right, but from the podcast it did seem they were planning on hiring mostly local people?
You’re right on RCTs (have edited the post), I got that wrong, but I still think we can use routinely collected data on health outcomes to see if health metrics have improved, before and after at least. I don’t think it needs to be too expensive to assess.