Jamie, thank you so much for this thoughtful and constructive feedback! I really appreciate you taking the time to engage with this so carefully.
You’re absolutely right that these claims need more substantiation. I made a deliberate choice to keep the initial post relatively brief to give people baseline knowledge and invite engagement rather than overwhelming readers with data upfront. But I’m glad you’re pushing me to go deeper.
Let me provide more detail on each dimension, while being honest about where the evidence is strong and where it’s limited:
On Scale: According to UN OCHA’s December 2024 report, an estimated 30.4 million people need assistance in 2025, nearly two-thirds of the country’s population and an increase of 5.6 million people from 2024. The ACAPS October 2024 report notes that conflict-induced displacement has affected more than ten million people, while livelihoods, markets, and services across the country have collapsed.
According to UN OCHA’s August 2024 report, famine conditions are now prevalent in the Zamzam camp for internally displaced persons in North Darfur State, marking the first such report globally since 2017, with the IPC Famine Review Committee concluding that thousands more people are likely experiencing similar conditions in 13 other areas at risk of famine.
The IRC’s 2025 Emergency Watchlist ranks Sudan at the top for the second year running, describing it as “the largest humanitarian crisis ever recorded,” accounting for 10% of people in humanitarian need globally despite being home to just 1% of the global population.
On Neglectedness: This is where I think the case is strongest. The SSHAP October 2024 case study notes that in April 2024, donors came together in Paris in an effort to raise the USD 2.7 billion that the UN estimated was required. I helped prepare US government officials for that meeting, and I remember how unbelievably difficult it was for donors to agree on coordinated action. But current estimates suggest funding stands at just 41% of what is needed. Media coverage, political will, and funding all remain low relative to the magnitude of the crisis.
According to UN OCHA’s July 2024 dashboard, by the end of July the 2024 Sudan Humanitarian Needs and Response Plan was still funded at less than 40 per cent of the $2.7 billion required.
The Norwegian Refugee Council’s 2023 report found that Sudan was among the nine most underfunded crises globally, with funding coverage between 2019 and 2023 averaging 15 percent lower than other humanitarian response plans.
And critically: the SSHAP report notes that in December 2023, research indicated that only 16% of aid was able to reach those in need, with access most restricted in the besieged Khartoum, Darfur and Kordofan states.
On Tractability and Cost-Effectiveness: This is where I need to be most honest about evidence limitations. I cannot provide you with a GiveWell-style cost-per-life-saved calculation. Here’s what I can tell you from the independent research reports:
Efficiency indicators:
The ACAPS report documents that ERR volunteers have worked unpaid for over two years, meaning overhead costs are near-zero
Some ERRs in Khartoum reported that intermediary NGOs would take a significant percentage (often 10%) of grants as administrative fees while doing little operational work, with the ERRs themselves carrying out implementation, including running kitchens and clinics
ERRs implement informal yet effective accountability measures, such as public complaint handling and transparent procurement rules, including the formation of procurement committees
Access advantage:
ERRs’ adaptability, presence in conflict areas, and proximity to communities have enabled them to respond where other national and international responders could not
This means the counterfactual impact is potentially very high: these aren’t services that duplicate what others could provide; they are often the only services reaching certain populations
Demonstrated scale:
By October 2024, an estimated 360 ERRs were operating across seven states
Between 2023 and 2024, ERRs provided first aid, delivered medicines (including for chronic diseases), mapped safe evacuation routes, supported IDPs in shelters, established communal kitchens, distributed food, and operated hospitals and local health facilities
The Honest Comparison to Top Cause Areas: You asked for explicit quantitative comparisons. I can’t provide them at the level of rigor EA typically expects, and I want to be clear about why:
Global health interventions (malaria nets, deworming, etc.) have decades of RCT evidence. I cannot compete with that level of certainty.
What I can argue: In a context where two-thirds of a country’s population needs humanitarian assistance and famine conditions have been confirmed, volunteer networks with near-zero overhead, operating where no one else can reach, might have cost-effectiveness in the same ballpark as top interventions. But I’m making an educated argument based on the available evidence, not proving it with RCTs.
The epistemic challenge: This raises a real question about EA’s framework. Should we only fund interventions we can measure with near-certainty? Or should we have some capacity for high-uncertainty, high-potential-impact interventions during acute emergencies?
What Would Stronger Evidence Look Like?
Honestly? It would probably require EA funding a proper evaluation. You could fund:
Retrospective analysis of ERR operations with health economists
Prospective monitoring of specific interventions
Comparative analysis of ERR vs. traditional NGO cost structures in Sudan
But there’s a chicken-and-egg problem: we can’t get that evidence without some initial funding, but we can’t get funding without that evidence.
My Ask: I’m not claiming Sudan definitively beats GiveWell top charities on cost-effectiveness. I’m arguing it’s plausible enough that it warrants serious evaluation, and that the combination of massive scale + extreme neglectedness + demonstrated local capacity should be enough to trigger that evaluation.
What would you need to see to consider this worth deeper investigation? I’d really value your thoughts on how EA might approach situations like this where the need is urgent but the evidence base doesn’t yet meet our typical standards. Thanks again for engaging with this so thoughtfully!
The epistemic challenge: This raises a real question about EA’s framework. Should we only fund interventions we can measure with near-certainty? Or should we have some capacity for high-uncertainty, high-potential-impact interventions during acute emergencies?
I think most people would say that the analysis should be close to risk-neutral. However, global-health donors seem more risk-averse in practice.
That being said, I would submit that we probably should apply a penalty to estimates based on early-stage research and cost-effectiveness analysis, not because of risk tolerance per se but because experience teaches that estimated effectiveness often falls as analytical rigor goes up. To analogize to a different domain, lots of drugs look great in early trials but fall apart in late-stage trials. So I think the necessary showing is probably this: is there a substantial probability that the cost-effectiveness significantly exceeds the counterfactual use of the money (which I will assume to be GiveWell All Grants)?
GiveWell has made malnutrition grants, such as this one. The estimated cost-effectiveness was somewhat below its usual bar (8x rather than 10x; it would have been 10x absent a funging adjustment). This appears to be a program for extremely malnourished young children, as evidenced by a cost of $215 per child. I’m not qualified to say where the sweet spot for combating malnutrition lies (e.g., whether a program for somewhat less malnourished young children might be more cost-effective because it could use less specialized foods, or whether the extra costs of feeding a larger population predominate). On the other hand, if our starting point is that programs for extreme malnutrition in young children are close to the bar, then programs for mild-to-moderate malnutrition and adult malnutrition probably wouldn’t clear it. All of this is very shallow and outside my domain expertise, but it’s my initial stab at how we might start to bridge the evidentiary deficit here.
We also have pre-existing work (e.g., by GiveDirectly) on the effects of simply giving cash to people in poverty, and emerging work suggesting that giving the cash at the right time (e.g., shortly before childbirth) has a multiplier effect. Although selecting a multiplier would be dicey, I would be willing to accept that one applies here (and that providing food and basic medicine in these circumstances is close enough to providing cash to lean on the cash data). You’d need a large multiple to get to 10x, though.
In any event, I think trying to adapt existing cost-effectiveness estimates to project results for a different context is a reasonable first step here. The projections are going to be error-prone, but I think they could inform whether to invest in more specific work.
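To make that concrete, here’s a minimal back-of-envelope sketch of the kind of adaptation I have in mind. The 8x estimate and 10x bar come from the grant discussed above; the context multiplier and rigor penalty are placeholder numbers I invented purely to show the structure, not estimates of anything.

```python
# Back-of-envelope sketch: adapting an existing cost-effectiveness estimate
# to a new context. Only the 8x baseline and 10x bar come from the discussion
# above; the two adjustment factors are illustrative assumptions.

givewell_bar = 10.0        # rough funding bar, in multiples of direct cash transfers
baseline_estimate = 8.0    # the young-child malnutrition grant discussed above (~8x)

# Hypothetical adjustments (assumptions, not estimates):
context_multiplier = 1.5   # famine conditions may raise the value of food/medical aid
rigor_penalty = 0.6        # discount for an early-stage, shallow analysis

adjusted = baseline_estimate * context_multiplier * rigor_penalty
print(f"Adjusted estimate: {adjusted:.1f}x cash (bar: {givewell_bar:.0f}x)")
print("Clears the bar" if adjusted >= givewell_bar else "Falls short of the bar")
```

In practice you’d want distributions over those two adjustment factors rather than point values, since the real question is whether there’s a substantial probability of significantly exceeding the bar, but even the point-estimate version makes clear how much the answer turns on them.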
There’s another awkward issue here. It’s more likely that ERRs run some programs that are more effective than the marginal GiveWell dollar than it is that the marginal dollar given to an ERR outperforms the marginal dollar given to GiveWell All Grants. While recognizing the diversity of ERRs, could we give (e.g.) $1MM dedicated to young-child malnutrition work and predict that ~$1MM more of that work will get done? Or will money get shifted around such that we end up with (e.g.) $250K more of that work, $250K more of communal kitchens, $250K toward paying those who currently volunteer, and $250K of something else? If the latter, we would need to base the cost-effectiveness estimate on the true marginal effect of the donation.
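To illustrate the marginal-effect point, here’s a toy version of that calculation. The $1MM grant and $250K split mirror the hypothetical above; the per-program cost-effectiveness figures are invented solely to show why the spending-weighted average, not the earmarked program’s figure, is what the estimate should rest on.

```python
# Toy illustration of funging: the cost-effectiveness that matters is the
# spending-weighted average of where the marginal dollars actually land.
# All per-program cost-effectiveness figures (in multiples of cash) are invented.

grant = 1_000_000

# Hypothetical split of how an "earmarked" $1MM actually shifts spending:
allocation = {
    "young-child malnutrition": (250_000, 12.0),  # (dollars, assumed x-cash)
    "communal kitchens":        (250_000,  6.0),
    "volunteer stipends":       (250_000,  4.0),
    "other programs":           (250_000,  5.0),
}

earmarked_ce = allocation["young-child malnutrition"][1]
true_marginal_ce = sum(d * ce for d, ce in allocation.values()) / grant

print(f"Earmarked program alone: {earmarked_ce:.1f}x")
print(f"True marginal effect of the grant: {true_marginal_ce:.1f}x")
```

If the earmarked line really did absorb the full grant, the two numbers would coincide; the gap between them is the adjustment the estimate needs.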
But there’s a chicken-and-egg problem: we can’t get that evidence without some initial funding, but we can’t get funding without that evidence.
I fear it’s even worse than that. The classic EA global-health model assumes a fairly stable situation: malaria is ~malaria, and usually the world hasn’t changed enough in 5-10 years (and doesn’t differ enough from country to country within a region) to render reliance on prior work dicey. By the time you were able to get high-quality results on ERRs, would the situation have changed enough to undermine reliance on that data? How much confidence could we justifiably have that results on ERRs obtained during one crisis would hold for a different crisis in a different country?
In the end, you have to do the best you can with the information you have. But if evidence will become stale quickly and is very context-dependent, that would make me somewhat less excited about spending a lot of resources to gather it.