Take malaria prevention. It’s a top EA cause because it’s highly cost-effective: roughly $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources?
I think there are basically two ways of looking at this question.
One is the typical EA/‘consequentialist’ approach. Here you accept that some amount of the money will be wasted (fraud/corruption/incompetence), build this explicitly into your cost-effectiveness model, and then see what the bottom line is. If I recall correctly, GiveWell explicitly assumes something like 50% of insecticide-treated bednets are not used properly; their cost-effectiveness estimate would be double if they didn’t make this adjustment. $1.6m of mismanagement seems relatively small compared to the total size of anti-malaria programs, so presumably doesn’t move the needle much on the overall QALY/$ figure. This sort of approach is also common in areas like for-profit businesses (e.g. half of all advertising spending is wasted, we just don’t know which half...) and welfare states (e.g. tolerated disability benefit fraud in the UK). To literally answer your question, that $1.6m is presumably not the best use of resources, but we’re willing to tolerate that loss because the rest of the money is used for very good purposes so overall malaria aid is (plausibly) the best use of resources.
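To make the consequentialist bookkeeping concrete, here is a minimal sketch of how a waste adjustment feeds into a cost-per-life-saved figure. All numbers (cost per net, lives saved per net, the 50% usage rate, a 1% mismanagement haircut) are illustrative placeholders, not GiveWell’s actual model:

```python
def cost_per_life_saved(total_spend, cost_per_net, lives_saved_per_net,
                        usage_rate, mismanaged_fraction):
    """Cost per life saved after discounting for money that is
    mismanaged and for nets that are never used properly."""
    effective_spend = total_spend * (1 - mismanaged_fraction)
    nets_delivered = effective_spend / cost_per_net
    lives_saved = nets_delivered * usage_rate * lives_saved_per_net
    return total_spend / lives_saved

# With the 50% usage adjustment and a small mismanagement haircut:
adjusted = cost_per_life_saved(1_000_000, 5.0, 0.0025, 0.50, 0.01)
# Without the usage adjustment, the same spending looks exactly
# twice as cost-effective -- the "double" effect noted above:
naive = cost_per_life_saved(1_000_000, 5.0, 0.0025, 1.00, 0.01)
print(round(adjusted), round(naive))
```

The point is that waste shows up as a multiplier on the denominator, so a $1.6m loss against a program budget in the hundreds of millions barely moves the headline figure, whereas a 50% usage assumption moves it a lot.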
The alternative is a more deontological approach, where basically any fraud or malfeasance is grounds for a radical response. This is especially common in cases where adversarial selection is a big risk, where any tolerated bad actors will rapidly grow to take a large fraction of the total, or where people have particularly strong moral views about the misconduct. Examples include zero-tolerance policies for harassment in the workplace, DOGE hunting down ‘woke’ spending in USAID/NSF, or the Foreign Corrupt Practices Act. In cases like this people are willing to cull the entire flock just to stop a single infected bird—sometimes a drastic measure can be warranted to eliminate a hidden threat.
In the malaria example, if the cost is merely that $1.6m is set on fire, the first approach seems pretty appropriate. The second approach seems more applicable if you thought the $1.6m was having actively negative effects (e.g. supporting organised crime) or was liable to grow dramatically if not checked.