Great post! A particular issue is that E[cost/effect] is infinite or undefined if you have a non-zero probability that the effect is 0. This is very commonly the case.
Another interesting point, highlighted by your log normal example, is that higher variance will tend to increase the difference between E[1/X] and 1/E[X].
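Here's a minimal sketch of that effect (the lognormal parameters are illustrative, not taken from the post). For lognormal X, E[1/X] = exp(-mu + sigma^2/2) while 1/E[X] = exp(-mu - sigma^2/2), so their ratio exp(sigma^2) grows quickly with variance:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.0
for sigma in [0.25, 0.5, 1.0, 2.0]:
    x = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
    # For lognormal X, E[1/X] / (1/E[X]) = exp(sigma^2), so the gap
    # between the two quantities widens as sigma (variance) grows.
    print(f"sigma={sigma}: E[1/X]={np.mean(1 / x):.3f}, 1/E[X]={1 / np.mean(x):.3f}")
```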
E[effect/cost] will also inflate the cost-effectiveness, giving too much weight to cases where you spend little and too little weight to your costs (and opportunity costs) when you spend a lot. If there’s a 90% chance that costs are astronomical and the impact is zero or net negative, the other 10% could still make the project “look” cost-effective. The whole project could have negative expected effects (negative E[effect]) but positive E[effect/cost]. That would not be a project worth supporting.
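A tiny worked example (the numbers are made up purely to illustrate the sign flip): with a 90% chance of a huge cost and a negative effect, and a 10% chance of a tiny cost and a positive effect, E[effect] is negative while E[effect/cost] is positive:

```python
# Two hypothetical scenarios with illustrative numbers:
# 90%: astronomical cost, net-negative effect; 10%: tiny cost, modest positive effect.
p = [0.9, 0.1]
cost = [1_000_000, 10]
effect = [-100, 50]

E_effect = sum(pi * e for pi, e in zip(p, effect))              # -85.0: negative
E_ratio = sum(pi * e / c for pi, e, c in zip(p, effect, cost))  # ~+0.50: positive!
E_cost = sum(pi * c for pi, c in zip(p, cost))
print(E_effect, E_ratio, E_effect / E_cost)  # E[effect]/E[cost] is negative, as it should be
```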
You should usually be estimating E[costs]/E[effects] or E[effects]/E[costs], not the expected values of ratios.
Have you figured out exactly when “E[costs]/E[effects] or E[effects]/E[costs]” is called for? I have historically agreed with the point you are making, but my beliefs have been shaken recently. Here’s an example that has made me think twice:
You are donating $100 to a malaria charity and can choose between charity A and B. Charity A gets bednets for $1 each. Charity B does not yet know the cost of its bednets, but they will cost either $0.50 or $1.50 with equal probability.
Donating to charity A has a value of 100 bednets. Donating to charity B has an expected value of about 133 bednets (an equal chance of buying 200 or about 67). But “E[costs]/E[effects] or E[effects]/E[costs]” is the same for each charity, since the expected cost per bednet is $1 for both. In this case, E[effect/cost] seems like the right metric.
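A quick check of the example's numbers (a sketch under the example's own assumptions, with the total donation fixed at $100):

```python
# Charity B's per-net price is $0.50 or $1.50 with equal probability; the donation is $100.
prices = [0.5, 1.5]
budget = 100

nets = [budget / p for p in prices]  # 200 or ~66.7 nets
E_nets = sum(nets) / len(nets)       # ~133.3 expected nets, vs. 100 from charity A
E_price = sum(prices) / len(prices)  # E[cost per net] = $1, the same as charity A
print(E_nets, E_nets / budget, E_price)
```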
So is the difference the fact that total costs are fixed? Someone deciding whether to start an organization or to commit to fully funding a new intervention would have to contend with variable, unknown total costs.
Is it because funding charity A involves buying 100 “shares” in an intervention, and funding charity B involves buying either 200 or 66 “shares”, which “E[costs]/E[effects] or E[effects]/E[costs]” fails to capture?
If you set costs = $100 as constant in this case, then E[effects/costs] = E[effects]/costs = E[effects]/E[costs], and both are right.
The cases where it matters are the ones where you don’t know how much you’ll spend, including how much your charity will spend if you’re starting or running one. For example, depending on your outputs, impact, and updated expectations for cost-effectiveness, you might stop taking donations and shut down.
If you wanted specifically to buy exactly 100 bednets, then committing to B would cost $50 or $150 with equal probability, i.e. exactly $100 in expectation, the same as A, so the two would be equally good in expectation. This would be more relevant from the perspective of a charity that doesn’t know how much it’ll need to spend to reach a specific fixed binary goal, like a government policy change. But the ratios of expected values still seem right here: 100 nets per $100 of expected spending for both, whereas E[effect/cost] would overstate B at about 1.33 nets per dollar.
I’m not sure there are any cases where E[costs]/E[effects] or E[effects]/E[costs] gives the wrong answer in practical applications for resource allocation, if “costs” means what you’ll actually spend on the intervention. E[effects] is what you’re trying to maximize, and you can recover it by multiplying E[effects]/E[costs] by E[costs]. E[effects/costs] won’t in general give you E[effects] when you multiply it by E[costs].
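A simulation may make the algebra vivid (a sketch with made-up cost and effect distributions, drawn independently here; the first identity holds for any joint distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical uncertain project: cost and effect drawn independently for illustration.
cost = rng.lognormal(mean=3.0, sigma=0.8, size=1_000_000)
effect = rng.normal(loc=50.0, scale=10.0, size=cost.size)

E_effect = effect.mean()
# (E[effects]/E[costs]) * E[costs] recovers E[effects] exactly, by algebra:
print(E_effect, (E_effect / cost.mean()) * cost.mean())
# E[effects/costs] * E[costs] overshoots here, since E[1/cost] > 1/E[cost] (Jensen):
print(np.mean(effect / cost) * cost.mean())
```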
Ah yes, I see that.
Having now read the post that Lorenzo recommended below, I’m coming round to the majority view that the key question is “how much good could we expect from a fixed unit of cost?”.
I think in this thread there are two ways of defining costs:
- Michael considers the cost to be the total amount spent.
- Stan suggests a case where the cost is the amount that needs to be spent per unit of intervention.
I think this is the major source of disagreement here, right?
This discussion resembles the observation that the cost-effectiveness ratio should mostly be used at the margin. That is, in the end we care about something like (total effect) − (total cost), and when we decide where to spend the next dollar we should compute the derivative of that value with respect to the extra resource and choose the intervention that maximizes it.
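For concreteness, here's a minimal sketch of that greedy marginal-allocation logic, with made-up diminishing-returns curves for two hypothetical interventions:

```python
import numpy as np

def effect_a(spend):  # hypothetical diminishing-returns curve for intervention A
    return 120 * np.log1p(spend)

def effect_b(spend):  # hypothetical diminishing-returns curve for intervention B
    return 200 * np.sqrt(spend)

budget, step = 1000, 1.0
spent = {"A": 0.0, "B": 0.0}
for _ in range(int(budget / step)):
    # Approximate the derivative: the extra effect from the next dollar.
    marginal = {
        "A": effect_a(spent["A"] + step) - effect_a(spent["A"]),
        "B": effect_b(spent["B"] + step) - effect_b(spent["B"]),
    }
    best = max(marginal, key=marginal.get)  # fund the higher marginal value
    spent[best] += step
print(spent)  # allocation that roughly equalizes marginal cost-effectiveness
```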
I thought about this a bit and have edited this post from last year. I’m curious if you find it useful!
There’s also lots of discussion in the comments on that post about why, according to most commenters, E[effect/cost] is better than E[effects]/E[costs] (which I originally argued for); I now agree with them.
Thank you. I’d been drafting a very similar post of my own!!
It’s probably still worth posting! E.g. it seems that @MichaelStJules and commenters on my post would disagree on defaulting to E[effect/cost] vs E[effects]/E[costs]