The cases where it matters are the ones where you don't know how much you'll spend, including, if you're starting or running a charity, how much your charity will spend. For example, depending on your outputs, impact and updated expectations for cost-effectiveness, you might stop taking donations and shut down.
If you wanted specifically to buy exactly 100 bednets, then committing to B would be worse, because you'd spend more than $100 in expectation, and each extra expected dollar could have bought another bednet from A. This would be more relevant for a charity that doesn't know how much it'll need to spend to reach a specific fixed binary goal, like a government policy change. But the ratios of expected values still seem right here.
I’m not sure there are any cases where E[costs]/E[effects] or E[effects]/E[costs] gives the wrong answer in practical applications for resource allocation, if costs is what you’ll spend on the intervention. E[effects] is what you’re trying to maximize, and you can get that by multiplying E[effects]/E[costs] and E[costs]. E[effects/costs] won’t in general give you E[effects] when you multiply by E[costs].
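A quick numeric sketch of that last point, using hypothetical numbers: suppose an intervention delivers a fixed effect (100 bednets) but its total cost is uncertain, either $50 or $150 with equal probability. Then E[effects/costs] overstates cost-effectiveness, and multiplying it by E[costs] does not recover E[effects], while E[effects]/E[costs] does:

```python
# Two equally likely scenarios: same effect (100 bednets), different total cost.
# Tuples are (probability, effects, costs).
scenarios = [(0.5, 100, 50), (0.5, 100, 150)]

E_effects = sum(p * e for p, e, c in scenarios)      # 100.0
E_costs   = sum(p * c for p, e, c in scenarios)      # 100.0
E_ratio   = sum(p * e / c for p, e, c in scenarios)  # E[effects/costs] = 4/3

print(E_effects / E_costs)  # 1.0 bednet per expected dollar
print(E_ratio)              # ~1.33, overstates cost-effectiveness
print(E_ratio * E_costs)    # ~133.3, does NOT recover E[effects] = 100
```

The gap comes from Jensen's inequality-style nonlinearity: the expectation of a ratio is not the ratio of expectations when costs vary.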
Having now read the post that Lorenzo recommended below, I’m coming round to the majority view that the key question is “how much good could we expect from a fixed unit of cost?”.
I think in this thread there are two ways of defining costs:
- Michael considers the cost as the total amount spent.
- Stan suggests a case where the cost is the amount needed to be spent per unit of intervention.
I think this is the major source of disagreement here, right?
This discussion resembles the observation that the cost-effectiveness ratio should mostly be used at the margin. That is, in the end we care about something like (total effect) − (total cost), and when we decide where to spend the next dollar we should compute the derivative with respect to that extra resource and choose the intervention that maximizes the increase in value.
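As a minimal sketch of that marginal rule, assuming two hypothetical interventions with diminishing returns (effect proportional to the square root of dollars spent), a greedy allocator spends each next dollar wherever the marginal effect is currently highest:

```python
import math

def marginal_effect(spent, scale):
    # Effect curve is scale * sqrt(dollars); this approximates its
    # derivative as the gain from spending one more dollar.
    return scale * (math.sqrt(spent + 1) - math.sqrt(spent))

budget = 100
spent = {"A": 0, "B": 0}
scales = {"A": 10.0, "B": 6.0}  # hypothetical effectiveness scales

for _ in range(budget):
    # Spend the next dollar where the marginal effect is highest.
    best = max(spent, key=lambda k: marginal_effect(spent[k], scales[k]))
    spent[best] += 1

print(spent)  # A gets most dollars, but B gets some once A's returns diminish
```

Note that the allocator doesn't put the whole budget into A despite A's higher average cost-effectiveness: once A's marginal returns fall below B's, the next dollar does more good at B, which is exactly the point about using ratios at the margin.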
If you set costs=$100 as constant in this case, then
E[effects/costs] = E[effects]/costs = E[effects]/E[costs],
and both are right.
Ah yes, I see that.