Point estimates are fine for multiplication, lossy for division
Hi Stan,
One way of getting around this is transforming all divisions into multiplications. For example, one can calculate E(X/Y) as E(X)*E(1/Y) (assuming independence), instead of using E(X)/E(Y). Computing E(1/Y) will require using Guesstimate or similar, but the resulting mean can then be used in a spreadsheet without having to run a full Monte Carlo simulation, which would take longer.
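To illustrate the gap this avoids, here is a minimal Monte Carlo sketch in Python (the lognormal distributions for X and Y are made-up placeholders, not numbers from any model above), comparing the naive E(X)/E(Y) with E(X)*E(1/Y) and with a direct simulation of E(X/Y):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical independent, positive inputs (placeholders for X and Y).
x = rng.lognormal(mean=0.0, sigma=0.5, size=n)
y = rng.lognormal(mean=0.0, sigma=0.5, size=n)

naive = x.mean() / y.mean()            # E(X)/E(Y): the lossy point estimate
factored = x.mean() * (1 / y).mean()   # E(X)*E(1/Y): keeps the mean of 1/Y
direct = (x / y).mean()                # Monte Carlo estimate of E(X/Y)

print(naive, factored, direct)  # factored ≈ direct; naive is biased low here
```

Under independence, E(X/Y) = E(X)*E(1/Y), so the second and third numbers agree up to sampling noise, while E(X)/E(Y) understates the ratio because E(1/Y) > 1/E(Y) for a positive, non-degenerate Y (Jensen's inequality).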
I am not sure, but I think a similar approach can be followed for most other nonlinear transformations. For example, one can use Guesstimate to obtain E(X^alpha) or E(log(X)), instead of using E(X)^alpha or log(E(X)).
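The same kind of gap shows up for these transformations; a short sketch under the same made-up lognormal assumption for X:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=1_000_000)  # hypothetical X
alpha = 0.5

print((x ** alpha).mean(), x.mean() ** alpha)  # E(X^alpha) vs E(X)^alpha
print(np.log(x).mean(), np.log(x.mean()))      # E(log(X)) vs log(E(X))
```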
Using intervals is still useful to get ranges for the outputs in a principled way, but I wonder whether the expected value alone is enough. I think expected utility is all that matters, so there is a sense in which the expected value captures all the relevant information.
I suppose I have been using interval estimates because I think they can indicate how much the expected value might change in response to new information, which is useful to know. However, I am not confident that uncertainty, which is what the outputted intervals directly show, is a good proxy for resilience.
I think I have come to believe that assessing resilience by doing a sensitivity analysis with point estimates derived from distributions is usually better than trying to evaluate it based on the uncertainty of the final result.
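For concreteness, here is a minimal sketch of the kind of sensitivity analysis I have in mind, using point estimates only (the model and all numbers are made up for illustration):

```python
# Toy point-estimate model: (E(direct benefit) + E(indirect benefit)) * E(1/cost),
# assuming the benefit terms are independent of the cost.
def benefits_per_dollar(e_direct_benefit, e_indirect_benefit, e_inv_cost):
    return (e_direct_benefit + e_indirect_benefit) * e_inv_cost

base = {"e_direct_benefit": 100.0, "e_indirect_benefit": 20.0, "e_inv_cost": 0.01}
baseline = benefits_per_dollar(**base)

# Halve and double each input in turn to see how much the headline result moves.
for name in base:
    for factor in (0.5, 2.0):
        value = benefits_per_dollar(**{**base, name: base[name] * factor})
        print(f"{name} x{factor}: {value / baseline:.2f} of baseline")
```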
Note that 1/Y is generally not well defined when Y’s range contains 0, and it’s messy when it approaches it, and when both X and Y contain both positive and negative parts. My preferred solution is to either look at Xs and Ys that are both positive, or to look at the joint pdf of X and Y, rather than the sum.
Hi Nuño,
Nice points!
Note that 1/Y is generally not well defined when Y’s range contains 0, and it’s messy when it approaches it

I agree. Just one note: I think a distribution for Y which encompasses 0 cannot be correct, because it would lead to infinities, which I am happy to reject. Can you give some examples in which Y (i.e., a distribution in the denominator) is defined such that it could not be zero, but you still found messiness?
when both X and Y contain both positive and negative parts

For this case, one can get point estimates from:
E(X) = P(X > 0)*E(X | X > 0) + P(X < 0)*E(X | X < 0).
E(1/Y) = P(Y > 0)*E(1/Y | Y > 0) + P(Y < 0)*E(1/Y | Y < 0).
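As a sanity check of these decompositions, here is a small sketch with arbitrary distributions (X allowed to be negative or positive, Y bounded away from 0):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Hypothetical inputs: X takes both signs; Y is kept away from 0.
x = rng.normal(loc=0.2, scale=1.0, size=n)
y = np.where(rng.random(n) < 0.5,
             rng.uniform(0.5, 2.0, size=n),     # positive branch of Y
             rng.uniform(-2.0, -0.5, size=n))   # negative branch of Y

def sign_split_mean(z):
    """P(Z > 0)*E(Z | Z > 0) + P(Z < 0)*E(Z | Z < 0)."""
    pos, neg = z[z > 0], z[z < 0]
    return len(pos) / len(z) * pos.mean() + len(neg) / len(z) * neg.mean()

print(x.mean(), sign_split_mean(x))            # E(X) directly vs decomposed
print((1 / y).mean(), sign_split_mean(1 / y))  # E(1/Y) directly vs decomposed
```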
This may not buy you enough. E.g., sometimes you may want to calculate the $/life saved, where life saved is a distribution which could be 0.
I think that in practice you (almost) always want to calculate lives/$, not $/life, and the cost is practically never zero.
Hi Lorenzo,
Yes, I prefer to calculate the cost-effectiveness in terms of benefits per unit cost. This way, the expected cost-effectiveness can be multiplied by the cost to obtain the expected benefits (assuming the cost is known, or independent of the cost-effectiveness). In contrast, the cost cannot be divided by the expected cost per unit benefit to obtain the expected benefits.
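To make the contrast concrete, a small sketch with a fixed hypothetical cost and a made-up benefits distribution:

```python
import numpy as np

rng = np.random.default_rng(3)
cost = 100.0  # hypothetical, known cost
benefits = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)  # hypothetical

e_benefits = benefits.mean()
via_benefits_per_cost = (benefits / cost).mean() * cost  # E(B/C)*C recovers E(B)
via_cost_per_benefit = cost / (cost / benefits).mean()   # C/E(C/B) does not

print(e_benefits, via_benefits_per_cost, via_cost_per_benefit)
```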
Another advantage of benefits per unit cost is that it always increases with the goodness of the intervention, whereas the cost per unit benefit has a more confusing relationship (when it can be both positive and negative).
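And a minimal numerical illustration of the monotonicity point, with an arbitrary fixed cost and a sweep of benefits from harmful to beneficial:

```python
cost = 100.0  # hypothetical fixed cost
for benefit in [-50.0, -1.0, 1.0, 50.0, 200.0]:
    print(f"benefit={benefit:7.1f}  "
          f"benefit/cost={benefit / cost:7.3f}  "
          f"cost/benefit={cost / benefit:9.2f}")
# benefit/cost increases monotonically with the benefit, while cost/benefit
# flips from large negative to large positive values as the benefit crosses 0.
```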
the cost is practically never zero

Yes, I do not think the cost can be zero. Even if the monetary cost is zero, there are always time costs.