FWIW, when I have a weighted factor model (WFM) to build, I think about how I can turn it into a BOTEC, or at least get it closer to one. I did this for my career comparison and a geographic weighted factor model.
And I think this usually means some factors, in their units, like scale (e.g. number of individuals, years of life, DALYs, amount of suffering) and probability of success (%), should be multiplied, and usually not weighted at all, except when you want to calculate a factor multiple ways and average them. Otherwise, you'll typically get weird units.
And what is the unit conversion between DALYs and a % chance of success, say? This doesn't make much sense, and probably neither will any weights in a weighted sum. Adding factors with different units together doesn't make much sense if you want to interpret the final results in a scope-sensitive way.
This all makes most sense if you only have one effect you're estimating, e.g. one direct effect and no indirect effects. Different effects should be added, so a more complete model could be a sum of multiplicative models, one for each effect.
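As a rough sketch of what this can look like (all factor names and numbers below are hypothetical, purely for illustration, not taken from any of the models mentioned above):

```python
# Hypothetical multiplicative BOTEC: one product of factors per effect,
# with the effects summed. Every number here is made up for illustration.

direct_effect = (
    1_000_000   # scale: individuals affected
    * 0.5       # DALYs averted per individual affected
    * 0.10      # probability the intervention succeeds
)  # expected DALYs averted via the direct effect

indirect_effect = (
    100_000     # individuals reached indirectly
    * 0.2       # DALYs averted per individual reached
    * 0.05      # probability this indirect pathway materialises
)  # expected DALYs averted via the indirect effect

total_dalys = direct_effect + indirect_effect  # units: DALYs averted
cost = 2_000_000                               # units: dollars

print(f"Expected DALYs averted: {total_dalys:,.0f}")
print(f"Cost per DALY averted: ${cost / total_dalys:,.2f}")

# Contrast with a weighted sum like 0.4 * DALYs + 0.6 * probability,
# whose result has no interpretable units and is not scope-sensitive.
```

The point is just that every intermediate and final quantity keeps an interpretable unit (DALYs, dollars per DALY), so the output can be read scope-sensitively.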
EDIT: But BOTECs and multiplicative models may also be more sensitive to their factors, and to errors in factor values, when ranking. So it may be best to do sensitivity analysis with a range of values for the factors, but that's more work.
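A minimal sketch of the kind of sensitivity analysis the EDIT mentions, assuming (hypothetically) that each factor is only known to within a range, and sampling from those ranges to see how widely the estimate varies:

```python
import math
import random

# Hypothetical (low, high) ranges for each factor of a multiplicative BOTEC.
factor_ranges = {
    "individuals_affected": (3e5, 3e6),
    "dalys_per_individual": (0.1, 1.0),
    "p_success": (0.03, 0.3),
}

def sample_estimate(rng: random.Random) -> float:
    """One draw of the BOTEC, sampling each factor log-uniformly from its
    range (a common choice for positive quantities spanning wide ranges)."""
    value = 1.0
    for low, high in factor_ranges.values():
        value *= math.exp(rng.uniform(math.log(low), math.log(high)))
    return value

rng = random.Random(0)
draws = sorted(sample_estimate(rng) for _ in range(10_000))
print(f"10th percentile: {draws[1_000]:,.0f} DALYs averted")
print(f"Median:          {draws[5_000]:,.0f} DALYs averted")
print(f"90th percentile: {draws[9_000]:,.0f} DALYs averted")

# A wide spread is a warning that the point estimate, and any ranking
# built on it, leans heavily on uncertain factor values.
```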
I sometimes do this, but I wonder if it defeats one of the key benefits of a WFM: that it accounts for multiple criteria and prevents any single consideration from dominating.
(With BOTECs, sometimes the final ranking/conclusion is very dependent on one or two very uncertain or arbitrary criteria.)
If a single consideration dominates, it might be for good reason. The relative insensitivity of WFMs can reflect poor scaling of the score with impact (see the numerical sketch after this comment).
(With BOTECs, sometimes the final ranking/conclusion is very dependent on one or two very uncertain or arbitrary criteria.)
I might be inclined to do sensitivity analysis on the parameters and try multiple different BOTECs/models in these cases, but that's also more work. At some point, it's not really a BOTEC anymore, because the model is too complicated to fit on the back of an envelope. And it may no longer be practical to use the same BOTEC/model structure across interventions that are too different.
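To put hypothetical numbers on the scaling point above: if two options differ in true scale by a factor of 100 but both land on nearby 1-5 scores, a weighted sum barely separates them, while a multiplicative estimate preserves the gap. Everything below is made up for illustration.

```python
# Made-up numbers illustrating how coarse 1-5 scores can compress large
# differences in scale that a multiplicative BOTEC would preserve.

options = {
    # name: (true scale in individuals affected, "scale" score out of 5,
    #        "tractability" score out of 5)
    "Option A": (10_000, 4, 3),
    "Option B": (1_000_000, 5, 3),
}

for name, (true_scale, scale_score, tract_score) in options.items():
    wfm_score = 0.5 * scale_score + 0.5 * tract_score  # unitless weighted sum
    expected_helped = true_scale * 0.1                 # assume 10% chance of success
    print(f"{name}: WFM score {wfm_score}, "
          f"expected individuals helped {expected_helped:,.0f}")

# The WFM scores (3.5 vs 4.0) differ by about 14%, while the expected
# impact differs by a factor of 100: the score scales poorly with impact.
```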
Yeah, I agree in principle that it 'might be for good reason', though I still have some sense that it seems desirable to reduce overdependence on your ratings for one or two criteria. Similar to the reasoning for sequence thinking vs. cluster thinking.
That makes sense to me, Michael. Relatedly, GiveWell bases its geographic prioritisation on cost-effectiveness analyses of the most promising countries.
I did the same, so I predictably agree-voted. I'm curious if disagree-voters can explain why.