Thank you for the thoughtful feedback, Benjamin! I will try to explain the model a bit more thoroughly than the methods section of the post.

Let’s forget normalising and weights for a moment. If we measure suffering in hours/kcal and emissions in CO2eq/kcal, then the subscales have different units and can’t be added (unless we somehow have a formula for converting one unit into the other). A common solution in this case is to multiply the subscale values instead. If we do this, a 1% change in suffering changes the combined score by the same amount that a 1% change in emissions would.
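To make that symmetry concrete, here is a tiny sketch (the subscale values are made up for illustration, not taken from the tool):

```python
# Why multiplying subscales treats relative changes symmetrically.
# Both values below are hypothetical.

suffering = 2.5  # hours/kcal (hypothetical)
emissions = 0.8  # CO2eq/kcal (hypothetical)

combined = suffering * emissions

# A 1% increase in either subscale scales the product by the same factor, 1.01:
bump_suffering = (suffering * 1.01) * emissions
bump_emissions = suffering * (emissions * 1.01)

print(bump_suffering / combined)  # 1.01
print(bump_emissions / combined)  # 1.01
```

This is exactly what addition would not give us: adding the raw values would let the subscale with the larger numbers dominate, purely because of its units.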

We still might want to prioritise some subscales more than others. If we had added the subscale scores, we could have multiplied them by constant weights beforehand. Since we multiply the subscale scores instead, we exponentiate them by weights beforehand. This simple idea is called a weighted product model (WPM) in multiple-criteria decision analysis, the discipline that studies how to make decisions when we have multiple conflicting criteria.

This tool uses a weighted product model. The unnormalised suffering and emissions scores are:

1. exponentiated by their corresponding weight,

2. multiplied together to get a combined score,

3. normalised to the 0-100 range for cleaner display.
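A minimal sketch of these steps, assuming hypothetical food names, subscale values, and weights (the tool’s real data and its exact normalisation may differ; scaling by the maximum score is just one way to map onto 0-100):

```python
# Minimal sketch of the weighted product model described above.
# Food names, subscale values, and weights are all hypothetical.

foods = {
    "chicken": {"suffering": 4.0, "emissions": 0.9},  # hours/kcal, CO2eq/kcal
    "beef":    {"suffering": 0.5, "emissions": 3.0},
}
weights = {"suffering": 1.0, "emissions": 0.5}  # illustrative priorities

def combined_score(subscales):
    score = 1.0
    for name, value in subscales.items():
        score *= value ** weights[name]  # steps 1-2: exponentiate, then multiply
    return score

raw = {food: combined_score(subscales) for food, subscales in foods.items()}

# Step 3: map onto the 0-100 range for display (dividing by the maximum
# is one possible normalisation; the tool's choice may differ).
top = max(raw.values())
display = {food: 100 * score / top for food, score in raw.items()}
```

Here a higher combined score would mean more reason to avoid that species. Only the ordering of the options carries meaning, which is the ranking-only property I describe next.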

WPM is a dimensionless method for ranking options when making decisions. That is, it answers questions like “is it more important to avoid chicken or beef”, not “what is the cardinal utility of avoiding chicken”. This model is only useful for prioritising if I have decided to reduce meat consumption but am only able to leave one species off my plate. I understand now that I should have made this clearer.

Somehow measuring the utility of leaving a species off my plate would be much more interesting, but it seemed difficult given the time and skills I had. I did consider using something like DALYs. There is research on converting emissions to DALYs, which would let us use a parameter for converting non-human animal DALYs to human DALYs, but I opted for the simpler ranking-only model.

That makes sense. The point I’m trying to make, though, is that the choice of how to do the conversion from CO2/kcal to hours/kcal is probably the most important bit that drives the results. I’d prefer to make that clearer to users, and get them to make their own assessment.

Instead, the WPM ends up implying a conversion rate, which could be very different from what the person would say if asked. Given this, it seems like the results can’t be trusted.
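To make the implicit conversion rate concrete: holding a combined score of the form S^(w_s) · E^(w_e) constant and differentiating gives dS/dE = −(w_e/w_s)·(S/E), so the number of suffering-hours the model treats as equivalent to one unit of CO2eq depends on each option’s current values, not on any judgement the user was asked for. A sketch with made-up weights and values:

```python
# Sketch of the exchange rate a weighted product model implicitly encodes.
# Along a level curve of S**w_s * E**w_e, the trade-off is
# dS/dE = -(w_e / w_s) * (S / E), so the implied rate varies per option.
# All numbers here are hypothetical.

w_s, w_e = 1.0, 0.5  # hypothetical weights

def implied_rate(suffering, emissions):
    """Hours/kcal treated as locally equivalent to one CO2eq/kcal."""
    return (w_e / w_s) * (suffering / emissions)

print(implied_rate(4.0, 0.9))  # ~2.22 for one hypothetical food
print(implied_rate(0.5, 3.0))  # ~0.08 for another
```

Two options can thus be scored under wildly different implied exchange rates within the same ranking.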

(I expect a WPM would be fine in domains where there are multiple difficult-to-compare criteria and we’re not sure which criteria are most important – as in many daily decisions – but in this case, it could easily be that either CO2 or suffering should totally dominate your ranking, and it just depends on your worldview.)

You are right. I spent time thinking about your comments and I agree that making the tradeoff clearer is one of the most important improvements I can make. Thank you for pointing it out.
