If I drop the lower bound by 4 orders of magnitude, to “between 0.0000002 and 0.87 times”, I get a result of 709 DALYs per $1,000, which is basically unchanged. Do sufficiently low bounds basically do nothing here?
This parameter is set to a normal distribution (which, unfortunately, you can’t control), and a normal distribution doesn’t change much when you lower the lower bound: a normal distribution between 0.002 and 0.87 is about the same as a normal distribution between 0 and 0.87. (Incidentally, if the distribution were a lognormal distribution with the same range, then the median result would fall halfway between the bounds in terms of orders of magnitude, at their geometric mean. Cutting the lower bound would then have a significant effect. However, the effect would actually be to raise the effectiveness estimate, because it would raise the uncertainty about the precise order of magnitude: the increase in scale above the 90% confidence range represented by the distribution would more than make up for the lowering of the median.)
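To see the difference concretely, here is a minimal Python sketch (numpy/scipy are used purely for illustration; the fitting convention of placing the bounds at the 5th and 95th percentiles is an assumption about the model, not its actual code). It compares how the means of a normal and a lognormal fit to the same 90% interval respond to dropping the lower bound:

```python
import numpy as np
from scipy.stats import norm

def fit_ci(lo, hi, ci=0.90):
    """Mean and std dev of a normal whose central `ci` interval is [lo, hi]."""
    z = norm.ppf(0.5 + ci / 2)  # ~1.645 for a 90% interval
    return (lo + hi) / 2, (hi - lo) / (2 * z)

# Normal fit: the mean is just the midpoint of the bounds, so shrinking
# the lower bound toward zero barely moves it.
print(fit_ci(0.002, 0.87)[0])      # 0.436
print(fit_ci(0.0000002, 0.87)[0])  # ~0.435

# Lognormal fit: fit a normal in log space; the mean is exp(mu + sigma^2/2),
# which grows with the spread of the interval in orders of magnitude.
def lognormal_mean(lo, hi, ci=0.90):
    mu, sigma = fit_ci(np.log(lo), np.log(hi), ci)
    return np.exp(mu + sigma**2 / 2)

print(lognormal_mean(0.002, 0.87))      # ~0.23
print(lognormal_mean(0.0000002, 0.87))  # ~20: lower the bound, raise the mean
```

The normal’s mean sits at the midpoint of the bounds, which is why the headline DALYs-per-dollar figure barely moves; the lognormal’s mean grows with the variance in log space, so widening the interval downward pushes the mean up even as the median falls.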
Also, this default (if you set it to “constant”) says that a chicken has around half the welfare capacity of a human. Am I right in interpreting this as saying that if you see three chickens who are set to be imprisoned in a cage for a year, and also see a human who is set to be imprisoned in a similarly bad cage for a year, then you should preferentially free the former? Because if so, it might be worth mentioning that the intuitions of the average person are many, many orders of magnitude lower than these estimates, not just 1-2.
The welfare capacity is supposed to describe the range between the worst and best possible experiences of a species, and the numbers we provide are intended to be used as a tool for comparing harms and benefits across species. Still, it is hard to draw direct action-relevant comparisons of the sort that you describe, because there are many potential side effects that would need to be considered. You may want to prioritize humans in the same way that you prioritize your family over others, or citizens of the same country over others. The capacity values are not in tension with that. You may also prefer to help humans because of their capacity for art, friendship, etc.
To grasp the concept, I think a better example application would be: if you had to give either a human or three chickens a headache for an hour (which they would otherwise spend unproductively), which choice would introduce less harm into the world? Estimating the chickens’ range as half that of the human would suggest that it is less bad overall, from the perspective of total suffering, to give the headache to the human: three chicken headaches at half weight come to 3 × 0.5 = 1.5 human-equivalent headaches, versus 1 for the human.
The numbers are indeed unintuitive for many people but they were not selected by intuition. We have a fairly complex and thought-out methodology. However, we would love to see alternative principled ways of arriving at less animal-friendly estimates of welfare capacities (or moral weights).
> This parameter is set to a normal distribution (which, unfortunately, you can’t control), and a normal distribution doesn’t change much when you lower the lower bound: a normal distribution between 0.002 and 0.87 is about the same as a normal distribution between 0 and 0.87. (Incidentally, if the distribution were a lognormal distribution with the same range, then the median result would fall halfway between the bounds in terms of orders of magnitude, at their geometric mean. Cutting the lower bound would then have a significant effect. However, the effect would actually be to raise the effectiveness estimate, because it would raise the uncertainty about the precise order of magnitude: the increase in scale above the 90% confidence range represented by the distribution would more than make up for the lowering of the median.)
The upper end of the scale is already at “a chicken’s suffering is worth 87% of a human’s”. I’m assuming that very few people are claiming that a chicken’s suffering is worth more than a human’s. So wouldn’t the lognormal distribution be skewed to account for this, meaning that the switch would substantially change the results?
That would require building in further assumptions, like clipping the results at 100%. We would probably want to do that, but it struck me in thinking about this that this sort of issue is easy to miss when working in a model like this. It is a bit counterintuitive that lowering the lower bound of a lognormal distribution can increase the mean.
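The clipping point can be illustrated with the same assumed fitting convention as the sketch above: fit the lognormal to the widened interval, then cap every draw at 1.0 so that a chicken’s capacity can never exceed a human’s. (Again, this is a hypothetical illustration, not the model’s implementation.)

```python
import numpy as np
from scipy.stats import norm

# Fit a lognormal to the widened 90% interval [0.0000002, 0.87].
lo, hi = 0.0000002, 0.87
z = norm.ppf(0.95)
mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z)

# Unclipped mean, computed analytically: the rare draws far above 1 dominate.
print(np.exp(mu + sigma**2 / 2))  # ~20, far above "equal to a human"

# Mean after capping draws at 1.0, estimated by Monte Carlo:
rng = np.random.default_rng(0)
samples = rng.lognormal(mu, sigma, size=1_000_000)
print(np.minimum(samples, 1.0).mean())  # ~0.08: the clip tames the right tail
```

Without the clip, the handful of draws far above 1 dominates the mean; with it, the mean drops back below the original upper bound of 0.87.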
Thanks for clarifying! I think these numbers are the crux of the whole debate, so it’s worth digging into them.
> You may want to prioritize humans in the same way that you prioritize your family over others, or citizens of the same country over others. The capacity values are not in tension with that. You may also prefer to help humans because of their capacity for art, friendship, etc.
Am I understanding correctly that none of these factors are included in the global health and development effectiveness evaluation?
> To grasp the concept, I think a better example application would be: if you had to give either a human or three chickens a headache for an hour (which they would otherwise spend unproductively), which choice would introduce less harm into the world? Estimating the chickens’ range as half that of the human would suggest that it is less bad overall, from the perspective of total suffering, to give the headache to the human.
I’m not sure how this is different to my hypothetical, except in degree?
> Still, it is hard to draw direct action-relevant comparisons of the sort that you describe, because there are many potential side effects that would need to be considered.
But the thing we are actually debating here is “should we prevent African children from dying of malaria, or prevent a lot of chickens from being confined to painful cages?”, which is an action. If you are using a weight of ~0.44 to make that decision, then shouldn’t you similarly use it to make the “free 3 chickens or a human” decision?
> Am I understanding correctly that none of these factors are included in the global health and development effectiveness evaluation?
Correct!
A common response we see is that people reject the radical animal-friendly implications suggested by moral weights and infer that we must have gotten something wrong about animals’ capacity for suffering. While we acknowledge the limitations of our work, we generally think a more fruitful response for those who reject the implications is to look for other reasons to prefer helping humans beyond purely reducing suffering. (When you start imagining people in cages, you rope in all sorts of other values that we think might legitimately tip the scales in favor of helping the human.)