Thanks for all this, Nuno. The upshot of Jason's post on what's wrong with the "holistic" approach to moral weight assignments, my post about theories of welfare, and my post about the appropriate response to animal-friendly results is something like this: you should basically ignore your priors re: animals' welfare ranges as they're probably (a) not really about welfare ranges, (b) uncalibrated, and (c) objectionably biased.
You can see the posts above for material that's relevant to (b) and (c), but as evidence for (a), notice that your discussion of your prior isn't about the possible intensities of chickens' valenced experiences, but about how much you care about those experiences. I'm not criticizing you personally for this; it happens all the time. In EA, the moral weight of X relative to Y is often understood as an all-things-considered assessment of the relative importance of X relative to Y. I don't think people hear "relative importance" as "how valuable X is relative to Y conditional on a particular theory of value," which is still more than we offered, but is in the right ballpark. Instead, they hear it as something like "how valuable X is relative to Y," "the strength of my moral reasons to prioritize X in real-world situations relative to Y," and "the strength of my concern for X relative to Y" all rolled into one. But if that's what your prior's about, then it isn't particularly relevant to your prior about welfare-ranges-conditional-on-hedonism specifically.
Finally, note that if you do accept that your priors are vulnerable to these kinds of problems, then you either have to abandon or defend them. Otherwise, you don't have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.
I agree with a), and mention this somewhat prominently in the post, so that kind of sours my reaction to the rest of your comment, as it feels like you are responding to something I didn't say:
The second shortcut I am taking is to interpret Rethink Priorities's estimates as estimates of the relative value of humans and each species of animal; that is, to take their estimates as saying "a human is X times more valuable than a pig/chicken/shrimp/etc." But RP explicitly notes that they are not that; they are just estimates of the range that welfare can take, from the worst experience to the best experience. You'd still have to adjust according to what proportion of that range is experienced, e.g., according to how much suffering a chicken in a factory farm experiences as a proportion of its maximum suffering.
and then later:
Note that I am in fact abusing RP's estimates, because they are welfare ranges, not relative values. So it should pop out that they are wrong, because I didn't go to the trouble of interpreting them correctly.
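For concreteness, here is a minimal sketch of the adjustment the quoted passages describe; every distribution and number below is a placeholder for illustration, not anyone's actual estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder welfare range of a chicken, expressed as a fraction of a
# human's welfare range (human range normalized to 1).
chicken_welfare_range = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)

# Placeholder guess at the fraction of that range a chicken on a factory
# farm actually experiences as suffering.
fraction_experienced = rng.beta(2, 5, size=n)

# The quantity the quoted passages ask for: suffering relative to a human,
# i.e. the welfare range scaled by the proportion of it that is experienced.
relative_disvalue = chicken_welfare_range * fraction_experienced

print(np.quantile(relative_disvalue, [0.05, 0.5, 0.95]))
```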
In any case, thanks for the references re: b) and c)
Re: b), it would in fact surprise me if my prior were uncalibrated. I'd also say that I am fairly familiar with forecasting distributions. My sense is that if you wanted to argue that my estimates are uncalibrated, you could, but I'd expect it to be tricky.
Re: c), this applies if you take a moral realist stance. If you take a moral relativist stance, or if I am just trying to describe what I do value, you have surprisingly little surface to object to.
Otherwise, you don't have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.
Yes, that is part of the downside of the moral relativist position. On the other hand, if you take a moral realist position, my strong impression is that you still can't convince e.g., a white supremacist, or an egoist, that all lives are equal, so you still share that downside. I realize that this is a longer argument, though.
Anyways, I didn't want to leave your comment unanswered but I will choose to end this conversation here (though feel free to reply on your end).
I am actually a bit confused about why you bothered to answer. Like, no answer would have been fine; an answer saying that you hadn't read it but pointing to resources and pitfalls you'd expect me to fall into would have been welcome; but your answer is just weird to me.
May be worth also updating on https://forum.effectivealtruism.org/posts/WfeWN2X4k8w8nTeaS/theories-of-welfare-and-welfare-range-estimates. Basically, you can roughly decompose the comparison as (currently achievable) peak human flourishing to the worst (currently achievable) human suffering (torture), and then that to the worst (currently achievable) chicken suffering. You could also rewrite your prior to be over each ratio (as well as the overall ratio), and update the joint distribution.
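A minimal sketch of what that decomposition might look like, with placeholder priors and a made-up observation purely for illustration (none of the numbers are actual estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder priors over the two ratios described above.
# A: peak human flourishing relative to the worst human suffering (torture).
# B: worst human suffering relative to the worst chicken suffering.
ratio_flourishing_to_torture = rng.lognormal(np.log(1.0), 0.5, size=n)
ratio_torture_to_chicken = rng.lognormal(np.log(10.0), 1.5, size=n)

# The overall human-to-chicken ratio is the product of the two factors.
overall_ratio = ratio_flourishing_to_torture * ratio_torture_to_chicken

# Crude update of the joint distribution: weight each joint sample by the
# likelihood of a hypothetical noisy estimate of the overall (log) ratio.
observed_log_ratio, obs_sigma = np.log(3.0), 1.0
weights = np.exp(-0.5 * ((np.log(overall_ratio) - observed_log_ratio) / obs_sigma) ** 2)
weights /= weights.sum()

# Posterior summaries via weighted resampling of the joint samples.
idx = rng.choice(n, size=n, p=weights)
print(np.quantile(ratio_torture_to_chicken[idx], [0.05, 0.5, 0.95]))
print(np.quantile(overall_ratio[idx], [0.05, 0.5, 0.95]))
```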
Seems like a good idea, but also a fair bit of work, so I'd rather wait until RP releases their value ratios over actually existing humans and animals, and update on those. But if you want to do that, my code is open source.