Thanks for all this, Nuno. The upshot of Jason’s post on what’s wrong with the “holistic” approach to moral weight assignments, my post about theories of welfare, and my post about the appropriate response to animal-friendly results is something like this: you should basically ignore your priors re: animals’ welfare ranges, as they’re probably (a) not really about welfare ranges, (b) uncalibrated, and (c) objectionably biased.
You can see the posts above for material that’s relevant to (b) and (c), but as evidence for (a), notice that your discussion of your prior isn’t about the possible intensities of chickens’ valenced experiences, but about how much you care about those experiences. I’m not criticizing you personally for this; it happens all the time. In EA, the moral weight of X relative to Y is often understood as an all-things-considered assessment of the importance of X relative to Y. I don’t think people hear “relative importance” as “how valuable X is relative to Y conditional on a particular theory of value,” which is still more than we offered, but is in the right ballpark. Instead, they hear it as something like “how valuable X is relative to Y,” “the strength of my moral reasons to prioritize X over Y in real-world situations,” and “the strength of my concern for X relative to Y” all rolled into one. But if that’s what your prior is about, then it isn’t particularly relevant to a prior about welfare-ranges-conditional-on-hedonism specifically.
Finally, note that if you do accept that your priors are vulnerable to these kinds of problems, then you either have to abandon or defend them. Otherwise, you don’t have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.
I agree with (a), and I mention this fairly prominently in the post, which somewhat sours my reaction to the rest of your comment, as it feels like you are responding to something I didn’t say:
The second shortcut I am taking is to interpret Rethink Priorities’s estimates as estimates of the relative value of humans and each species of animal—that is, to take their estimates as saying “a human is X times more valuable than a pig/chicken/shrimp/etc”. But RP explicitly notes that they are not that, they are just estimates of the range that welfare can take, from the worst experience to the best experience. You’d still have to adjust according to what proportion of that range is experienced, e.g., according to how much suffering a chicken in a factory farm experiences as a proportion of its maximum suffering.
and then later:
Note that I am in fact abusing RP’s estimates, because they are welfare ranges, not relative values. So it should pop out that they are wrong, because I didn’t go to the trouble of interpreting them correctly.
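To make the distinction in the quoted passages concrete, here is a minimal sketch of the adjustment they describe. Every number in it, including the proportion-of-range figure, is a hypothetical placeholder rather than an actual estimate from RP or from the post.

```python
# Illustrative only: all numbers below are made-up placeholders, not RP's
# estimates or anyone's actual views.

# A welfare-range estimate says how wide a species' scale runs from its worst
# to its best possible experience, normalized so that humans = 1.0.
human_welfare_range = 1.0
chicken_welfare_range = 0.33  # hypothetical welfare-range-style figure

# On its own, this does NOT mean "a chicken is 0.33 times as valuable as a
# human". To get a relative-value figure for a concrete situation, you still
# need to estimate what fraction of that range is actually at stake, e.g. how
# much a factory-farmed chicken suffers relative to its worst possible
# experience.
proportion_of_range_at_stake = 0.5  # hypothetical

relative_disvalue = chicken_welfare_range * proportion_of_range_at_stake
print(relative_disvalue)  # 0.165, in human-welfare-range units
```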
In any case, thanks for the references re: (b) and (c).
Re: (b), it would in fact surprise me if my prior were uncalibrated; I’d also say that I am fairly familiar with forecasting distributions. My sense is that you could try to argue that my estimates are uncalibrated, but I’d expect it to be tricky.
Re: (c), this holds if you take a moral realist stance. If you take a moral relativist stance, or if I am just trying to describe what I do in fact value, you have surprisingly little surface to object to.
“Otherwise, you don’t have any response to the person who uses the same strategy to explain why they assign very low value to other humans, even in the face of evidence that these humans matter just as much as they do.”
Yes, that is part of the downside of the moral relativist position. On the other hand, if you take a moral realist position, my strong impression is that you still can’t convince, e.g., a white supremacist or an egoist that all lives are equal, so you share that downside as well. I realize this is a longer argument, though.
Anyways, I didn’t want to leave your comment unanswered, but I will choose to end this conversation here (though feel free to reply on your end).
I am actually a bit confused about why you bothered to answer. No answer would have been fine, and an answer saying that you hadn’t read the post but pointing to resources and pitfalls you’d expect me to fall into would have been welcome; as it stands, your answer just reads as weird to me.