I skimmed the piece on axiological asymmetries that you linked and am quite puzzled that you seem to start with the assumption of symmetry and look for evidence against it. I would expect asymmetry to be the more intuitive, and therefore the default, position. As the piece says:
> At just the first-order level, people tend to assume that (the worst) pain is worse than (the best) pleasure is pleasurable. The agonizing ends for non-human animals in factory farms and in the wild seem far worse than the best sort of life they could realize would be good. [...] it’s hard to find any organisms that risk the worst pains for the greatest pleasures and vice versa.
I would expect that a difference in magnitude between the best possible pleasure and the worst possible pain is the most obvious explanation, but the piece concludes that these judgments are “far more plausibly explained by various cognitive biases”.
As far as I can tell this would suggest that either:
1. Someone who has recently experienced or is currently experiencing intense suffering (and therefore has a better understanding of the stakes) would be more willing to take the kind of roulette gamble described in the piece. This seems unlikely.
2. People’s assessments of hedonic states are deeply unreliable even if they have recent experience of the states in question. I don’t like this much because it means we have to fall back on physiological evidence for human pleasure/suffering, which, as shown by the mayonnaise example, can’t give us the full picture.
On a slightly separate note, I played around with the BOTEC to check the claim that assuming symmetry doesn’t change the numbers much, and I wasn’t convinced. The extreme suffering-focused assumption (where perfect health is merely neutral) resulted in double the welfare gain of the symmetric assumption (when the increase in welfare as a percentage of the animals’ negative welfare range is held constant).
My main question on this last point is: why use “percentage of the animals’ negative welfare range” when “percentage of the animals’ total welfare range” seems more relevant and would not vary at all across different (a)symmetry assumptions?
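To spell out the arithmetic I have in mind, here’s a toy sketch. The normalization of the total welfare range to 1, the 60% figure, and the `welfare_gain` helper are all illustrative assumptions of mine, not values or code from the BOTEC itself:

```python
# Toy sketch of how the (a)symmetry assumption interacts with the two ways of
# sizing an intervention's effect. The total welfare range is normalized to 1;
# all numbers here are illustrative, not taken from the BOTEC.

def welfare_gain(neg_share, improvement, basis):
    """Absolute welfare gain from an intervention.

    neg_share:   fraction of the total welfare range lying below neutral
                 (0.5 = symmetric, 1.0 = extreme suffering-focused assumption).
    improvement: the intervention's effect, as a fraction of `basis`.
    basis:       "negative" or "total" welfare range.
    """
    total_range = 1.0
    negative_range = neg_share * total_range
    reference = negative_range if basis == "negative" else total_range
    return improvement * reference

# Holding "% of the negative welfare range" constant, the suffering-focused
# assumption yields double the gain of the symmetric assumption:
print(welfare_gain(0.5, 0.6, "negative"))  # symmetric: 0.3
print(welfare_gain(1.0, 0.6, "negative"))  # suffering-focused: 0.6

# Holding "% of the total welfare range" constant, the gain is identical
# under either assumption:
print(welfare_gain(0.5, 0.6, "total"))  # 0.6
print(welfare_gain(1.0, 0.6, "total"))  # 0.6
```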
Thanks for reading that, Stan! Good question. I realize now that my report and the post together are a bit confusing because there are two types of symmetry at issue that seem to get blended together. I could have been clearer about this in the report. Sorry about that!
First, the post mentions the concept of welfare ranges being *symmetrical around the neutral point*. Assuming this means assuming that the best realizable welfare state is exactly as good as the worst realizable welfare state is bad. That is assumed for simplicity, though the subsequent part of the post is meant to show that this assumption matters less than one might think.
Second, in my linked report, I focus on the concept of *axiological symmetries*, which concerns whether every fundamental good-making feature of a life has a corresponding fundamental bad-making feature. If we assume this and, for instance, believe that knowledge is a fundamental good-making feature, then we’d have to think that there is a corresponding fundamental bad-making feature (unjustified false belief, perhaps).
These concepts are closely related, as the existence of axiological asymmetries may provide reason to think that welfare is not symmetrical around the neutral point and vice versa. Nevertheless, and this is the crucial point, it could work out that there is complete axiological symmetry, yet welfare ranges are still not symmetrical around the neutral point. This could be because some beings are constituted in such a way that, at any moment in time, they can realize a greater quantity of fundamental bad-making features than fundamental good-making features (or vice versa).
Axiological asymmetries seem prima facie ad hoc. Without some argument for specific axiological asymmetries and without working out their axiological implications, I do think axiological symmetry should be the default assumption. There’s some nice discussion of this kind of issue in the Teresa Bruno-Niño paper cited in the report (https://www.pdcnet.org/msp/content/msp_2022_0999_11_25_29). In fact, it seems to me that both (what she calls) continuity and unity are theoretical virtues.
Now, even granting what I just wrote about axiological symmetry, perhaps the default assumption should be that welfare is not symmetrical around the neutral point, for the reasons you gave. That seems totally reasonable! I personally don’t have strong views on this. That said, I do think there is a good evolutionary debunking argument for why animals (including humans) might be more motivated to avoid pain than to accrue pleasure, and why humans might be disposed to be risk-averse in the roulette wheel example. I’m genuinely not sure how much these considerations suggest that the default is that welfare is not symmetrical around the neutral point.
Whether welfare is symmetrical around the neutral point is largely an empirical question, though. I wouldn’t be surprised if we discovered that welfare is not symmetrical around the neutral point; that’s a very realistic possibility. By contrast, though it remains a viable possibility, I would be somewhat surprised if we discovered any axiological asymmetries.
Thanks for your questions, Stan. Travis wrote the piece on axiological asymmetries and he can best respond on that front. FWIW, I’ll just say that I’m not convinced that there’s a difference of an order of magnitude between the best pleasure and the worst pain—or any difference at all—insofar as we’re focused on intensity per se. I’m inclined to think it’s just really hard to say and so I take symmetry as the default position. For all that, I’m open to the possibility that pleasures and pains of the same intensity have different impacts on welfare, perhaps because some sort of desire satisfaction theory of welfare is true, we’re risk-averse creatures, and we more strongly dislike signs of low fitness than the alternative. Point is: there may be other ways of accommodating your intuition than giving up the symmetry assumption.
To your main question: we distinguish the negative and positive portions of the welfare range because we want to sharply distinguish cases where the intervention flips the life from net negative to net positive. Imagine a case where an animal has a symmetrical welfare range and an intervention moves the animal up either 60% of their negative welfare range or 60% of their total welfare range. In the former case, they’re still net negative; in the latter case, they’re now net positive. If you’re a totalist, that really matters: the “logic of the larder” argument doesn’t go through even post-intervention in the former case, whereas it does go through in the latter.
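To make that concrete, here’s a minimal numerical sketch. The normalization of the welfare range to [-1, 1] and the assumption that the animal starts at the very bottom of its range are illustrative choices of mine, not details from the case above:

```python
# Symmetric welfare range normalized to [-1, 1]; the animal is assumed to start
# at the worst point in its range (both choices are purely illustrative).
worst, best = -1.0, 1.0
negative_range = 0.0 - worst  # 1.0
total_range = best - worst    # 2.0
start = worst

# Intervention sized as 60% of the negative welfare range:
print(start + 0.6 * negative_range)  # -0.4: still net negative

# Intervention sized as 60% of the total welfare range:
print(start + 0.6 * total_range)     # 0.2: now net positive
```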