Thanks for reading that, Stan! Good question. I realize now that my report and the post are a bit confusing when read together, because there are two types of symmetry at issue that get blended together. I could have been clearer about this in the report. Sorry about that!
First, the post mentions the concept of welfare ranges being *symmetrical around the neutral point*. To assume this is to assume that the best realizable welfare state is exactly as good as the worst realizable welfare state is bad. That assumption is made for simplicity, though the rest of the post is meant to show that it matters less than one might think.
Second, in my linked report, I focus on the concept of *axiological symmetries*, which concerns whether every fundamental good-making feature of a life has a corresponding fundamental bad-making feature. If we assume this and, for instance, believe that knowledge is a fundamental good-making feature, then we’d have to think that there is a corresponding fundamental bad-making feature (unjustified false belief, perhaps).
These concepts are closely related: the existence of axiological asymmetries may provide reason to think that welfare is not symmetrical around the neutral point, and vice versa. Nevertheless, and this is the crucial point, it could turn out that there is complete axiological symmetry while welfare ranges are still not symmetrical around the neutral point. This could be because some beings are constituted in such a way that, at any moment in time, they can realize a greater quantity of fundamental bad-making features than fundamental good-making features (or vice versa).
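To make that last point concrete, here is a toy sketch; the per-unit values and the capacity limits are invented purely for illustration and aren’t meant to reflect any actual view in the report:

```latex
% Toy sketch (all magnitudes invented for illustration): complete
% axiological symmetry together with an asymmetric welfare range.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Per-unit values: each unit of a good-making feature $G$ is worth $+1$;
% each unit of its counterpart bad-making feature $B$ is worth $-1$.
\[
  v(G) = +1, \qquad v(B) = -1 \quad \text{(complete axiological symmetry)}
\]

% Suppose a being can realize at most $3$ units of $G$ but $10$ units of
% $B$ at any one time. Its welfare range is then
\[
  \bigl[\, 10 \cdot v(B),\; 3 \cdot v(G) \,\bigr] = [-10,\, +3],
\]
% which is not symmetrical around the neutral point ($0$), since
% $\lvert -10 \rvert \neq \lvert +3 \rvert$.

\end{document}
```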
Axiological asymmetries seem prima facie ad hoc. Without some argument for specific axiological asymmetries and without working out their axiological implications, I do think axiological symmetry should be the default assumption. There’s some nice discussion of this kind of issue in the Teresa Bruno-Niño paper cited in the report. In fact, it seems to me that both (what she calls) continuity and unity are theoretical virtues.
https://www.pdcnet.org/msp/content/msp_2022_0999_11_25_29
Now, even granting what I just wrote about axiological symmetry, perhaps the default assumption should be that welfare is not symmetrical around the neutral point, for the reasons you gave. That seems totally reasonable! I personally don’t have strong views on this. That said, I do think there is a good evolutionary debunking argument for why animals (including humans) might be more motivated to avoid pain than to accrue pleasure, and why humans might be disposed to be risk-averse in the roulette wheel example. I’m genuinely not sure how much these considerations suggest that the default should be that welfare is not symmetrical around the neutral point.
Whether welfare is symmetrical around the neutral point is largely an empirical question, though. I wouldn’t be surprised if we discovered that welfare is not symmetrical around the neutral point; that’s a very realistic possibility. By contrast, although it remains a live possibility, I would be somewhat surprised if we discovered any axiological asymmetries.