> We don’t actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there’s a point in suffering when I could imagine myself saying something like “This is the worst thing ever; get me out of here no matter what.”
Proponents or sympathizers of lexical NU (e.g. Tomasik) often make this claim, but I’m not at all persuaded. The hypothetical person you describe would beg for the suffering to stop even if continuing to experience it was necessary and sufficient to avoid an even more intense or longer episode of extreme suffering. So if this alleged datum of experience had the evidential force you attribute to it, it would actually undermine lexical NU.
> It’s also super hard to really understand what it’s like to be in edge-case extreme suffering situations without actually being in one, and most people haven’t been.
It’s even harder to understand what it’s like to experience comparably extreme happiness, since evolutionary pressures selected for brains capable of experiencing wider intensity ranges of suffering than of happiness. The kind of consideration you invoke here actually provides the basis for a debunking argument against the core intuition behind NU, as has been noted by Shulman and others. (Though admittedly many NUs appear not to be persuaded by this argument.)
> I’m a moral anti-realist. There’s no strict reason why we can’t have weird discontinuities in our utility functions if that’s what we actually have.
Humans have all sorts of weird and inconsistent attitudes. Regardless of whether you are a realist or an anti-realist, you need to reconcile this particular belief of yours with all the other beliefs you have, including the belief that an experience that is almost imperceptibly more intense than another experience can’t be infinitely (infinitely!) worse than it. Or, if you want a more vivid example, the belief that it would not be worth subjecting a quadrillion animals having perfectly happy lives to a lifetime of agony in factory farms solely to spare a single animal a mere second of slightly more intense agony just above the relevant critical threshold.
> The hypothetical person you describe would beg for the suffering to stop even if continuing to experience it was necessary and sufficient to avoid an even more intense or longer episode of extreme suffering.
Yeah, I agree with this. More explicitly, I agree that it’s bad that the person would refuse to continue suffering even when stopping causes them to experience worse suffering later, and that this makes lexical trade-offs within suffering weird. However:
- I said that “in terms of preferences, [suffering] bottoms out.” You’re changing my example by positing a hypothetical, even worse form of suffering when I’m not convinced one exists beyond that point.
- The above point only addresses more intense suffering, not longer suffering. Still, I think you’re wrong to bring up different durations of suffering: when I talk about lexicality, I’m talking about valuing different kinds of experiences in different ways. A longer episode of extreme suffering and a shorter episode at the same level of extreme suffering sit in the same lexical class and can be traded off against each other (see the sketch below).
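To make “same lexical class” concrete, here is a minimal formal sketch (my own notation, not anything either of us has committed to): represent an outcome as a pair $(s, w)$, where $s$ is the intensity-weighted duration of above-threshold suffering and $w$ is ordinary welfare, and compare outcomes lexicographically:

$$(s_1, w_1) \succ (s_2, w_2) \iff s_1 < s_2 \ \text{ or } \ (s_1 = s_2 \text{ and } w_1 > w_2)$$

Duration trades off freely within the first coordinate (a shorter episode just lowers $s$), while no amount of $w$ compensates for any increase in $s$.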
> It’s even harder to understand what it’s like to experience comparably extreme happiness, since evolutionary pressures selected for brains capable of experiencing wider intensity ranges of suffering than of happiness.
I agree with this and touched on it briefly in my writing. Even without the evolutionary argument, I’ll grant that imagining lexically worse forms of suffering implies lexically better forms of happiness just as much. After all, in the same way that suffering could bottom out at “this is the worst thing ever and I’d do anything to make it stop,” happiness could top out at “this is the most amazing thing ever and I’d do anything to make it continue.”
Then you have to deal with the confusing problem of reconciling trade-offs between those kinds of experiences. Frankly, I have no idea how to do that.
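To see why it’s confusing, note that under a naive infinite-value formalization (just an illustration; the same framing as the dominance point below), an outcome containing both a lexically good and a lexically bad experience gets an indeterminate value:

$$V = (+\infty) + (-\infty), \quad \text{which is undefined}$$

so simple addition gives no verdict, and you’d need some further rule for comparing across lexical classes.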
> Humans have all sorts of weird and inconsistent attitudes. Regardless of whether you are a realist or an anti-realist, you need to reconcile this particular belief of yours with all the other beliefs you have
I actually don’t need to do this for a couple reasons:
- I said that I thought negative lexical utilitarianism was plausible. I think there’s something to it, but I don’t have particularly strong opinions on it. The same is true of total utilitarianism (though, frankly, I currently lean slightly more toward total utilitarianism).
- The sorts of situations where lexical threshold utilitarianism differs from ordinary utilitarianism are extreme, and I think my time is more pragmatically spent trying to help the world than on making my brain ethically self-consistent.
- As a side note, negative lexical utilitarianism posits infinitely bad forms of suffering, so giving it even a small credence in your personal morality should imply that it dominates your personal morality. But, per the above bullet, this isn’t something I’m that interested in figuring out.
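To spell out the dominance point (a back-of-the-envelope sketch under naive expected-value reasoning across moral theories; $p$ is an arbitrary credence): if negative lexical utilitarianism assigns $-\infty$ to outcomes containing above-threshold suffering, then for any $p > 0$ and any finite value $V_{\text{rest}}$ from the rest of your moral views,

$$\mathbb{E}[V] = p \cdot (-\infty) + (1 - p) \cdot V_{\text{rest}} = -\infty$$

so any action with a nonzero chance of above-threshold suffering comes out infinitely bad, no matter how small $p$ is. (Whether naive expected value is the right way to handle moral uncertainty is its own debate.)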
> Or, if you want a more vivid example, the belief that it would not be worth subjecting a quadrillion animals having perfectly happy lives to a lifetime of agony in factory farms solely to spare a single animal a mere second of slightly more intense agony just above the relevant critical threshold.
I would not subject a quadrillion animals with perfectly happy lives to agony in factory farms just to spare a single animal a second of slightly more intense agony. However, this isn’t the model of negative lexical utilitarianism I find plausible. The one I find plausible implies that there is no continuous space of subjective experiences spanning from bad to good; at some point things just hop from finitely bad suffering, which can be reasoned about and traded off, to infinitely bad suffering, which can’t.
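One way to picture that hop (an illustrative sketch only; the critical intensity $\theta$ is hypothetical, and I make no claim about where it lies): a value function over suffering intensity $x$ that is finite and continuous below the threshold and jumps to $-\infty$ at it,

$$v(x) = \begin{cases} -f(x) & \text{if } x < \theta \\ -\infty & \text{if } x \ge \theta \end{cases}$$

where $f$ is finite, continuous, and increasing. Below $\theta$, ordinary trade-offs go through; at or above $\theta$, nothing finite compensates. Your quadrillion-animals case presupposes experiences sitting just barely on either side of $\theta$, which is exactly what this model denies.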
I guess you could argue that moralities are about how we should prefer subjective experiences, as opposed to being about the subjective experiences themselves (...and thus that the above is completely compatible with total utilitarianism). However, as I mentioned:
> We don’t actually have a great definition of what suffering is and, if we model it in terms of preferences, it bottoms out. AKA, there’s a point in suffering when I could imagine myself saying something like “This is the worst thing ever; get me out of here no matter what.”
so I’m uncertain about the truth behind distinguishing subjective experiences from preferences about them.
It is in the context of that uncertainty that I think negative lexical utilitarianism is plausible.