Aside from my concern about extreme pain being rarer than ordinary pain, I would also find the conclusion that
“...the bulk of suffering is concentrated in a small percentage of experiences...”
very surprising. Standard computational neuroscience decision-making views such as RL models would say that if this is true, animals would have to spend most of their everyday effort trying to avoid extreme pain. But that seems wrong. E. g. we seek food to relieve mild hunger and get a nice taste and not because we once had a an extreme hunger experience that we learned from.
You could argue that learning from extreme pain doesn’t track the subjective intensity of the pain. But then people would be choosing, e.g., a subjectively 10x worse pain over a <10x longer pain. In that case I’d probably say that the subjective impression is misguided or ethically irrelevant, though that’s an ethical judgment.
Hm... I’m somewhat new to this “RL perspective on animal behavior,” but from what I understand about it, I see a few caveats:
Probably not all suffering is related to learning in the same way. Depression certainly comes with a subjective wish for betterment, but often lacks any motivation to seek it.
Probably the animal first needs to experience traumatic pain before it becomes preoccupied with it? This means that if extreme pain is rare, the claim in the OP could still be compatible with your observation that most animals aren’t preoccupied with avoiding it.
“You could argue that learning from extreme pain doesn’t track the subjective intensity of the pain. But then people would be choosing, e.g., a subjectively 10x worse pain over a <10x longer pain. In that case I’d probably say that the subjective impression is misguided or ethically irrelevant, though that’s an ethical judgment.”
I share your intuition for very clear-cut choice situations about two painful experiences. But you could imagine cases where a person chooses one thing (i.e., displays some revealed preference) but feels like there’s an important sense in which they’d rather be the sort of person who chooses the other thing. I’m not sure this example applies to pure pain-vs.-pain comparisons, but it’s a reason I’m not on board with normative evaluations that focus solely on decisions taken after having become acquainted with certain experiences. For example, if I’m presented with either staying in bed or leaving bed + being subjected to electric shocks + getting rewarded, I’m sure you can make the reward high enough that, after a few forced trials, I’ll start voluntarily choosing “leaving the bed” over “staying” every time. In this new situation, I’d now be waking up with intense longings for the reward, longings painful enough that I’d prefer electric shocks followed by satisfaction over continuation of those longings. Note that this is an altogether different thought experiment from the original situation where I was waking up without longings. As you indicate, it seems like a further question whether, in the newer version (after acquaintance with shock + reward), we want to look at this as choosing the thing we learned is better, or as choosing something other than what we would have chosen initially because we developed some type of addiction.
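To illustrate the “acquaintance changes the choice” point, here is a toy sketch of the bed/shock/reward scenario as a two-action value-learning problem. The reward numbers and learning rate are arbitrary assumptions of mine, chosen only to show the revealed preference flipping after a few forced trials; nothing here is meant as a model of the actual thought experiment beyond that.

```python
# Toy sketch: two actions, simple incremental value learning. After a few
# forced trials of "leave" (shock cost plus large reward), the learned value
# flips the agent's free choice -- whatever we conclude from that normatively.

ACTIONS = ["stay", "leave"]
REWARDS = {"stay": 0.0, "leave": -5.0 + 20.0}  # assumed shock cost -5, reward +20

q = {a: 0.0 for a in ACTIONS}  # learned action values, initialized at 0
alpha = 0.5                    # learning rate

def update(action):
    """Move the learned value of `action` toward its observed reward."""
    q[action] += alpha * (REWARDS[action] - q[action])

# Phase 1: a few forced trials of "leave" (the agent would not have chosen it).
for _ in range(5):
    update("leave")

# Phase 2: free choice -- the agent now greedily picks the higher-valued action.
choice = max(ACTIONS, key=lambda a: q[a])
print(q, "->", choice)  # "leave" now has a positive learned value and gets chosen
```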
More generally, the observation I’d like to add to this (and similar) discussions is that humans seem to have two very different “modes” for selecting actions. The mode where I’m lying in bed comfortably but alert and agenty enough to decide how to spend my morning is a different, more “system-2-heavy” mode from the one where I’m having hard-to-control longings (or aversions). In the more system-2-heavy mode, people may care about things that are very different from maximization of expected experienced reward. This can skew one’s revealed preferences about pain (or pleasure) tradeoffs in all kinds of ways, making it complicated (to say the least) to take this RL perspective (which I view as being primarily focused – at least in the sense that it’s “purer” there – on the more system-1-like mode) as the basis of normative evaluation.
As you say, such normative evaluation (assuming we are right about the descriptive features that make up the option space) comes down to subjective judgment calls, and I can see why you might have different intuitions from me.
(BTW, I also found the claims in the OP surprising, and I’m not sure yet whether I agree with them.)
Adding to what Lucas mentioned (how we are motivated in part by longing/addiction for strong rewards): suffering and negative reinforcement are correlated but are by no means the same thing. In the case of extreme suffering, there seems to be a point where the pain has already maxed out in terms of negative reinforcement capacity, and anything above it is just senseless suffering. Cluster headaches would not cause any less behavioral suppression if they were 10 or even 100 times less painful. Likewise, our ability to reason about pain and pleasure is limited by our state-dependent ability to imagine it. As I argued in the article, our ability to imagine any pain or pleasure beyond what we can extrapolate from the qualia accessible to us at the moment is very limited.
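As a rough numerical illustration of that saturation point, assuming a simple bounded mapping from subjective intensity to reinforcement (the function and all the numbers are my own choices for illustration, not anything established):

```python
# If the mapping from subjective pain intensity to negative reinforcement is
# bounded (here a hyperbolic saturation, chosen purely for illustration), then
# intensities far above the saturation point produce almost identical
# behavioral suppression, so reinforcement patterns can't distinguish them.

def reinforcement_signal(intensity, half_saturation=10.0):
    """Bounded in [0, 1): saturates as subjective intensity grows."""
    return intensity / (intensity + half_saturation)

# Hypothetical subjective intensities (arbitrary units).
for label, intensity in [("cluster headache / 100", 1_000.0),
                         ("cluster headache / 10", 10_000.0),
                         ("cluster headache", 100_000.0)]:
    print(f"{label:>22}: reinforcement ~ {reinforcement_signal(intensity):.4f}")
# All three come out close to 1.0: an observer of behavior alone would see
# roughly the same suppression, even though the subjective intensities differ
# by factors of 10 and 100.
```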
The bliss of 5-MeO-DMT or epileptic temporal lobe seizures is, as Dostoevsky said, “a happiness unthinkable in the normal state and unimaginable for anyone who hasn’t experienced it”. Likewise for extreme pain. So you wouldn’t be able to infer that these states exist (and are much more prevalent than one intuitively believes) merely from observing the patterns of reinforcement from a third-person point of view.