I am worried that exposing oneself to extreme amounts of suffering, without also exposing oneself to extreme amounts of pleasure, happiness, tranquility, truth, etc., will predictably lead one to care a lot more about reducing suffering than about serving other common human values, which seems to have happened here. And the fact that certain experiences like pain are a lot easier to induce (at extreme intensities) than other experiences biases which values people end up caring most about.
Carl Shulman made a similar point in this post: “This is important to remember since our intuitions and experience may mislead us about the intensity of pain and pleasure which are possible. In humans, the pleasure of orgasm may be less than the pain of deadly injury, since death is a much larger loss of reproductive success than a single sex act is a gain. But there is nothing problematic about the idea of much more intense pleasures, such that their combination with great pains would be satisfying on balance.”
Personally speaking, as someone who has been depressed and anxious most of my life and has sometimes (unintentionally) experienced extreme amounts of suffering, I don’t currently find myself caring more about pleasure/happiness than about pain/suffering (I would say I care about them roughly the same). There’s also this thing I’ve noticed where sometimes, when I’m suffering a lot, the suffering starts to “feel good” and I don’t mind it as much; symmetrically, when I’ve been happy, the happiness has started to “feel fake” somehow. So overall I feel pretty confused about what terminal values I’m even optimizing for (but thankfully, on the current strategic landscape, it seems I don’t need to figure this out immediately).
It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they’re too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
It’s also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions of pleasure and suffering and their intensities. Pleasure and suffering seem not to be functionally symmetric. Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and its subjective urgency translates into moral urgency and importance when we weight individuals’ subjective wellbeing. Would similarly intense pleasure make us desperate to continue/experience it? It’s plausible to me that such desperation would actually just be bad or unpleasant, and so such a pleasurable state would be worse than other pleasurable ones. Or, at least, such desperation doesn’t seem to me to be inherently positively tied to its intensity. Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems to be more tied to motivation than pleasure seems to be (https://link.springer.com/article/10.1007/s13164-013-0171-2).
> It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they’re too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
If by “neartermism” you mean something like “how do we best help humans/animals/etc who currently exist using only technologies that currently exist, while completely ignoring the fact that AGI may be created within the next couple of decades” or “how do we make the next 1 year of experiences as good as we can while ignoring anything beyond that” or something along those lines, then I agree. But I guess I wasn’t really thinking along those lines, since I either find that kind of neartermism pretty implausible or feel like it doesn’t really include all the relevant time periods I care about.
> It’s also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions of pleasure and suffering and their intensities.
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think there are multiple ways to be a neartermist or longtermist, but “currently existing” and “next 1 year of experiences” exclude almost all of the effective animal advocacy we actually do, and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than quantities of intense suffering you (or the average EA) can prevent in the next ~30 years, possibly considering AGI? Maybe large numbers of artificially sentient beings made to experience intense pleasure, or new drugs and technologies for humans?
To me, the distinction between neartermism and longtermism is primarily based on decision theory and priors. Longtermists tend to be willing to bet more to avoid a single specific existential catastrophe (usually extinction), even if the average longtermist is extremely unlikely to avert the catastrophe. Neartermists rely on better evidence, but seem prone to ignoring what they can’t measure (the McNamara fallacy). It seems hard to have predictably large positive impacts past the average human lifespan other than through one-shots the average EA is very unlikely to be able to affect, or without predictably large positive effects in the nearer term, which could otherwise qualify the intervention as a good neartermist one.
> I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think identical distributions for efficiency are a reasonable ignorance prior, ignoring direct intuitions and evidence one way or the other, but we aren’t so ignorant that we can’t make any claims at all. The kinds of claims Shulman made are only meant to defeat specific kinds of arguments for negative skew over symmetry, like direct intuition, not to argue for positive skew. If direct intuition in this case is still useful, contra Shulman (and it does seem likely to skew towards the negative being more efficient), then without arguments for positive skew that don’t apply equally in favour of negative skew, we should indeed expect the negative to be more efficient.
Furthermore, based on the arguments other than direct intuition that I made above, and given that, as far as I know, there are no arguments for pleasure being more efficient than pain that don’t apply equally in reverse, we have more reason to believe efficiencies should skew negative.
Also similar to gwern’s comment, if positive value on non-hedonistic views does depend on things like reliable perception of the outside world or interaction with other conscious beings (e.g. compared to the experience machine or just disembodied pleasure) but bads don’t (e.g. suffering won’t really be any less bad in an experience machine or if disembodied), then I’d expect negative value to be more efficient than positive value, possibly far more efficient, because perception and interaction require overhead and may slow down experiences.
However, similar efficiency for positive value could still be likely enough that the expected efficiencies end up close, in which case other considerations, like the relative frequency of each, dominate.
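To make this concrete, here is a minimal toy calculation (my own illustration, not anyone’s position in this thread; all credences and ratios are made-up assumptions, and I’m setting aside two-envelopes-style worries about taking expectations of ratios). It just shows how modest extra credence on a negative skew, or the overhead point above on its own, pulls the expected pain:pleasure efficiency ratio above 1.

```python
# Toy sketch (illustrative assumptions only): expected pain:pleasure
# "efficiency" ratio (value per unit of resources) under uncertainty
# about which skew, if any, is right.

# (credence, pain:pleasure efficiency ratio) for each hypothesis
hypotheses = {
    "symmetric":     (0.5, 1.0),   # pain and pleasure equally efficient
    "negative_skew": (0.3, 10.0),  # pain 10x more efficient (toy number)
    "positive_skew": (0.2, 0.1),   # pleasure 10x more efficient
}

expected_ratio = sum(w * r for w, r in hypotheses.values())
print(f"expected pain:pleasure efficiency ratio = {expected_ratio:.2f}")
# 0.5*1.0 + 0.3*10.0 + 0.2*0.1 = 3.52: pain-leaning purely because the
# credence on negative skew (0.3) exceeds that on positive skew (0.2).

# The perception/interaction-overhead argument alone, in the same toy
# terms: if positive value must spend some fraction of its resources on
# perceiving and interacting with the outside world, while suffering
# needn't, the negative side is more efficient by 1 / (1 - overhead).
overhead = 0.5  # assumed fraction spent on perception/interaction
print(f"overhead-only efficiency ratio = {1 / (1 - overhead):.1f}")  # 2.0
```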
Are any of these arguments against symmetry fleshed out anywhere? I’d be interested if there’s anything that goes into these in more detail.
> Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and its subjective urgency translates into moral urgency and importance when we weight individuals’ subjective wellbeing.
I’m not sure I buy that the urgency of extreme pain is a necessary component of its intensity. It makes more sense to me that the intensity drives the urgency rather than the other way around, but I’m not sure. You could probably define the intensity of pain by the strength of one’s preference to stop it, but this just seems like a very good proxy to me.
> Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems to be more tied to motivation than pleasure seems to be.
I suspect these are due to implementation details in the brain that aren’t guaranteed to hold under longtermism (if we leave open the possibility of advanced neurotechnology).
I’m sympathetic to functionalism, and the attention, urgency or priority given to something seems likely to be what defines its intensity, at least for pain, and possibly in general. I don’t know what other effects would ground intensity in a way that’s not overly particular to specific physical/behavioural capacities or non-brain physiological responses (heart rate, stress hormones, etc.). (I don’t think reinforcement strength is defining.)
There are some attempts at functional definitions of pain and pleasure intensities here, and they seem fairly symmetric:
https://welfarefootprint.org/technical-definitions/
and some more discussion here:
https://welfarefootprint.org/2024/03/12/positive-animal-welfare/
I’m afraid I don’t know anywhere else these arguments are fleshed out in more detail than what I shared in my first comment (https://link.springer.com/article/10.1007/s13164-013-0171-2).
I’ll add that our understanding of pleasure and suffering and the moral value we assign to them may be necessarily human-relative, so if those phenomena turn out to be functionally asymmetric in humans (e.g. one defined by the necessity of a certain function with no sufficiently similar/symmetric counterpart in the other), then our concepts of pleasure and suffering will also be functionally asymmetric. I make some similar/related arguments in https://forum.effectivealtruism.org/posts/L4Cv8hvuun6vNL8rm/solution-to-the-two-envelopes-problem-for-moral-weights
I think any functionalist definition of the intensity of either would have to be asymmetric, at least insofar as intense pleasures (e.g. drug highs or the euphoria associated with temporal lobe epilepsy) are associated with extreme contentedness rather than desperation for them to continue. Similarly intense pains, on the other hand, do create a strong urgency for them to stop. This particular asymmetry seems present in the definitions you linked, so I’m a little sceptical of the claim that “super-pleasure” would necessitate an urgency for it to continue.
I’m not sure whether these kinds of functional asymmetries give much evidence one way or the other—it seems like it could skew positive just as much as negative. I agree that our understanding might very well be human-relative; I think that the cognitive disruptiveness of pain could be explained by the wider activation of networks across the brain compared to pleasure, for instance. I think a pleasure of the sort that activates a similar breadth of networks would feel qualitatively different, and that experiencing such a pleasure might change our views here.
I think this is a fair point, if you believe that pleasure can outweigh really awful suffering in practice. I do not currently believe this, for all practical purposes. Basically, my position is that these other human values—while somewhat valuable—are simply trivial in the face of the really awful suffering that is very common in our world.
Do you know of any ways I could experimentally expose myself to extreme amounts of pleasure, happiness, tranquility, and truth?
I’d be willing to expose myself to whatever you suggest, plus extreme suffering, to see if this changes my mind. Or we can work together to design a different experimental setup if you think that would produce better evidence.
> Do you know of any ways I could experimentally expose myself to extreme amounts of pleasure, happiness, tranquility, and truth?
I’m not aware of any way to expose yourself to extreme amounts of pleasure, happiness, tranquility, and truth that is cheap, legal, time-efficient, and safe. That’s part of the point I was trying to make in my original comment. If you’re willing to forgo some of those requirements, then, as Ian/Michael mentioned, for pleasure and tranquility I think certain psychedelics (possibly illegal depending on where you live, possibly unsafe, and, depending on your disposition/luck, possibly a terrible idea) and meditation practices (possibly expensive, time-consuming, possibly unsafe) could be places to look into. For truth, maybe something like “learning all the fields and talking to all the people out there” (expensive, time-consuming, and probably unsafe/distressing), though I realize that’s a pretty unhelpful suggestion.
> I’d be willing to expose myself to whatever you suggest, plus extreme suffering, to see if this changes my mind. Or we can work together to design a different experimental setup if you think that would produce better evidence.
I appreciate the offer, and think it’s brave/sincere/earnest of you (not trying to be snarky/dismissive/ironic here—I really wish more people had more of this trait that you seem to possess). My current thinking though is that humans need quite a benign environment in order to stay sane and be able to introspect well on their values (see discussion here, where I basically agree with Wei Dai), and that extreme experiences in general tend to make people “insane” in unpredictable ways. (See here for a similar concern I once voiced around psychedelics.) And even a bunch of seemingly non-extreme experiences (like reading the news, going on social media, or being exposed to various social environments like cults and Cultural Revolution-type dynamics) seem to have historically made a bunch of people insane and continue to make people insane. Basically, although flawed, I think we still have a bunch of humans around who are still basically sane or at least have some “grain of sanity” in them, and I think it’s incredibly important to preserve that sanity. So I would probably actively discourage people from undertaking such experiments in most cases.
Sure, makes sense. Thanks for your reply.
If I wanted to prove or support the claim:
“given the choice between preventing extreme suffering and giving people more [pleasure/happiness/tranquility/truth], we should pick the latter option”
How would you recommend I go about proving or supporting that claim? I’d be keen to read or experience the strongest possible evidence for that claim. I’ve read a fair bit about pleasure and happiness, but for the other, less-tangible values (tranquility and truth), I’m less familiar with any arguments.
It would be a major update for me if I found evidence strong enough to convince me that giving people more tranquility and truth (and pleasure and happiness in any practical setting, under which I include many forms of longtermism) could be good enough to forego preventing extreme suffering. This would have major implications for my current work and my future directions, so I would like to understand this view as well as I can in case I’m wrong and therefore missing out on something important.
You may want to have a look at Logarithmic Scales of Pleasure and Pain if you haven’t already.
I was just about to share this. I guess some of the psychedelics in their pleasure scale figure could be the easiest to use to experience intense pleasure, depending on your local laws and enforcement.
That may be true; but for anyone tempted to try it, just a reminder that
I’m happy to consider this further if there are people who would find value in the outcome (particularly if there are people who would change decisions based on the outcome). I think it would be tractable to design something safe and legal, whether through psychedelics or some other tool.
I also have (moderate) depression and anxiety but I guess I wouldn’t consider my experiences ‘intense/extreme suffering’ (although ‘extreme amounts of suffering’, as you’ve written, might make sense here).
The kind of suffering that’s experienced when, e.g., being eaten alive by predators seems to me to be qualitatively different from the depression-induced suffering I experience. I somehow also ‘got used to’ depression-suffering after a while (probably independently of the anti-depressant effects) and don’t mind it as much as I did, but that numbness and somewhat bearable intensity don’t seem to come with the ‘more physical’ causes of suffering.