It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they’re too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
It’s also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions of pleasure and suffering and their intensities. Pleasure and suffering seem not to be functionally symmetric. Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and that subjective urgency translates into moral urgency and importance when we weight individuals’ subjective wellbeing. Would similarly intense pleasure make us desperate to continue/experience it? It’s plausible to me that such desperation would actually just be bad or unpleasant, and so such a pleasurable state would be worse than other pleasurable ones. Or, at least, such desperation doesn’t seem to me to be inherently and positively tied to its intensity. Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems more tied to motivation than pleasure is (https://link.springer.com/article/10.1007/s13164-013-0171-2).
It may end up being that such intensely positive values are possible in principle and matter as much as intense pains, but they don’t matter in practice for neartermists, because they’re too rare and difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus in longtermism could be stronger, though.
If by “neartermism” you mean something like “how do we best help humans/animals/etc. who currently exist using only technologies that currently exist, while completely ignoring the fact that AGI may be created within the next couple of decades” or “how do we make the next 1 year of experiences as good as we can while ignoring anything beyond that” or something along those lines, then I agree. But I guess I wasn’t really thinking along those lines, since I either find that kind of neartermism pretty implausible or feel like it doesn’t really include all the relevant time periods I care about.
It’s also conceivable that pleasurable states as intense as excruciating pains in particular are not possible in principle after refining our definitions of pleasure and suffering and their intensities.
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think there are multiple ways to be a neartermist or longtermist, but “currently existing” and “next 1 year of experiences” would exclude almost all of the effective animal advocacy we actually do, and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than the quantities of intense suffering you (or the average EA) can prevent over the next ~30 years, possibly taking AGI into account? Maybe through large numbers of artificially sentient beings made to experience intense pleasure, or new drugs and technologies for humans?
To me, the distinction between neartermism and longtermism is primarily based on decision theory and priors. Longtermists tend to be willing to bet more to avoid a single specific existential catastrophe (usually extinction), even if the average longtermist is extremely unlikely to avert the catastrophe. Neartermists rely on better evidence, but seem prone to ignoring what they can’t measure (the McNamara fallacy). It seems hard to have predictably large positive impacts past the average human lifespan other than through one-shots the average EA is very unlikely to be able to affect, or without predictably large positive effects in the nearer term, which would already qualify the intervention as a good neartermist one.
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think identical distributions for efficiency are a reasonable ignorance prior, ignoring direct intuitions and evidence one way or the other, but we aren’t so ignorant that we can’t make any claims at all. The kinds of claims Shulman made are only meant to defeat specific kinds of arguments for negative skew over symmetry, like direct intuition, not to argue for positive skew. Given the possibility that direct intuition in this case could still be useful (and it does skew towards negative being more efficient, which seems likely), contra Shulman, and without arguments for positive skew that don’t apply equally in favour of negative skew, we should indeed expect the negative to be more efficient.
Furthermore, based on the arguments other than direct intuition that I made above, and given that, as far as I know, there are no arguments for pleasure being more efficient than pain that don’t apply equally in reverse, we have more reason to believe efficiencies should skew negative.
Also, similar to gwern’s comment: if positive value on non-hedonistic views does depend on things like reliable perception of the outside world or interaction with other conscious beings (e.g. compared to the experience machine or just disembodied pleasure), but bads don’t (e.g. suffering won’t really be any less bad in an experience machine or if disembodied), then I’d expect negative value to be more efficient than positive value, possibly far more efficient, because perception and interaction require overhead and may slow down experiences.
However, similar efficiency for positive value could still be likely enough that the expected efficiencies remain close and other considerations, like the relative frequencies of the two extremes, dominate.
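To make the shape of that argument concrete, here’s a toy numerical sketch. All of the numbers (the 30% interaction overhead and the 50% chance that the overhead story applies at all) are made-up placeholders for illustration, not estimates:

```python
# Toy sketch of the efficiency argument above; every number is a made-up placeholder.

# Start from a symmetric ignorance prior: positive and negative value are
# assumed equally efficient per unit of resources.
symmetric_prior = {"positive": 1.0, "negative": 1.0}

# Hypothetically, suppose positive value (on non-hedonistic views) requires
# reliable perception of and interaction with the outside world, costing,
# say, 30% of resources as overhead, while negative value does not.
interaction_overhead = 0.3
positive_if_overhead_applies = symmetric_prior["positive"] * (1 - interaction_overhead)

# Weight by a guessed 50% chance that the overhead story is right at all.
p_overhead_applies = 0.5
expected_positive_efficiency = (
    p_overhead_applies * positive_if_overhead_applies
    + (1 - p_overhead_applies) * symmetric_prior["positive"]
)

print(expected_positive_efficiency)  # 0.85
print(symmetric_prior["negative"])   # 1.0

# The expectation skews negative, but the efficiencies stay fairly close
# (0.85 vs 1.0), so other considerations, like how often each extreme
# actually occurs, can still dominate the comparison.
```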
Are any of these arguments against symmetry fleshed out anywhere? I’d be interested if there’s anything that goes into these in more detail.
Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and that subjective urgency translates into moral urgency and importance when we weight individuals’ subjective wellbeing.
I’m not sure I buy that the urgency of extreme pain is a necessary component of its intensity. It makes more sense to me that the intensity drives the urgency rather than the other way around, though I’m not certain. You could probably define the intensity of pain by the strength of one’s preference to stop it, but that just seems like a very good proxy to me.
Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems more tied to motivation than pleasure is
I suspect these are due to implementation details in the brain that aren’t guaranteed to hold over longtermist timescales (if we leave open the possibility of advanced neurotechnology).
I’m sympathetic to functionalism, and the attention, urgency or priority given to something seems to me likely to be what defines its intensity, at least for pain, and possibly in general. I don’t know what other effects could ground intensity in a way that isn’t overly particular to specific physical/behavioural capacities or to non-brain physiological responses (heart rate, stress hormones, etc.). (I don’t think reinforcement strength is defining.)
There are some attempts at functional definitions of pain and pleasure intensities here, and they seem fairly symmetric:
https://welfarefootprint.org/technical-definitions/
and some more discussion here:
https://welfarefootprint.org/2024/03/12/positive-animal-welfare/
I’m afraid I don’t know anywhere else these arguments are fleshed out in more detail than what I shared in my first comment (https://link.springer.com/article/10.1007/s13164-013-0171-2).
I’ll add that our understanding of pleasure and suffering, and the moral value we assign to them, may be necessarily human-relative, so if those phenomena turn out to be functionally asymmetric in humans (e.g. one is defined by the necessity of a certain function with no sufficiently similar/symmetric counterpart in the other), then our concepts of pleasure and suffering will also be functionally asymmetric. I make some similar/related arguments in https://forum.effectivealtruism.org/posts/L4Cv8hvuun6vNL8rm/solution-to-the-two-envelopes-problem-for-moral-weights
I think any functionalist definition of the intensity of either would have to be asymmetric, at least insofar as intense pleasures (e.g. drug highs or the euphoria associated with temporal lobe epilepsy) are associated with extreme contentedness rather than desperation for them to continue. Similarly intense pains, on the other hand, do create a strong urgency for them to stop. This particular asymmetry seems present in the definitions you linked, so I’m a little sceptical of the claim that “super-pleasure” would necessitate an urgency for it to continue.
I’m not sure whether these kinds of functional asymmetries give much evidence one way or the other; it seems like the skew could be positive just as much as negative. I agree that our understanding might very well be human-relative; the cognitive disruptiveness of pain could be explained by the wider activation of networks across the brain compared to pleasure, for instance. I think a pleasure of the sort that activates a similar breadth of networks would feel qualitatively different, and that experiencing such a pleasure might change our views here.