It may turn out that such intensely positive values are possible in principle and matter as much as intense pains, but that they don’t matter in practice for neartermists, because they’re too rare and too difficult to induce. Your theory could symmetrically prioritize both extremes in principle, but end up suffering-focused in practice. I think the case for upside focus could be stronger in longtermism, though.
If by “neartermism” you mean something like “how do we best help humans/animals/etc. who currently exist, using only technologies that currently exist, while completely ignoring the fact that AGI may be created within the next couple of decades” or “how do we make the next 1 year of experiences as good as we can while ignoring anything beyond that”, or something along those lines, then I agree. But I guess I wasn’t really thinking along those lines, since I either find that kind of neartermism pretty implausible or feel like it doesn’t really include all the relevant time periods I care about.
It’s also conceivable that, after refining our definitions of pleasure and suffering and their intensities, pleasurable states as intense as excruciating pains in particular turn out not to be possible in principle.
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think there are multiple ways to be a neartermist or longtermist, but “currently existing” and “next 1 year of experiences” exclude almost all of the effective animal advocacy we actually do, and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than quantities of intense suffering you (or the average EA) can prevent in the next ~30 years, possibly considering AGI? Maybe large numbers of artificially sentient beings made to experience intense pleasure, or new drugs and technologies for humans?
To me, the distinction between neartermism and longtermism is primarily about decision theory and priors. Longtermists tend to be willing to bet more to avoid a single specific existential catastrophe (usually extinction), even though the average longtermist is extremely unlikely to avert it. Neartermists rely on better evidence, but seem prone to ignoring what they can’t measure (the McNamara fallacy). It seems hard to have predictably large positive impacts past the average human lifespan other than through one-shots the average EA is very unlikely to be able to affect, or without predictably large positive effects in the nearer term, which could otherwise qualify the intervention as a good neartermist one.
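To make the contrast concrete, here is a toy expected-value comparison in Python with purely made-up numbers (every probability and payoff below is a hypothetical assumption for illustration, not an estimate of any actual intervention):

```python
# Toy illustration of the neartermist/longtermist contrast.
# Every number here is a made-up assumption for illustration only.

# Longtermist-style one-shot bet: a tiny chance that one's contribution
# averts a catastrophe, with an enormous payoff if it does.
p_avert = 1e-9            # hypothetical chance the average donor's efforts avert the catastrophe
value_if_averted = 1e16   # hypothetical value (e.g. in future life-years) of averting it

# Neartermist-style intervention: well-evidenced, modest, near-certain payoff.
p_success = 0.95          # hypothetical probability the intervention works as measured
value_if_success = 1e4    # hypothetical value delivered

ev_longtermist = p_avert * value_if_averted
ev_neartermist = p_success * value_if_success

print(f"Longtermist one-shot EV: {ev_longtermist:,.0f}")
print(f"Neartermist EV:          {ev_neartermist:,.0f}")
# With these made-up numbers the one-shot bet wins on expected value, but almost
# all of that value sits on an outcome the donor will almost certainly never
# observe, which is exactly where priors and decision theory do the work.
```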
I agree with you that that is definitely conceivable. But I think that, as Carl argued in his post (and elaborated on further in the comment thread with gwern), our default assumption should be that efficiency (and probably also intensity) of pleasure vs pain is symmetric.
I think assuming identical distributions for efficiency is a reasonable ignorance prior, setting aside direct intuitions and evidence in either direction, but we aren’t so ignorant that we can’t make any claims at all. The kinds of claims Shulman made are only meant to defeat specific kinds of arguments for negative skew over symmetry, like direct intuition, not to argue for positive skew. If, contra Shulman, direct intuition could still be useful in this case (and it does seem likely that it skews towards the negative being more efficient), then, without arguments for positive skew that don’t apply equally in favour of negative skew, we should indeed expect the negative to be more efficient.
Furthermore, given the arguments other than direct intuition I made above, and since, as far as I know, there are no arguments for pleasure being more efficient than pain that don’t apply equally in reverse, we have more reason to believe the efficiencies should skew negative.
Also, similar to gwern’s comment: if positive value on non-hedonistic views does depend on things like reliable perception of the outside world or interaction with other conscious beings (e.g. compared to the experience machine or just disembodied pleasure), but bads don’t (e.g. suffering won’t really be any less bad in an experience machine or if disembodied), then I’d expect negative value to be more efficient than positive value, possibly far more efficient, because perception and interaction require overhead and may slow down experiences.
However, similar efficiency for positive value could still be likely enough that the expected efficiencies end up close, and other considerations, like their frequency, dominate.
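As a rough numerical sketch of how these pieces might combine, here is a toy model in Python; the hypotheses, ratios, and credences are all hypothetical assumptions chosen only to illustrate the shape of the argument, not estimates I’d defend:

```python
# Toy sketch of the efficiency argument above, with purely hypothetical numbers.
# We consider a few hypotheses about the ratio of (negative value efficiency)
# to (positive value efficiency) and combine them with assumed credences.

ratios = {
    "symmetric": 1.0,   # pleasure and pain equally efficient (the ignorance prior)
    "neg_skew": 3.0,    # hypothetical: suffering ~3x as efficient, e.g. because
                        # positive value needs perception/interaction overhead
    "pos_skew": 1 / 3,  # hypothetical mirror case, kept for completeness
}

credences = {           # assumed credences, not estimates
    "symmetric": 0.5,
    "neg_skew": 0.4,    # direct intuition + the overhead argument
    "pos_skew": 0.1,    # no arguments found that favour this side specifically
}

expected_ratio = sum(credences[h] * ratios[h] for h in ratios)
print(f"Expected (negative/positive) efficiency ratio: {expected_ratio:.2f}")
# With these made-up inputs the expectation lands moderately above 1 (negative
# more efficient), but not so far above 1 that frequency and other
# considerations couldn't dominate, which is the point of the paragraph above.
```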