I upvoted OP because I think comparison to humans is a useful intuition pump, although I agree with most of your criticism here. One thing that surprised me was:
Obviously not? That means you never reduced suffering? What the heck was the point of all your longtermism?
Surprised to hear you say this. It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects that suffering to be swamped by increases in total welfare. Remember, one of the founding texts of longtermism says we should be maximising the probability that space colonisation will occur. Space colonisation will probably increase total suffering over the future simply because there will be so many more beings in total.
When OP says:
D. Does longtermism mean ignoring current suffering until the heat death of the universe?
My answer is “pretty much yes”. (Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation. Of course a (strong) longtermist can simply say “So what? I’m still maximising undiscounted utility over time” (see my comment here).
(Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation
But at the time of the heat death of the universe, the future is not vast in expectation? Am I missing something basic here?
(I’m ignoring weird stuff, which I assume the OP was also ignoring, like acausal trade / multiverse cooperation; infinitesimal probabilities of the universe suddenly turning infinite, or of it already being infinite such that there’s never a true full heat death and there’s always some pocket of low entropy somewhere; or the possibility that the universe’s initial state was selected such that at heat death you’ll transition to a new low-entropy state from which the universe starts again.)
It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects that suffering to be swamped by increases in total welfare.
Oh, yes, that’s plausible; just making a larger future will tend to increase the total amount of suffering (and the total amount of happiness), and this would be a bad trade in the eyes of a negative utilitarian.
In the context of the OP, I think that section was supposed to mean that longtermism would mean ignoring current utility until the heat death of the universe: the obvious axis of difference is long-term vs current, not happiness vs suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point, and accidentally said a technically false thing in response. Will edit.
No, you’re not missing anything that I can see. Certainly, the closer an impartial altruist is to heat death, the less forward-looking the altruist needs to be.
I have an issue with your statement that longtermists neglect suffering because they just maximize total (symmetric) welfare. I think this statement isn’t actually true, though I agree if you just mean that, pragmatically, most longtermists aren’t suffering-focused.
Hilary Greaves and William MacAskill loosely define strong longtermism as “the view that impact on the far future is the most important feature of our actions today.” Longtermism is therefore completely agnostic about whether you’re a suffering-focused altruist or a traditional welfarist in line with Jeremy Bentham. It’s entirely consistent to prefer to minimize suffering over the long-run future and be a longtermist. Or, put another way, there are no major axiological commitments involved in being a longtermist, other than the view that we should treat value in the far future similarly to the way we treat value in the near future.
Of course, in practice, longtermists are more likely to advocate a Benthamite utility function than a negative utilitarian one. But it’s still completely consistent to be a negative utilitarian and a longtermist, and in fact I consider myself one.
I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I’m unsure about this), I think the EA longtermist community will increase expected suffering in the future, as it looks like they will aim to maximise the number of beings in the universe.
What I view as the Standard Model of Longtermism is something like the following:
At some point we will develop advanced AI capable of “running the show” for civilization on a high level
The values in our AI will determine, to a large extent, the shape of our future cosmic civilization
One possibility is that AI values will be alien. From a human perspective, this will either cause extinction or something equally bad.
To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.
This model doesn’t predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they’ll make it look a bit different than it otherwise would.
Of course, there are other existential risks that longtermists care about. Avoiding those will have the effect of making the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small by comparison to AI risk.
I’m just making the observation that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future. They will want to use AI to help fulfill this purpose.
Of course, maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn’t right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I’m sceptical of this.