(Strong) longtermists will always ignore current suffering and focus on the future, provided it is vast in expectation
But by the time of the heat death of the universe, the future is no longer vast in expectation, is it? Am I missing something basic here?
(I’m ignoring weird stuff which I assume the OP was also ignoring: acausal trade / multiverse cooperation; infinitesimal probabilities of the universe suddenly turning infinite, or of its already being infinite such that there’s never a true full heat death and there’s always some pocket of low entropy somewhere; or the view that the universe’s initial state was selected such that at heat death it transitions to a new low-entropy state from which the universe starts again.)
It is plausible that the EA longtermist community is increasing the expected amount of suffering in the future, but accepts this because it expects that suffering to be swamped by increases in total welfare.
Oh, yes, that’s plausible; simply making the future larger will tend to increase both the total amount of suffering and the total amount of happiness, and that would be a bad trade in the eyes of a negative utilitarian.
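To make the trade explicit (a rough formalization I’m adding, not something from the OP; $\Delta H$ and $\Delta S$ stand for the changes in total happiness and total suffering from making the future larger):

$$\Delta W_{\text{total}} = \Delta H - \Delta S \qquad \text{vs.} \qquad \Delta W_{\text{negative}} = -\Delta S$$

So a larger future with $\Delta H > \Delta S > 0$ counts as an improvement on the total view but as strictly worse on the (strong) negative-utilitarian view.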
In the context of the OP, I think that section was meant to say that longtermism implies ignoring current utility until the heat death of the universe; the obvious axis of difference is long-term vs. current, not happiness vs. suffering (for example, you can have longtermist negative utilitarians). I was responding to that interpretation of the point and accidentally said a technically false thing in response. Will edit.
No, you’re not missing anything that I can see. When the OP says:
I think they’re really asking:
Certainly, the closer an impartial altruist is to heat death, the less forward-looking that altruist needs to be.