I agree with all of that. I was objecting to the implication that longtermists will necessarily reduce suffering. Also (although I’m unsure about this), I think that the EA longtermist community will increase expected suffering in the future, as it looks like they will aim to maximise the number of beings in the universe.
What I view as the Standard Model of Longtermism is something like the following:
At some point we will develop advanced AI capable of “running the show” for civilization at a high level.
The values in our AI will determine, to a large extent, the shape of our future cosmic civilization.
One possibility is that AI values will be alien. From a human perspective, this would mean either extinction or something equally bad.
To avoid that last possibility, we ought to figure out how to instill human-centered values in our machines.
This model doesn’t predict that longtermists will make the future much larger than it otherwise would be. It just predicts that they’ll make it look a bit different than it otherwise would.
Of course, there are other existential risks that longtermists care about. Avoiding those will make the future larger in expectation, but most longtermists seem to agree that non-AI x-risks are small compared to AI risk.
I’m just observing that longtermists tend to be total utilitarians, in which case they will want loads of beings in the future and will want to use AI to help fulfill this purpose.
Of course, maybe in the long reflection we will think more about population ethics and decide total utilitarianism isn’t right, or AI will decide this for us, in which case we may not work towards a huge future. But I happen to think total utilitarianism will win out, so I’m sceptical of this.