I don’t think you can get from the procreation asymmetry to the claim that only current, and not future, preferences matter. Even if you think that people being brought into existence and having their preferences fulfilled is no more valuable than their not coming into existence at all, you might still want to block the existence of unfulfilled future preferences. Indeed, it seems any sane view has to accept that harms to future people, if they do exist, are bad; otherwise it would be okay to bring about unlimited future suffering, so long as the people who will suffer don’t exist yet.
Not coming into existence would not be a future harm to the person that doesn’t come into existence, because in that case that person not only doesn’t exist but also never will. That’s different from a person who would suffer from something, because in that case they would exist.
My point is that even if you believe in the asymmetry, you should still care whether humans or AIs being in charge leads to higher utility for those who do exist, even if you are indifferent between either of those outcomes and neither humans nor AIs existing in the future.
Yes, though I don’t think that contradicts anything I said originally.
It shows that merely being person-affecting doesn’t license the following argument: current human preferences are the only ones that exist now, and they are against extinction, so person-affecting utilitarians don’t have to compare what a human-ruled future would be like to what an AI-ruled future would be like when deciding whether AIs replacing humans would be net bad from a utilitarian perspective. But maybe I was wrong to read you as denying that.
No, here you seem to contradict the procreation asymmetry. When deciding whether we should create certain agents, we wouldn’t harm them by deciding against creating them, even if the AIs would be happier than the humans.
By creating certain agents in a scenario where it is (basically) guaranteed that some agents or other will exist, we determine the amount of unfulfilled preferences in the future. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences over ones that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with non-zero subjects, where person-affecting views are indeed indifferent so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
I disagree. If we have any choice at all over which future populations to create, we also have the option of not creating any descendants at all, which would be advisable if, for example, we had reason to think both humans and AIs would have net-bad lives in expectation.