No, here you seem to contradict the procreation asymmetry. When deciding whether we should create certain agents, we wouldn’t harm them by deciding against creating them, even if the AIs would be happier than the humans.
By creating certain agents in a scenario where it is (basically) guaranteed that there will be some agents or other, we determine the amount of unfulfilled preferences in the future. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences over decisions that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with non-zero subjects, where person-affecting views are indeed indifferent so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
I disagree. If we have any choice at all over which future populations to create, we also have the option of not creating any descendants at all, which would be advisable, e.g., if we had reason to think both humans and AIs would have net-bad lives in expectation.