For preference utilitarianism, there aren’t any fundamentally immoral “speciesist preferences”. Preferences just are what they are, and existing humans clearly have a strong and overwhelming-majority preference for humanity to continue to exist in the future. Do we have to weigh these preferences against the preferences of potential future AIs to exist, on pain of speciesism? No, because those AIs do not now exist, and non-existing entities do not have any preferences, nor will they have any if we don’t create them. So not creating them isn’t bad for them. Something could only be bad for them if they existed. This is called the procreation asymmetry. There are strong arguments for the procreation asymmetry being correct; see e.g. here.
The case is similar to that of a couple deciding whether to have a baby or get a robot. The couple strongly prefers having the baby. Now, neither not creating the baby nor not creating the robot is bad for the baby or the robot, since neither would suffer from its non-existence. However, there is still a reason to create the baby specifically: the parents want to have one. Not having a baby wouldn’t be bad for the non-existent baby, but it would be bad for the parents. So the extinction of humanity is bad because we don’t want humanity to go extinct.
Preferences just are what they are, and existing humans clearly have a strong and overwhelming-majority preference for humanity to continue to exist in the future. [...] So the extinction of humanity is bad because we don’t want humanity to go extinct.
This argument appears very similar to the one I addressed in the essay about how delaying or accelerating AI will impact the well-being of currently existing humans. My claim is not that it isn’t bad if humanity goes extinct; I am certainly not saying that it would be good if everyone died. Rather, my claim is that, if your reason for caring about human extinction arises from a concern for the preferences of the existing generation of humans, then you should likely push for accelerating AI so long as the probability of human extinction from AI is fairly low.
I’ll quote the full argument below:
Of course, one can still think—as I do—that human extinction would be a terrible outcome for the people who are alive when it occurs. Even if the AIs that replace us are just as morally valuable as we are from an impartial moral perspective, it would still be a moral disaster for all currently existing humans to die. However, if we accept this perspective, then we must also acknowledge that, from the standpoint of people living today, there appear to be compelling reasons to accelerate AI development rather than delay it for safety reasons.
The reasoning is straightforward: if AI becomes advanced enough to pose an existential threat to humanity, then it would almost certainly also be powerful enough to enable massive technological progress—potentially revolutionizing medicine, biotechnology, and other fields in ways that could drastically improve and extend human lives. For example, advanced AI could help develop cures for aging, eliminate extreme suffering, and significantly enhance human health through medical and biological interventions. These advancements could allow many people who are alive today to live much longer, healthier, and more fulfilling lives.
As economist Chad Jones has pointed out, delaying AI development means that the current generation of humans risks missing out on these transformative benefits. If AI is delayed for years or decades, a large fraction of people alive today—including those advocating for AI safety—would not live long enough to experience these life-extending technologies. This leads to a strong argument for accelerating AI, at least from the perspective of present-day individuals, unless one is either unusually risk-averse or has very high confidence (such as above 50%) that AI will lead to human extinction.
To be clear, if someone genuinely believes there is a high probability that AI will wipe out humanity, then I agree that delaying AI would seem rational, since the high risk of personal death would outweigh the small possibility of a dramatically improved life. But for those who see AI extinction risk as relatively low (such as below 15%), accelerating AI development appears to be the more pragmatic personal choice.
Thus, while human extinction would undoubtedly be a disastrous event, the idea that even a small risk of extinction from AI justifies delaying its development—even if that delay results in large numbers of currently existing humans dying from preventable causes—is not supported by straightforward utilitarian reasoning. The key question here is what extinction actually entails. If human extinction means the total disappearance of all complex life and the permanent loss of all future value, then mitigating even a small risk of such an event might seem overwhelmingly important. However, if the outcome of human extinction is simply that AIs replace humans—while still continuing civilization and potentially generating vast amounts of moral value—then the reasoning behind delaying AI development changes fundamentally.
In this case, the clearest and most direct tradeoff is not about preventing “astronomical waste” in the classic sense (i.e., preserving the potential for future civilizations) but rather about whether the risk of AI takeover is acceptable to the current generation of humans. In other words, is it justifiable to impose costs on presently living people—including delaying potentially life-saving medical advancements—just to reduce a relatively small probability that humanity might be forcibly replaced by AI? This question is distinct from the broader existential risk arguments that typically focus on preserving all future potential value, and it suggests that delaying AI is not obviously justified by utilitarian logic alone.
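To make the threshold reasoning in the quoted argument concrete, here is a minimal expected-value sketch from the standpoint of a single currently living person. Every number in it (the extinction probability, the chance of dying of ordinary causes during a delay, and the relative values of a normal versus a radically extended life) is a hypothetical placeholder chosen purely for illustration, not an estimate taken from the essay.
```python
# Minimal sketch of the expected-value comparison gestured at above,
# from the standpoint of one currently living person.
# All numbers are hypothetical placeholders, not estimates from the essay.

def ev_accelerate(p_extinction, v_extended, v_death):
    """Accelerate AI: die with probability p_extinction,
    otherwise gain access to life-extending technology."""
    return p_extinction * v_death + (1 - p_extinction) * v_extended

def ev_delay(p_die_waiting, v_normal, v_extended):
    """Delay AI: with probability p_die_waiting you die of ordinary causes
    before the technology arrives; otherwise you still benefit later."""
    return p_die_waiting * v_normal + (1 - p_die_waiting) * v_extended

# Placeholder utilities (arbitrary units):
# 0 = dying in an AI catastrophe, 1 = a normal remaining lifespan,
# 10 = a radically extended, healthy life.
V_DEATH, V_NORMAL, V_EXTENDED = 0.0, 1.0, 10.0

for p_doom in (0.05, 0.15, 0.50):
    accel = ev_accelerate(p_doom, V_EXTENDED, V_DEATH)
    delay = ev_delay(0.5, V_NORMAL, V_EXTENDED)
    print(f"p(extinction)={p_doom:.2f}  accelerate={accel:.1f}  delay={delay:.1f}")
```
With these placeholder numbers, acceleration comes out ahead at low extinction probabilities and has fallen behind by the time the probability reaches about one half, which is the shape of the claim being made; the actual crossover point depends entirely on the utilities and the length of delay one assumes.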
This argument appears very similar to the one I addressed in the essay about how delaying or accelerating AI will impact the well-being of currently existing humans. My claim is not that it isn’t bad if humanity goes extinct; I am certainly not saying that it would be good if everyone died.
I’m not supposing you are. Of course most people have a strong preference not to die. But there is also, beyond that, a widespread preference for humanity not to go extinct. This is why it would be so depressing if, for example, a global virus made all humans infertile (as in the movie Children of Men). Ending humanity is very different from, and much worse than, people merely dying at the end of their lives, which by itself doesn’t imply extinction. Many people would likely even sacrifice their own life in order to save the future of humanity. We don’t have a similar preference for having AI descendants. That’s not speciesist; it’s just what our preferences are.
We can assess the strength of people’s preferences for future generations by analyzing their economic behavior. The key idea is that if people genuinely cared deeply about future generations, they would prioritize saving a huge portion of their income for the benefit of those future individuals rather than spending it on themselves in the present. This would indicate a strong intertemporal preference for improving the lives of future people over the well-being of currently existing individuals.
For instance, if people truly valued humanity as a whole far more than their own personal well-being, we would expect parents to allocate the vast majority of their income to their descendants (or humanity collectively) rather than using it for their own immediate needs and desires. However, empirical studies generally do not support the claim that people place far greater importance on the long-term preservation of humanity than on the well-being of currently existing individuals. In reality, most people tend to prioritize themselves and their children, while allocating only a relatively small portion of their income to charitable causes or savings intended to benefit future generations beyond their immediate children. If people were intrinsically and strongly committed to the abstract concept of humanity itself, rather than primarily concerned with the welfare of present individuals (including their immediate family and friends), we would expect to see much higher levels of long-term financial sacrifice for future generations than we actually observe.
To be clear, I’m not claiming that people don’t value their descendants, or the concept of humanity at all. Rather, my point is that this preference does not appear to be strong enough to override the considerations outlined in my previous argument. While I agree that people do have an independent preference for preserving humanity—beyond just their personal desire to avoid death—this preference is typically not way stronger than their own desire for self-preservation. As a result, my previous conclusion still holds: from the perspective of present-day individuals, accelerating AI development can still be easily justified if one does not believe in a high probability of human extinction from AI.
The economic behavior analysis falls short. People usually do not expect to have a significant impact on the survival of humanity. If in past centuries people had saved a large part of their income for “future generations” (including for us), this would likely have had almost no impact on the survival of humanity, and probably not even a significant impact on our present quality of life. The expected utility of saving money for future generations is simply too low compared to spending the money on themselves in the present. This just means that people (reasonably) expect to have little influence on the survival of humanity, not that they are relatively okay with humanity going extinct. If people could somehow directly influence, via voting perhaps, whether to trade a few extra years of life against a significant increase in the likelihood of humanity going extinct, I think the outcome would be predictable.
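As a purely illustrative calculation (with made-up numbers, not data): even someone who places an enormous value on humanity’s survival gains almost nothing, in expectation, from private saving, because the probability that one household’s savings change whether humanity survives is minuscule.
```python
# Purely illustrative, with made-up numbers: why saving "for humanity"
# has tiny expected value even for someone who values humanity's
# survival enormously.

value_if_humanity_survives = 1e9     # huge personal value placed on survival (arbitrary units)
delta_p_from_my_savings = 1e-12      # assumed change in survival probability from one household's savings
value_of_spending_on_self = 1.0      # value of spending the same money on oneself now

ev_saving_for_humanity = delta_p_from_my_savings * value_if_humanity_survives  # 0.001
ev_spending_on_self = value_of_spending_on_self                                # 1.0

print(f"EV of saving for humanity's survival: {ev_saving_for_humanity}")
print(f"EV of spending on oneself:            {ev_spending_on_self}")
```
Spending on oneself wins by a wide margin here even though the value placed on humanity’s survival is enormous, which is exactly the point: observed saving behavior reflects expected influence, not indifference to extinction.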
Though I’m indeed not specifically commenting here on what delaying AI could realistically achieve. My main point was only that the preferences for humanity not going extinct are significant and that they easily outweigh any preferences for future AI coming into existence, without relying on immoral speciesism.
I don’t think you can get from the procreation asymmetry to the claim that only current preferences, and not future ones, matter. Even if you think that people being brought into existence and having their preferences fulfilled is no better than their not coming into existence at all, you might still want to block the existence of unfulfilled future preferences. Indeed, it seems any sane view has to accept that harms to future people, if they do exist, are bad; otherwise it would be okay to bring about unlimited future suffering, so long as the people who will suffer don’t exist yet.
Not coming into existence would not be a future harm to the person who doesn’t come into existence, because in that case that person not only doesn’t exist but also never will exist. That’s different from a person who would suffer from something, because in that case that person would exist.
My point is that even if you believe in the asymmetry, you should still care whether humans or AIs being in charge leads to higher utility for those who do exist, even if you are indifferent between either of those outcomes and neither humans nor AIs existing in the future.
Yes, though I don’t think that contradicts anything I said originally.
It shows that merely being person-affecting doesn’t mean you can argue as follows: since current human preferences are the only ones that exist now, and they are against extinction, person-affecting utilitarians don’t have to compare what a human-ruled future would be like with what an AI-ruled future would be like when deciding whether AIs replacing humans would be net bad from a utilitarian perspective. But maybe I was wrong to read you as denying that.
No, here you seem to contradict the procreation asymmetry: when deciding whether we should create certain agents, we wouldn’t harm them by deciding against creating them, even if the AIs would be happier than the humans.
By creating certain agents in a scenario where it is (basically) guaranteed that there will be some agents or other, we determine the amount of unfulfilled preferences in the future. Sensible person-affecting views still prefer agent-creating decisions that lead to fewer frustrated future preferences over those that lead to more.
EDIT: Look at it this way: we are not choosing between futures with zero subjects of welfare and futures with some, a choice about which person-affecting views are indeed indifferent so long as the future with subjects has net-positive utility. Rather, we are choosing between two agent-filled futures: one with human agents and another with AIs. Sensible person-affecting views prefer the future with fewer unfulfilled preferences over the one with more, when both futures contain agents. So to make a person-affecting case against AIs replacing humans, you need to take into account whether AIs replacing humans leads to more or fewer frustrated preferences existing in the future, not just whether it frustrates the preferences of currently existing agents.
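A toy sketch of the structure of the ranking just described, with purely schematic placeholder counts (the names and numbers are arbitrary and carry no claim about which future actually has fewer frustrated preferences):
```python
# Toy sketch of the person-affecting comparison described above.
# The counts of frustrated future preferences are schematic placeholders;
# only the structure of the ranking matters here.

def preferred_future(agent_filled_futures):
    """Among futures that contain agents, prefer the one with the
    fewest frustrated preferences. Empty futures are not ranked here
    (per the procreation asymmetry)."""
    return min(agent_filled_futures, key=agent_filled_futures.get)

# Hypothetical counts of frustrated preferences in each agent-filled future.
futures = {"human-led future": 5, "AI-led future": 7}
print(preferred_future(futures))  # -> "human-led future" with these placeholders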
I disagree. If we have any choice at all over which future populations to create, we also have the option of not creating any descendants at all. That would be advisable if, e.g., we had reason to think that both humans and AIs would have net-bad lives in expectation.