So there are several largely independent reasons not to create AI agents that have moral or legal rights:
1. Most people today likely want the future to be controlled by our human descendants, not by artificial agents. According to preference utilitarianism, this means that creating AIs that are likely to take over in the future is bad. Note that this preference doesn’t need to be justified, as the mere existence of the preference suffices for its moral significance. This is similar to how, according to preference utilitarianism, death is bad merely because we do not want to die. No additional justification for the badness of death is required.
2. Currently it looks like we could have this type of agentic AI quite soon, say in 15 years. That’s so soon that we (currently existing humans) could in the future be deprived of wealth and power by an exploding number of AI agents if we grant them a nonnegligible amount of rights. This could be quite bad for future welfare, including both our future preferences and our future wellbeing. So we shouldn’t make such agents in the first place.
3. Creating AI agents and giving them rights could easily lead to an AI population explosion and, in the more or less distant future, a Malthusian catastrophe, potentially long after we are dead. That wouldn’t affect us directly, but it would likely mean that most future agents, human or not, would have to live under subsistence conditions that barely make their existence possible, and hence with very low welfare. So we should avoid creating agentic AIs that would lead to such a population explosion.
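To make the Malthusian dynamic concrete, here is a minimal toy model (a deliberate oversimplification, not a prediction): suppose the accessible resource stock is effectively fixed at $R$ while the agent population grows exponentially at rate $g$:

$$N(t) = N_0 e^{gt}, \qquad c(t) = \frac{R}{N(t)} = \frac{R}{N_0}\,e^{-gt}$$

Per-capita resources $c(t)$ then fall toward subsistence no matter how large $R$ is; multiplying the resource base by a factor $k$ only postpones the outcome by $\ln(k)/g$, which is a short time when population growth is fast.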
At least points 2 and 3 would also apply to emulated humans, not just AI agents.
Point 3 also applies to actual humans, not just AI agents or ems. It is a reason to coordinate limits on population growth in general. However, these limits should be stronger for AI agents than for humans, because of points 1 and 2.
Under a robust system of property rights, adding new entities becomes less economically advantageous when resources are scarce, since scarcity raises costs and weakens the incentive to grow populations indiscriminately.
I don’t think this is a viable alternative to enforcing limits on population growth. Creating new agents could well involve a “moral hazard” in the sense that most of the likely long-term resource cost of an agent (the resources it consumes or claims for itself) is not paid by the agent’s creator but by future society. So the creator can have a personal incentive to make new agents even when their net long-term effect on society is negative.
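To illustrate the incentive structure with invented numbers: suppose creating one agent gives its creator a private benefit $b = 10$ at a private cost $k = 1$, while the agent’s long-term resource claim $C = 100$ falls on future society. The creator acts whenever

$$b - k > 0$$

which holds here (a private gain of 9), even though the overall effect $b - k - C = -91$ is strongly negative. Decentralized creation decisions simply never internalize the term $C$.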
> Currently it looks like we could have this type of agentic AI quite soon, say in 15 years. That’s so soon that we (currently existing humans) could in the future be deprived of wealth and power by an exploding number of AI agents if we grant them a nonnegligible amount of rights. This could be quite bad for future welfare, including both our future preferences and our future wellbeing. So we shouldn’t make such agents in the first place.
It is essential to distinguish between absolute and relative wealth in this discussion, as one of my key arguments depends on this distinction. Specifically, if my claims about the practical effects of population growth are correct, then a massive increase in the AI population would likely significantly enrich the current inhabitants of the world, i.e. those individuals who existed prior to the population explosion, by raising their absolute standard of living. At the same time, their relative control over the world’s resources and influence would decrease as a result of the population growth.
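A toy arithmetic example of the distinction (the numbers are invented purely for illustration): suppose world output is 100 units, held equally by 100 humans, and an AI population explosion raises total output to 10,000 units, of which the original humans end up holding 300. Each human’s absolute wealth then triples, from 1 unit to 3, while their collective relative share falls from $\frac{100}{100} = 100\%$ to $\frac{300}{10{,}000} = 3\%$. Enrichment in absolute terms and a collapse in relative control are fully compatible.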
If you disagree with this conclusion, it seems there are two primary ways to challenge it:
1. You could argue that the factors I previously mentioned—such as innovation, economies of scale, and gains from trade—would not apply in the case of AI. For instance, this could be because AIs might rationally choose not to trade with humans, opting instead to harm humans by stealing from or even killing them. This could occur despite an initial legal framework designed to prevent such actions.
2. You could argue that population growth in general is harmful to the people who currently exist, on the grounds that it diminishes their wealth and overall well-being.
I am not certain, but I read your comment as suggesting that you find both objections potentially valid. In that case, let me address each of these points in turn.
If your objection is more like point (1):
It is difficult to reply to this fully within a single brief comment, so for now I will try to convince you of a weaker claim that I think may be sufficient to carry my point:
A major counterpoint to this objection is that, to the extent AIs are limited in their capabilities—much like humans—they could potentially be constrained by a well-designed legal system. Such a system could establish credible and enforceable threats of punishment for any agentic AI entities that violate the law. This would act as a deterrent, incentivizing agentic AIs to abide by the rules and cooperate peacefully.
Now, you might argue that not all AIs could be effectively constrained in this way. While that could be true (and I think it is worth discussing), I would hope we can find some common ground on the idea that at least some agentic AIs could be restrained through such mechanisms. If this is the case, then these AIs would have incentives to engage in mutually beneficial cooperation and trade with humans, even if they do not inherently share human values. This cooperative dynamic would create opportunities for mutual gains, enriching both humans and AIs.
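One way to make this deterrence claim precise (a standard expected-utility sketch, nothing specific to AI): suppose an agent gains $g$ from breaking a law, is caught with probability $p$, suffers a punishment cost $c$ if caught, and forfeits future cooperative surplus worth $V$. It complies whenever

$$g < p\,(c + V)$$

So a legal system constrains exactly those agents that are limited enough that they cannot drive $p$ toward zero or escape $c$, and the value of ongoing trade $V$ makes compliance easier to secure, not harder.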
If your objection is more like point (2):
If your objection is based on the idea that population growth inherently harms the people who already exist, I would argue that this perspective is at odds with the prevailing consensus in economics. In fact, it is widely regarded as a popular misconception that the world operates as a zero-sum system, where any gain for one group necessarily comes at the expense of another. Instead, standard economic models of growth and welfare generally predict that population growth is often beneficial to existing populations. It typically fosters innovation, expands markets, and creates opportunities for increased productivity, all of which frequently contribute to higher living standards for those who were already part of the population, especially those who own capital.
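To see the mechanism in the simplest standard setting (a deliberately stylized sketch, not a forecast), treat AI agents as additional labor $L$ in a Cobb-Douglas production function:

$$Y = K^{\alpha} L^{1-\alpha}, \qquad r = \frac{\partial Y}{\partial K} = \alpha \left(\frac{L}{K}\right)^{1-\alpha}$$

Holding the existing capital stock $K$ fixed, the return to capital $r$ rises as $L$ grows, which is the mechanism behind the claim that existing capital owners benefit. Wages $w = (1-\alpha)(K/L)^{\alpha}$ fall as $L/K$ rises, so the gains within the pre-existing population are not uniform, but income accruing to owners of the pre-existing capital stock rises.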
To the extent you are disagreeing with this prevailing economic consensus, I think it would be worth getting more specific about why exactly you disagree with these models.