Insofar as the world has limited resources, the wealth and power of humans would then be greatly diminished. We would lose most control over the future.
Your argument seems open to two possible interpretations:
1. That we should prevent AIs from ever gaining a supermajority of control over the world’s wealth and resources, even if their doing so occurs through lawful and peaceful means.
2. That this concern stems from a Malthusian perspective, which argues that unchecked population growth would lead to reduced living standards for the existing, initial population due to the finite nature of resources.
Regarding Point (1): If your argument is that AIs should never control the large majority of the world’s wealth and resources, this appears to rest on a particular ethical judgment that assumes human primacy. However, this value judgment warrants deeper scrutiny. To help frame my objection, consider the case of whether to introduce emulated humans into society. Similar to what I advocated in this post, emulated humans could hypothetically obtain legal freedoms equal to those of biological humans. If so, the burden of proof would appear to fall on anyone arguing that this would be a bad outcome rather than a positive one. Assuming emulated humans are behaviorally and cognitively similar to biological humans, they would seemingly hold essentially the same ethical status. In that case, denying them freedoms while granting those same freedoms to biological humans would appear unjustifiable.
This leads to a broader philosophical question: What is the ethical basis for discriminating against one kind of mind versus another? In the case of your argument, it seems necessary to justify why humans should be entitled to exclusive control over the future and why AIs—assuming they attain sufficient sophistication—should not share similar entitlements. If this distinction is based on the type of physical “substrate” (e.g., biological versus computational), then additional justification is needed to explain why substrate should matter in determining moral or legal rights.
Currently, this distinction is relatively straightforward because AIs like GPT-4 lack the cognitive sophistication, coherent preferences, and agency typically required to justify granting them moral status. However, as AI continues to advance, this situation may change. Future AIs could potentially develop goals, preferences, and long-term planning abilities akin to those of humans. If and when that occurs, it becomes much harder to argue that humans have an inherently greater “right” to control the world’s wealth or determine the trajectory of the future. In such a scenario, ethical reasoning may suggest that advanced AIs deserve comparable consideration to humans.
This conclusion seems especially warranted under the assumption of preference utilitarianism, as I noted in the post. In this case, what matters is simply whether the AIs can be regarded as having morally relevant preferences, rather than whether they possess phenomenal consciousness or other features.
Regarding Point (2): If your concern is rooted in a Malthusian argument, then it seems to apply to human population growth just as much as it does to AI population growth. The key difference is simply the rate of growth. Human population growth is comparatively slower, meaning it would take longer to reach resource constraints. But if humans continued to grow their population at just 1% per year, for example, then over the span of 10,000 years, the population would grow by a factor of over 10^43. The ultimate outcome is the same: resources eventually become insufficient to sustain every individual at current standards of living. The only distinction is the timeline on which this resource depletion occurs.
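(As a quick sanity check of that figure, using only the illustrative numbers from the previous paragraph, namely a 1% annual growth rate compounded over 10,000 years:)

```python
# Compound growth at 1% per year over 10,000 years.
growth_factor = 1.01 ** 10_000
print(f"{growth_factor:.2e}")  # ~1.64e+43, i.e. a factor of over 10^43
```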
One potential solution to this Malthusian concern—whether applied to humans or AIs—is to coordinate limits on population growth. By setting a cap on the number of entities (whether human or AI), we could theoretically maintain sustainable resource levels. This is a practical solution that could work for both types of populations.
However, another solution lies in the mechanisms of property rights and market incentives. Under a robust system of property rights, it becomes less economically advantageous to add new entities when resources are scarce, as scarcity naturally raises costs and lowers the incentives to grow populations indiscriminately. Moreover, the existence of innovation, gains from trade, and economies of scale can make population growth beneficial for existing entities, even in a world with limited resources. By embedding new entities—human or AI—within a system of property rights, we ensure that they contribute to the broader economy in ways that improve overall living standards rather than diminish them.
This suggests that, as long as AIs adhere to the rule of law (including respecting property rights and the rights of other individuals), their introduction into the world could enhance living standards for most humans, even in a resource-constrained world. This outcome would contradict the naive Malthusian argument that adding new agents to the world inherently diminishes the wealth or power of existing humans. Rather, a well-designed legal system could enable humans to grow their wealth in absolute terms, even as their relative share of global wealth falls.
So there are several largely independent reasons not to create AI agents that have moral or legal rights:
1. Most people today likely want the future to be controlled by our human descendants, not by artificial agents. According to preference utilitarianism, this means that creating AIs that are likely to take over in the future is bad. Note that this preference doesn’t need to be justified, as the mere existence of the preference suffices for its moral significance. This is similar to how, according to preference utilitarianism, death is bad merely because we do not want to die. No additional justification for the badness of death is required.
2. Currently it looks like we could have this type of agentic AI quite soon, say in 15 years. That’s so soon that we (currently existing humans) could in the future be deprived of wealth and power by an exploding number of AI agents if we grant them a nonnegligible amount of rights. This could be quite bad for future welfare, including both our future preferences and our future wellbeing. So we shouldn’t make such agents in the first place.
3. Creating AI agents and giving them rights could easily lead to an AI population explosion and, sooner or later, a Malthusian catastrophe, potentially long after we are dead. This then wouldn’t affect us directly, but it would likely mean that most future agents, human or not, would have to live under very bad subsistence conditions that barely make their existence possible. This would lead to low welfare for such future agents. So we should avoid the creation of agentic AIs that would lead to such a population explosion.
At least points 2 and 3 would also apply to emulated humans, not just AI agents.
Point 3 also applies to actual humans, not just AI agents or ems. It is a reason to coordinate limits on population growth in general. However, these limits should be stronger for AI agents than for humans, because of points 1 and 2.
Under a robust system of property rights, it becomes less economically advantageous to add new entities when resources are scarce, as scarcity naturally raises costs and lowers the incentives to grow populations indiscriminately.
I don’t think this is a viable alternative to enforcing limits on population growth. Creating new agents could well be a “moral hazard” in the sense that the majority of the likely long-term resource cost of that agent (the resources it consumes or claims for itself) does not have to be paid by the creator of the agent, but by future society. So the creator could well have a personal incentive to make new agents, even though their overall long-term benefit to society is negative.
Currently it looks like we could have this type of agentic AI quite soon, say in 15 years. That’s so soon that we (currently existing humans) could in the future be deprived of wealth and power by an exploding number of AI agents if we grant them a nonnegligible amount of rights. This could be quite bad for future welfare, including both our future preferences and our future wellbeing. So we shouldn’t make such agents in the first place.
It is essential to carefully distinguish between absolute wealth and relative wealth in this discussion, as one of my key arguments depends heavily on understanding this distinction. Specifically, if my claims about the practical effects of population growth are correct, then a massive increase in the AI population would likely result in significant enrichment for the current inhabitants of the world—meaning those individuals who existed prior to this population explosion. This enrichment would manifest as an increase in their absolute standard of living. However, it is also true that their relative control over the world’s resources and influence would decrease as a result of the population growth.
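(To make the absolute/relative distinction concrete, here is a toy calculation with made-up numbers, purely illustrative rather than a forecast:)

```python
# Made-up numbers: suppose an AI population explosion grows total wealth 10x,
# while pre-existing humans end up holding only 30% of the new, larger total.
human_wealth_before = 100.0   # arbitrary units; humans initially hold everything
total_wealth_before = 100.0
total_wealth_after = 10 * total_wealth_before
human_share_after = 0.30
human_wealth_after = human_share_after * total_wealth_after

print(human_wealth_after / human_wealth_before)  # 3.0: absolute wealth has tripled
print(human_share_after)                         # 0.3: relative share fell from 100% to 30%
```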
If you disagree with this conclusion, it seems there are two primary ways to challenge it:
1. You could argue that the factors I previously mentioned—such as innovation, economies of scale, and gains from trade—would not apply in the case of AI. For instance, this could be because AIs might rationally choose not to trade with humans, opting instead to harm humans by stealing from or even killing them. This could occur despite an initial legal framework designed to prevent such actions.
2. You could argue that population growth in general is harmful to the people who currently exist, on the grounds that it diminishes their wealth and overall well-being.
While I am not sure, I read your comment as suggesting that you consider both objections potentially valid. In that case, let me address each point in turn.
If your objection is more like point (1):
It is difficult for me to reply fully to this idea within a single brief comment, so for now I will try to convince you of a weaker claim that I think may be sufficient to carry my point:
A major counterpoint to this objection is that, to the extent AIs are limited in their capabilities—much like humans—they could potentially be constrained by a well-designed legal system. Such a system could establish credible and enforceable threats of punishment for any agentic AI entities that violate the law. This would act as a deterrent, incentivizing agentic AIs to abide by the rules and cooperate peacefully.
Now, you might argue that not all AIs could be effectively constrained in this way. While that could be true (and I think it is worth discussing), I would hope we can find some common ground on the idea that at least some agentic AIs could be restrained through such mechanisms. If this is the case, then these AIs would have incentives to engage in mutually beneficial cooperation and trade with humans, even if they do not inherently share human values. This cooperative dynamic would create opportunities for mutual gains, enriching both humans and AIs.
If your objection is more like point (2):
If your objection is based on the idea that population growth inherently harms the people who already exist, I would argue that this perspective is at odds with the prevailing consensus in economics. In fact, it is widely regarded as a popular misconception that the world operates as a zero-sum system, where any gain for one group necessarily comes at the expense of another. Instead, standard economic models of growth and welfare generally predict that population growth is often beneficial to existing populations. It typically fosters innovation, expands markets, and creates opportunities for increased productivity, all of which frequently contribute to higher living standards for those who were already part of the population, especially those who own capital.
To the extent you are disagreeing with this prevailing economic consensus, I think it would be worth getting more specific about why exactly you disagree with these models.