Currently it looks like we could have this type of agentic AI quite soon, say within 15 years. That is soon enough that we (currently existing humans) could be deprived of wealth and power by an exploding number of AI agents if we grant them non-negligible rights. This could be quite bad for our future welfare, in terms of both our preference satisfaction and our wellbeing. So we shouldn't create such agents in the first place.
It is essential to distinguish between absolute and relative wealth here, as one of my key arguments depends on this distinction. If my claims about the practical effects of population growth are correct, then a massive increase in the AI population would likely leave the current inhabitants of the world (those who existed prior to the population explosion) significantly richer in absolute terms, raising their standard of living, even as their relative control over the world's resources and influence decreases.
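To make the distinction concrete, here is a toy numeric sketch (the figures are invented purely for illustration, not taken from any actual model):

```python
# Toy illustration of absolute vs. relative wealth (made-up numbers).
world_before = 100.0        # total wealth before the AI population explosion
human_share_before = 1.0    # humans initially hold everything

world_after = 1_000.0       # economy grows 10x via AI labor and innovation
human_share_after = 0.2     # humans now hold only 20% of a much larger pie

human_wealth_before = world_before * human_share_before  # 100.0
human_wealth_after = world_after * human_share_after     # 200.0

print(human_wealth_after > human_wealth_before)  # True: absolutely richer
print(human_share_after < human_share_before)    # True: relatively weaker
```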
If you disagree with this conclusion, it seems there are two primary ways to challenge it:
(1) You could argue that the factors I previously mentioned, such as innovation, economies of scale, and gains from trade, would not apply in the case of AI. For instance, AIs might rationally choose not to trade with humans, opting instead to harm them by stealing from or even killing them, despite an initial legal framework designed to prevent such actions.
(2) You could argue that population growth in general harms the people who currently exist, on the grounds that it diminishes their wealth and overall wellbeing.
While I am not sure, I read your comment as suggesting that you find both objections potentially valid. In that case, let me address each point in turn.
If your objection is more like point (1):
A full reply to this idea won't fit in a single brief comment, so for now I will try to convince you of a weaker claim that I think is sufficient to carry my point:
A major counterpoint to this objection is that, to the extent AIs are limited in their capabilities—much like humans—they could potentially be constrained by a well-designed legal system. Such a system could establish credible and enforceable threats of punishment for any agentic AI entities that violate the law. This would act as a deterrent, incentivizing agentic AIs to abide by the rules and cooperate peacefully.
Now, you might argue that not all AIs could be effectively constrained in this way. While that could be true (and I think it is worth discussing), I would hope we can find some common ground on the idea that at least some agentic AIs could be restrained through such mechanisms. If this is the case, then these AIs would have incentives to engage in mutually beneficial cooperation and trade with humans, even if they do not inherently share human values. This cooperative dynamic would create opportunities for mutual gains, enriching both humans and AIs.
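To illustrate the deterrence logic, here is a minimal sketch (a toy model of my own with made-up payoffs, not anything established in this thread): a rational, capability-limited agent complies whenever the expected cost of punishment exceeds the gain from breaking the law.

```python
# Minimal expected-payoff sketch of legal deterrence (toy numbers).
def rational_agent_complies(gain: float, p_caught: float,
                            punishment: float) -> bool:
    """True if the expected payoff of violating the law is below the
    compliance baseline of 0, i.e. gain - p_caught * punishment < 0."""
    return gain - p_caught * punishment < 0

# Credible, enforceable threats deter a capability-limited AI:
print(rational_agent_complies(gain=10, p_caught=0.8, punishment=50))   # True
# Weak enforcement fails to deter the same agent:
print(rational_agent_complies(gain=10, p_caught=0.05, punishment=50))  # False
```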
If your objection is more like point (2):
If your objection is based on the idea that population growth inherently harms the people who already exist, I would argue that this perspective is at odds with the prevailing consensus in economics. In fact, it is widely regarded as a popular misconception that the world operates as a zero-sum system, where any gain for one group necessarily comes at the expense of another. Instead, standard economic models of growth and welfare generally predict that population growth is often beneficial to existing populations. It typically fosters innovation, expands markets, and creates opportunities for increased productivity, all of which frequently contribute to higher living standards for those who were already part of the population, especially those who own capital.
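As a minimal sketch of the mechanism these models point to (a bare Cobb-Douglas illustration of my own, with placeholder parameters, ignoring innovation, capital accumulation, and wage effects): holding incumbent capital fixed, the return on that capital rises as the labor force grows.

```python
# Cobb-Douglas sketch: Y = K**alpha * L**(1 - alpha).
# Marginal product of capital r = alpha * (L / K)**(1 - alpha)
# rises as the (AI) labor force L grows, so incumbent capital owners
# earn more in absolute terms. Parameter values are placeholders.
alpha = 0.3   # conventional capital share
K = 100.0     # capital stock owned by existing humans (held fixed)

def capital_return(L: float) -> float:
    return alpha * (L / K) ** (1 - alpha)

for L in [100, 1_000, 10_000]:   # labor force exploding, e.g. via AI agents
    print(f"L = {L:>6}: r = {capital_return(L):.2f}")
# r roughly quintuples (a factor of 10**0.7) for every tenfold increase in L.
```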
To the extent you are disagreeing with this prevailing economic consensus, I think it would be worth getting more specific about why exactly you disagree with these models.
Sorry for the belated response. It is true that existing humans having access to a decreasing relative share of resources doesn't mean their absolute well-being decreases. I agree the latter may instead increase, e.g. if such AI agents can be constrained by a legal system. (Though, as I argued before, a rapidly exploding number of AI agents would likely mean they gain more and more political control, which might mean they eventually get rid of the legal protection of a human minority whose political influence keeps diminishing.)
However, this possibility only applies to increasing well-being or absolute wealth. Even then, it is still likely that we will lose most of our power and have to sacrifice a large amount of our autonomy. Humans do not just have a preference for hedonism and absolute wealth, but also for freedom and autonomy. Being largely disempowered by AI agents is incompatible with this preference. We may end up locked in an artificial paradise, a golden cage we can never escape.
So while our absolute wealth may increase with many agentic AIs, this is still uncertain, depending e.g. on whether stable, long-lasting legal protection for humans is compatible with a large number of AI agents gaining rights. And our autonomy will very likely decrease in any case. Overall, the outlook does not seem to clearly speak in favor of a future full of AI agents being positive for us.
Moreover, the above, and the points you mentioned, only apply to the second of the three objections I listed in my previous comment: what will happen to currently existing humans. Objections 1 (our overall preference for having human rather than AI descendants) and 3 (a looming Malthusian catastrophe affecting future beings) are further objections to creating an increasing number of AI agents.