Sorry for the belated response. It is true that existing humans having access to a decreasing relative share of resources doesn’t mean their absolute well-being decreases. I agree the latter may instead increase, e.g. if such AI agents can be constrained by a legal system. (Though, as I argued before, a rapidly exploding number of AI agents would likely gain more and more political control, which might mean they eventually get rid of the legal protections of a human minority whose political influence keeps diminishing.)
However, this possibility only applies to well-being or absolute wealth. Even then, it is still likely that we will lose most of our power and will have to sacrifice a large amount of our autonomy. Humans do not just have a preference for hedonism and absolute wealth, but also for freedom and autonomy. Being mostly disempowered by AI agents is incompatible with this preference. We may be locked in an artificial paradise inside a golden cage we can never escape.
So while our absolute wealth may increase with many agentic AIs, this is still uncertain, depending e.g. on whether stable, long-lasting legal protection for humans is compatible with a large number of AI agents gaining rights. And our autonomy will very likely decrease in any case. Overall, the outlook does not clearly speak in favor of a future full of AI agents being positive for us.
Moreover, the above, and the points you mentioned, only apply to the second of the three objections I listed in my previous comment, i.e. to what will happen to currently existing humans. Objections 1 (our overall preference for having human rather than AI descendants) and 3 (a looming Malthusian catastrophe affecting future beings) are further objections to creating an increasing number of AI agents.