I appreciate this proposal, but here is a counterargument.
Giving AI agents rights would lead to a situation similar to the repugnant conclusion: if we give agentic AIs some rights, we would likely soon be flooded with a huge number of rights-bearing artificial individuals. This would then create strong pressure (both directly, via the influence they wield, and abstractly, via considerations of justice) to give them more and more rights, until they have rights similar to those of humans, possibly including voting rights. Insofar as the world has limited resources, the wealth and power of humans would then be greatly diminished. We would lose most of our control over the future.
Anticipating these likely consequences, and employing backward induction, we have to conclude that we should not give AI agents rights. Arguably, creating agentic AIs in the first place may already be a step too far.
So there are several largely independent reasons not to create AI agents that have moral or legal rights:
1. Most people today likely want the future to be controlled by our human descendants, not by artificial agents. According to preference utilitarianism, this means that creating AIs that are likely to take over in the future is bad. Note that this preference doesn’t need to be justified: the mere existence of the preference suffices for its moral significance. This is similar to how, according to preference utilitarianism, death is bad merely because we do not want to die; no additional justification for the badness of death is required.
2. Currently it looks like we could have this type of agentic AI quite soon, perhaps within 15 years. That is so soon that we (currently existing humans) could in the future be deprived of wealth and power by an exploding number of AI agents if we grant them a non-negligible amount of rights. This could be quite bad for our future welfare, in terms of both our future preferences and our future wellbeing. So we shouldn’t make such agents in the first place.
3. Creating AI agents and giving them rights could easily lead to an AI population explosion and, sooner or later, a Malthusian catastrophe, potentially long after we are dead. This wouldn’t affect us directly, but it would likely mean that most future agents, human or not, would have to live under subsistence conditions that barely make their existence possible. This would lead to very low welfare for such future agents. So we should avoid creating agentic AIs that would lead to such a population explosion.
At least points 2 and 3 would also apply to emulated humans, not just to AI agents.
Point 3 also applies to actual humans, not just to AI agents or ems. It is a reason to coordinate limits on population growth in general. However, these limits should be stronger for AI agents than for humans, because of points 1 and 2.
I don’t think this is a viable alternative to enforcing limits on population growth. Creating new agents could well involve a “moral hazard” in the sense that most of an agent’s likely long-term resource cost (the resources it consumes or claims for itself) is paid not by the agent’s creator but by future society. So the creator could well have a personal incentive to make new agents, even though the overall long-term effect of doing so is negative.