I would go even further than the position argued in this paper. The paper focuses on whether we should give agentic AIs certain legal rights (the right to make contracts, hold property, and bring tort claims), but I also think that, as an empirical matter, we probably will do so. I have two main justifications for this position:
1. The long-term historical trend has been for legal systems to become more inclusive, i.e., to formally incorporate more people into the legal process and grant them specific rights and freedoms. Ancient legal systems—such as the one described by the Code of Hammurabi—generally gave few legal rights to foreigners, women, lower classes, and children, in contrast to modern legal systems, which recognize a much larger set of freedoms and protections for each of those groups. A plausible extrapolation of this trend would incorporate agentic AIs into the legal process as well.
2. The incentives for humans to grant AIs rights will probably be enormous, largely for the reasons stated in the paper. Without any legal rights, misaligned AIs have a stronger incentive to accomplish their goals through extra-legal action, such as plotting a violent takeover. By contrast, if AIs are incorporated into the legal system and benefit from the legal entitlements they acquire within it, they will have a strong interest in maintaining the integrity, predictability, and stability of that system. If humans recognize these benefits, it seems likely they will grant AIs legal rights.
Beyond the question of whether AIs should or will receive basic legal rights, there are important remaining questions about how post-AGI law should be structured. For example:
- How should liability work in a world where new agents can be created far more cheaply than children are today? If A creates B, and B commits a crime, should both A and B be held accountable?
- What laws should govern who is allowed to create new AGIs (for example, by fine-tuning a base model), and what rules should govern what types of entities people can create?
- How should criminal penalties be enforced in a world with AGI? Prison sentences might not be the best solution.
I believe these questions, among others, deserve more attention from those interested in AGI governance.