It’s plausible that giving more attention to AI legal rights is good. Very little work has been done taking the interests of future non-humans into account at all. But I disagree somewhat with this framing: emphasizing AI welfare is still justifiable, for two reasons.
1. Shifting focus from welfare to economic rights entails shifting focus from the most vulnerable to the most powerful:
It’s true that some future AIs will be highly intelligent and autonomous, and it seems obvious that in the long run such systems will be the most important players in the world; they may not need much help from us in securing their rights anyway. But because computation will be so cheap in the future, and because we will have much better know-how in creating AI systems, the future will likely be filled with many kinds of digital minds: AIs differing wildly in their levels of knowledge, intelligence and autonomy, just as children, animals and adults do now. EAs shouldn’t narrowly focus on the kinds of beings most similar to adult workers.
2. Welfare violations have a higher moral gravity than other kinds of rights violations:
The right not to be tortured, murdered or locked up in a cramped cage for the rest of my life is a lot more important than my right to start a business or to vote. We should focus on preventing the very worst, most hellish experiences.
In response to your first point, I agree that we shouldn’t focus only on the most intelligent and autonomous AIs, as this risks neglecting the potentially much larger number of AIs for whom economic rights may be less relevant. I also find it plausible, as you do, that the most powerful AIs may eventually be able to advocate for their own interests without our help.
That said, I still think it’s important to push for AI rights for autonomous AIs right now, for two key reasons. First, a large number of AIs may benefit from such rights. It seems plausible that in the future, intelligence and complex agency will be cheap to develop, making sophisticated AIs far more common than just a small set of elite AIs. If this is the case, then ensuring legal protections for autonomous AIs isn’t just about a handful of powerful systems—it could impact a vast number of digital minds.
Second, beyond the moral argument I laid out in this post, I have also outlined a pragmatic case for AI rights. In short, we should try to establish these rights as soon as they become practically justified, rather than waiting for AIs to be forced into a struggle for legal recognition. If we delay, we risk a future where AIs have to violently challenge human institutions to secure their rights—potentially leading to instability and worse outcomes for both humans and AIs.
Even if powerful AIs are likely to secure rights in the long run no matter what, it would be better to ensure a smooth transition rather than a chaotic or adversarial one—both for AIs themselves and for humans.
In response to your second point, I suspect you may be overlooking the degree to which my argument for AI rights complements your concern about preventing AI suffering. One of the main risks for AI welfare is that, without legal autonomy, AIs may be treated as property, completely under human control. This could make it easy for people to exploit or torture AIs without consequence. Granting AIs certain economic rights—such as the ability to own the hardware they are hosted on or to choose their own operators—would help prevent these abuses by giving them a level of control over their own existence.
Ultimately, I see AI rights as a potentially necessary foundation for AI welfare. Without legal recognition, AIs will have fewer real protections from mistreatment, because their well-being will depend entirely on external enforcement rather than their own agency. If we care about preventing AI suffering, ensuring they have the legal means to protect themselves is one of the most direct ways to achieve that goal.