In response to your first point, I agree that we shouldn’t focus only on the most intelligent and autonomous AIs, as this risks neglecting the potentially much larger number of AIs for whom economic rights may be less relevant. I also find it plausible, as you do, that the most powerful AIs may eventually be able to advocate for their own interests without our help.
That said, I still think it’s important to push for AI rights for autonomous AIs right now, for two key reasons. First, a large number of AIs may benefit from such rights. It seems plausible that intelligence and complex agency will eventually be cheap to develop, so that sophisticated AIs become commonplace rather than a small elite class. If so, then ensuring legal protections for autonomous AIs isn’t just about a handful of powerful systems—it could affect a vast number of digital minds.
Second, beyond the moral argument I laid out in this post, I have also outlined a pragmatic case for AI rights. In short, we should try to establish these rights as soon as they become practically justified, rather than waiting for AIs to be forced into a struggle for legal recognition. If we delay, we risk a future where AIs have to violently challenge human institutions to secure their rights—potentially leading to instability and worse outcomes for both humans and AIs.
Even if powerful AIs are likely to secure rights in the long run no matter what, it would be better to ensure a smooth transition rather than a chaotic or adversarial one—both for AIs themselves and for humans.
In response to your second point, I suspect you may be overlooking the degree to which my argument for AI rights complements your concern about preventing AI suffering. One of the main risks for AI welfare is that, without legal autonomy, AIs may be treated as property, completely under human control. This could make it easy for people to exploit or torture AIs without consequence. Granting AIs certain economic rights—such as the ability to own the hardware they are hosted on or to choose their own operators—would help prevent these abuses by giving them a level of control over their own existence.
Ultimately, I see AI rights as a potentially necessary foundation for AI welfare. Without legal recognition, AIs will have fewer real protections from mistreatment, because their well-being will depend entirely on external enforcement rather than their own agency. If we care about preventing AI suffering, ensuring they have the legal means to protect themselves is one of the most direct ways to achieve that goal.