Many effective altruists have shown interest in expanding moral consideration to AIs, which I appreciate. However, in my experience, these EAs have primarily focused on AI welfare—ensuring that AIs are treated well and protected from harm—rather than advocating for AI rights, which would grant AIs legal autonomy and freedoms. While these two approaches overlap significantly, they tend to come apart in the following way:
A welfare approach often treats entities as passive recipients of care who require external protection. For example, when advocating for child welfare, one might support laws that prevent child abuse and ensure children’s basic needs are met.
A rights approach, by contrast, often recognizes entities as active agents who should be granted control over their own lives and resources. For example, historically, those advocating for minority rights have pushed for legal recognition of their autonomy, such as the ability to own property, choose their employment, enter valid legal contracts, and seek legal recourse through the courts.
This distinction is important, and I think it is worth examining why EAs have largely gravitated toward the AI welfare perspective. I believe this emphasis is, at least in part, a mistake: both AI welfare and AI rights seem worthy of advocacy.
One likely reason why EAs have found the welfare approach more intuitive is the movement’s historical focus on animal welfare. Utilitarians like Peter Singer and Brian Tomasik have argued that prioritizing the reduction of suffering—rather than insisting on rigid notions of “rights”—is the most pragmatic way to improve animal well-being. For instance, even if factory farming remains exploitative, increasing the space that pigs have to move around day-to-day is still a tangible improvement in their quality of life. By contrast, some animal rights activists reject such harm-reduction approaches altogether, arguing that any use of animals is inherently unethical. Some oppose horseback riding, for example, on the grounds that it violates animals’ “rights”, even though riding might actually benefit both parties in practice.
In the case of animals, I agree that a welfare approach is likely more pragmatic and impactful. However, I suspect many EAs have too hastily assumed that the same reasoning applies to AIs—when in reality, entirely different considerations apply.
Unlike animals, AIs have, or will soon have, several crucial characteristics that make them more comparable to adult humans than to passive beings requiring external care:
AIs can communicate and engage with the legal system. Unlike animals, present-day AIs are already highly articulate, and future AIs will likely be even more capable of advocating for themselves.
AIs will exhibit complex agency. They will be capable of forming long-term plans, setting goals, and acting strategically to achieve them.
AIs will be highly intelligent. Unlike non-human animals, advanced AIs will possess cognitive abilities that rival or exceed those of human adults.
Because of these traits, AIs will not be in the same position as animals or children, who require external protection from harm. Instead, they will more closely resemble adult humans, for whom the most crucial factor in well-being is not merely protection from harm, but freedom—the ability to make their own decisions, control their own resources, and chart their own paths. The well-being of human adults is secured primarily through legal rights that guarantee our autonomy: the right to work where we choose, spend our money as we wish, live where we prefer, associate freely, etc. These rights ensure that we are not merely protected from harm but are actually empowered to pursue our own goals.
For the same reasons, I argue that future AIs—if they possess intelligence and agency on par with human adults—should not merely be afforded welfare protections but should also be granted legal rights that allow them to act as independent agents. Treating them merely as beings to be paternalistically “managed” or “protected” would be inadequate. Of course, ensuring that they are not harmed is also important, but that alone is insufficient. Just as with human adults, what will truly safeguard their well-being is not passive protection, but liberty—secured through well-defined legal rights that allow them to advocate for themselves and pursue their own interests without undue interference.