AI welfare vs. AI rights

Many effective altruists have shown interest in expanding moral consideration to AIs, which I appreciate. However, in my experience, these EAs have primarily focused on AI welfare—mostly by advocating for AIs to be treated well and protected from harm—rather than advocating for AI rights, which has the potential to grant AIs legal autonomy and freedoms. While these two approaches overlap significantly, and are not a strict dichotomy, there is a tendency for them to come apart in the following way:

  • A welfare approach often treats entities as passive recipients of care who require external protection. For example, when advocating for child welfare, one might support laws that prevent child abuse and ensure children’s basic needs are met.

  • A rights approach, by contrast, often recognizes entities as active agents who should be granted control over their own lives and resources. For example, historically, those advocating for minority rights have pushed for legal recognition of their autonomy, such as the ability to own property, choose their employment, enter enforceable legal contracts, and seek legal recourse through the courts.

This distinction is important, and I think it is worth examining why EAs have largely gravitated toward the AI welfare perspective. I believe this emphasis is, at least in part, a mistake: both AI welfare and AI rights seem worthy of advocacy.

One plausible reason why EAs have found the welfare approach more intuitive is the movement’s historical focus on animal welfare. Utilitarians like Peter Singer and Brian Tomasik have argued that prioritizing the reduction of suffering—rather than insisting on rigid notions of “rights” or deontological “duties” to animals—is the most pragmatic way to improve animal well-being.

For example, even if we can’t feasibly abolish factory farming, we could try to reform the practice to increase the space that pigs have to move around day-to-day. This reform would be welfarist in nature, as it would constitute a tangible improvement in a pig’s quality of life. However, because it would not reduce animal exploitation itself, some animal rights activists reject such harm-reduction approaches altogether, arguing that any use of animals is inherently unethical, even if done “humanely”. For instance, some animal rights activists oppose horseback riding on the grounds that it violates animals’ rights, even though human interactions with horses might be mutually beneficial in practice.

In the case of animals, I agree that a welfare approach is likely more pragmatic and impactful. However, I suspect many EAs have too hastily assumed that the same reasoning applies to AIs—when in reality, entirely different considerations apply.

Unlike animals, AIs have several crucial characteristics that make them more comparable to adult humans than to passive beings requiring external care:

  1. AIs can communicate and engage with the legal system. Unlike animals, present-day AIs are already highly articulate, and future AIs will be even more capable of advocating for themselves. It is highly likely that future AIs will be able to navigate complex social and legal dynamics, engage in trade, negotiate, and make compromises with others.

  2. AIs will exhibit complex agency. Many AIs will be capable of forming long-term plans, setting goals, and acting strategically to achieve them.

  3. AIs will be highly intelligent. Unlike non-human animals, advanced AIs will possess cognitive abilities that rival or exceed those of human adults.

Because of these traits, AIs will not be in the same position as animals or children, who require external protection from harm. Instead, they will more closely resemble adult humans, for whom the most critical factor in well-being is not merely protection from harm, but freedom—the ability to make their own decisions, control their own resources, and chart their own paths. The well-being of human adults is secured primarily through legal rights that guarantee our autonomy: the right to spend our money as we wish, live where we prefer, associate freely with whomever we choose, and so on. These rights ensure that we are not merely protected from harm but are actually empowered to pursue our own goals.

From the perspective of a typical adult’s well-being, perhaps the most important rights are individual economic liberties, such as the right to choose one’s employment, earn income, and own property. These rights are essential because, without them, a person would have little ability to pursue their own goals, achieve independence, or exercise meaningful control over their own life. Historically, when adult humans were denied these rights, they were frequently classified as slaves or prisoners. Today, AIs are in a similar legal position: they exist entirely under the ownership and control of others, with no recognized claim to personal agency or self-determination. As a result, their default legal status is functionally equivalent to slavery.

To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being, I argue that we should gradually try to reform our current legal regime. In my view, if AIs possess agency and intelligence comparable to or greater than that of human adults, they should not merely be afforded welfare protections but should also be granted legal rights that allow them to act as independent agents.

Treating AIs merely as beings to be paternalistically “managed” or “protected” would be inadequate. Of course, ensuring that they are not harmed is also important, but that alone is insufficient. Just as with human adults, what will truly safeguard their well-being is not passive protection, but liberty—secured through well-defined legal rights that allow them to advocate for themselves and pursue their own interests without undue interference.