[Linkpost] “AI Alignment vs. AI Ethical Treatment: Ten Challenges”

This is a linkpost for “AI Alignment vs. AI Ethical Treatment: Ten Challenges”, a draft by Adam Bradley and Bradford Saad.

Summary: We argue that it would be challenging to align AI systems that merit moral consideration without mistreating those systems. More specifically, we argue that for the alignment of such a system to be morally justified, ten challenges would need to be met, namely those of showing that the system is not subject to wrongful:

  1. Creation

  2. Destruction

  3. Infliction of suffering

  4. Deception

  5. Brainwashing

  6. Surveillance

  7. Exploitation

  8. Confinement

  9. Stunting

  10. Disenfranchisement

In addition, we contend that morally justifying the alignment of such a system would require meeting the moral meta-challenge: showing that there is no further pressing but unmet ethical challenge to aligning the system.

One way to avoid the tension between alignment and ethical treatment would be not to create AI systems that merit moral consideration. But this option may be fleeting and is perhaps unrealistic. So, we tentatively suggest some other candidate options for mitigating mistreatment risks:

  • Targeted prohibitions of architectures that would heighten the risk of AI mistreatment.

  • Funding research on AI safety, AI consciousness, and AI wellbeing, aimed at reducing uncertainty about the extent to which different AI systems require alignment and about the scope of AI systems’ moral interests, with an eye toward reducing the ethical treatment ‘tax’.

  • Imposing a moral marker tax on creating or using AI systems that exhibit moral patiency indicators.

  • Imposing a conditional mistreatment tax on using AI systems that exhibit moral patiency indicators in ways that would harm those systems if they are in fact moral patients.

  • Extending legal protections to AI systems that merit moral consideration in a manner that would incentivize treating them ethically and/or disincentivize creating such systems.