(Edit: Accidentally posted a duplicate link.)
Aligned with whom? by Anton Korinek and Avital Balwit (2022) offers a possible answer. They write that an aligned AI system should have:
- direct alignment with its operator, and
- social alignment with society at large.
Some examples of failures in direct and social alignment are provided in Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek, 2021).
We could expand the moral circle further by aligning AI with the interests of both humans and non-human animals. Direct, social, and sentient alignment?
As you mentioned, these alignments present conflicting interests that need mediation and resolution.