Executive summary: If moral realism and the possibility of good reflective governance are correct, this has implications for AI strategy, including prioritizing AI applications that facilitate wisdom, cooperation, and reflection, and potentially aligning AI with “the good” rather than just with humans.
Key points:
It may be important to try to get AI systems into the basin of good reflective governance to help humanity.
This makes AI applications that facilitate wisdom, cooperation, and reflective processes a relatively higher strategic priority.
Aligning AI with “the good” rather than just with humans is a potential approach, but comes with risks if it fails.
Aligning AI with “the good” may be easier than aligning with humans, so it’s worth considering as a possibility.
Even if not the primary goal, aligning AI with “the good” could provide a backup “saving throw” if alignment with humans fails.
Potential barriers to this approach include not knowing how to implement it, the possibility it makes aligning with humans harder, and political costliness.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.