Executive summary: Compromise and cooperation between agents with differing values can be mutually beneficial, and we should develop institutions and mechanisms to encourage compromise to reduce risks from powerful future technologies like AI.
Key points:
- When agents with differing values compete for power, compromise solutions can be mutually advantageous compared to winner-takes-all conflict.
- Possible ways to promote compromise include advancing moral tolerance, democracy, trade, social stability, global governance, and philosophical sophistication.
- International cooperation, especially avoiding an AI arms race between nations, is important for ensuring AI is developed with less risk-taking and more planning to avert potential harms.
- Catastrophic risks could worsen prospects for compromise by increasing international hostility and accelerating AI races with less concern for safety.
- Even from a pure negative utilitarian perspective, reducing non-extinction risks may be net positive by maintaining a relatively peaceful trajectory, though this is uncertain.
- Sharing information between agents with differing values can be mutually beneficial under certain conditions, and mechanisms to compensate for information externalities are worth exploring.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.