I agree with your reasoning here—while I think working on s-risks from AI conflict is a top priority, I wouldn’t give Dawn’s argument for it. This post gives the main arguments for why some “rational” AIs wouldn’t avoid conflicts by default, and some high-level ways we could steer AIs into the subset that would.
Agreed, and thanks for linking the article!