I agree with your reasoning here. While I think working on s-risks from AI conflict is a top priority, I wouldn't give Dawn's argument for it. This post gives the main arguments for why some "rational" AIs wouldn't avoid conflicts by default, and some high-level ways we could steer AIs into the subset that would.
Agreed, and thanks for linking the article!