On the list of most important things in the world, maintaining international peace and stability rates very highly; instability is a critical risk factor for global catastrophe or x-risk. […] Lethal (or nonlethal) AWSs could also increase states’ ability to perpetrate violence against their own citizens; it seems unclear, however, whether this increases or decreases the stability of those states.
[This comment is sort-of a tangent.]
I definitely think that war and instability could serve as very important risk factors for global catastrophic and existential risk.
But it seems plausible to me that the odds of global, long-lasting totalitarianism are in the same general ballpark as the odds of some of the existential catastrophes more typically worried about. (The only quantitative estimates of the former that I’m aware of come from Bryan Caplan. See also.) And such a regime would probably itself constitute an existential catastrophe (at least by Bostrom and Ord’s definitions; see also).
As such, I’m hesitant to treat “increased political stability” as an unalloyed existential security factor in all cases—some forms of it, in some contexts, could instead be an important existential risk factor.
So if AWSs do increase the stability of autocratic states—or decouple their stability from how much popular support they have—this could, in my view, be one of their most troubling consequences.
(But if one buys all of the above arguments, that might push in favour of focusing on other things—e.g., genetic engineering, surveillance, global governance—even more than it pushes in favour of focusing on AWSs.)