But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”).
The main dynamic I have in mind there is ‘country X being overwhelmingly technologically advantaged/disadvantaged’ being treated as an outcome on par with global destruction, which drives racing and creates the necessity for international coordination to set global policy.
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.
Biotech threats are driven by violence. On AI, for rational regulators of a global state, a 1% or 10% chance of destroying society looks like enough to mobilize immense resources and to delay deployment of dangerous tech for safety engineering and testing. There are separate epistemic and internal coordination issues that lead to failures of the rational social planner model (e.g. US coronavirus policy has predictably failed to serve US interests or even the reelection aims of current officeholders; the underuse of Tetlockian forecasting), and these loom large (it’s hard to come up with a rational planner model that explains the observed level of preparation for pandemics and AI disasters).
I’d say that given epistemic rationality in social policy-setting, you’re left with a big international coordination/brinksmanship issue, but you would get strict regulation against blowing up the world for small increments of profit.