I’d say it’s the other way around, because longtermism increases both rewards and costs in prisoner’s dilemmas. Consider an AGI race or nuclear war. Longtermism can increase the attraction of control over the future (e.g. wanting the long-term future to follow religion X instead of Y, or to be communist rather than capitalist). During the US nuclear monopoly some scientists advocated for preemptive war based on ideas about long-run totalitarianism. So the payoff stakes of C-C are magnified, but likewise for D-C and C-D.
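A minimal sketch of that point, with purely illustrative payoff numbers (not taken from the discussion): multiplying every entry of a prisoner’s dilemma payoff matrix by a “longtermist stakes” factor raises the value of C-C, but raises D-C and C-D in the same proportion, so defection remains dominant and the dilemma is not dissolved.

```python
# Illustrative sketch (assumed payoff numbers): scaling all payoffs by a
# "longtermism" weight magnifies the stakes of C-C, D-C, and C-D alike,
# leaving the prisoner's-dilemma incentive structure unchanged.

def payoffs(longtermism_weight=1.0):
    # Row player's payoffs: (my_move, their_move) -> value.
    base = {
        ("C", "C"): 3,   # mutual cooperation (e.g. coordinated safety)
        ("D", "C"): 5,   # unilateral defection (e.g. racing ahead)
        ("C", "D"): 0,   # being defected against
        ("D", "D"): 1,   # mutual defection (e.g. arms race / war)
    }
    return {k: v * longtermism_weight for k, v in base.items()}

for w in (1.0, 100.0):
    p = payoffs(w)
    # Defection still strictly dominates cooperation at any positive weight.
    assert p[("D", "C")] > p[("C", "C")] and p[("D", "D")] > p[("C", "D")]
    print(w, p)
```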
On the other hand, effective bargaining and cooperation between players today is sufficient to reap almost all the benefits of safety (most of which depend more on not investing in destruction than on investing in safety, and the threat of destruction for the current generation is enough to pay for plenty of safety investment).
And coordinating on deals in the interest of current parties is closer to the current world than fanatical longtermism.
But the critical thing is that risk is not just a matter of ‘investment in safety’ but of investments in catastrophically risky moves, driven by competitive games that would be ruled out under an optimal allocation.
Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.
That’s not a very firm belief on my part—I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I’d be surprised if the latter were approximately none of the problem.
But this is essentially separate from the global public goods issue, which you also seem to consider important (if I’m understanding your original point about “even the largest nation-states being only a small fraction of the world”),
The main dynamic I have in mind there is ‘country X being overwhelmingly technologically advantaged/disadvantaged’ being treated as an outcome on par with global destruction, which drives racing and makes international coordination necessary to set global policy.
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.
Biotech threats are driven by violence. On AI, for rational regulators of a global state, a 1% or 10% chance of destroying society looks like enough to mobilize immense resources and to delay deployment of dangerous tech for safety engineering and testing. There are separate epistemic and internal coordination issues that cause failures of the ‘rational’ part of the rational social planner model (e.g. US coronavirus policy has predictably failed to serve US interests or even the reelection aims of current officeholders, and Tetlockian forecasting remains underused), and these loom large: it’s hard to come up with a rational planner model that explains the observed level of preparation for pandemics and AI disasters.
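A back-of-the-envelope sketch of the rational-regulator claim, with assumed, illustrative numbers (world output and the cost of “destroying society” are stand-ins, not figures from the discussion): even a 1–10% chance of catastrophe implies an expected loss large enough to justify enormous safety spending and delay.

```python
# Assumed, illustrative figures for a simple expected-value calculation.
world_gdp = 100e12          # ~$100 trillion/year of global output (rough)
years_of_output_lost = 20   # crude stand-in for "destroying society"

for p_catastrophe in (0.01, 0.10):
    expected_loss = p_catastrophe * world_gdp * years_of_output_lost
    print(f"p={p_catastrophe:.0%}: expected loss ≈ ${expected_loss / 1e12:.0f} trillion")
    # On this simple view, any safety-engineering and testing program
    # costing less than the expected loss is worth funding and waiting for.
```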
I’d say that given epistemic rationality in social policy-setting, you’re left with a big international coordination/brinksmanship issue, but you would get strict regulation against blowing up the world for small increments of profit.