Thank you for your carefully thought-through essay on AI governance. Given your success as a forecaster of geopolitical events, could you sketch out for us how we might enforce AI governance on, for example, Iran, North Korea, and Russia? You mention sensors on chips to report problematic behavior, etc. However, badly behaving nations might develop their own fabs. We could follow the example of past attacks on Iran’s nuclear weapons program. But would overt or covert military action risk missing the creation of a “black ball” on the one hand, or escalating into global nuclear, chemical, or biological conflict on the other?
These are difficult problems, but thankfully not ones we need to deal with immediately. None of Iran, North Korea, or Russia produces advanced chips, nor is any of them particularly close to the state of the art in ML; if chip manufacturers build in on-chip monitoring and cloud compute is restricted, there is little chance these states accelerate. And export controls helped delay the first two from getting nuclear weapons for decades, so they are clearly a useful mechanism here as well. In addition, the incentive for nuclear states or otherwise dangerous rogue actors to develop AGI as a strategic asset is lessened if such systems aren’t needed to maintain the balance of power: a global moratorium makes these states less likely to feel they must keep up in order to stay in power.
That said, a moratorium isn’t a permanent solution to the proliferation of dangerous technology, even if the regime were to become permanent. As with nuclear weapons, we can expect to raise the costs of violating norms to prohibitive levels, and we can delay things for quite a long time; but if safety research doesn’t progress further, and we remain (or become) convinced that unaligned ASI is an existential threat, we would need to continually reassess how strong sanctions and enforcement need to be to prevent existential catastrophe. Thankfully, if we get a moratorium in non-rogue states, we don’t need to answer these questions this decade, or maybe even the next.