This advocates for risking nuclear war for the sake of preventing mere “AI training runs”. I find it highly unlikely that this risk-reward trade-off makes sense at a 10% x-risk estimate.
All else equal, this depends on what increase in the risk of nuclear war you’re trading off against what decrease in x-risk from AI. We may have “increased” the risk of nuclear war by providing aid to Ukraine in its war against Russia, but if it was indeed an increase it was probably small and worth the trade-off[1] against our other goals (such as disincentivizing the starting of wars that might lead to nuclear escalation in the first place). I think approximately the only unusual part of Eliezer’s argument is that he doesn’t beat around the bush in spelling out the implications.
Asserted for the sake of argument; I haven’t actually demonstrated that this is true, but my point is more that there are many situations where we behave as if it is obviously a worthwhile trade-off to marginally increase the risk of nuclear war.
He’s not talking about a “marginal increase” in risk of nuclear war. What Eliezer is proposing is nuclear blackmail.
If China told us today, “You have 3 months to disband OpenAI or we will nuke you,” what are the chances that the US would comply? I guarantee you they are almost zero, because if the US gives in, then China can demand something else, and then something else, and so on. Instead, the US would probably try to talk them out of their ultimatum, or failing that, launch a preemptive strike.
If the deadline does come, China can either launch the nukes and start Armageddon, or reveal the threat as empty and no longer be taken seriously, in which case the whole exercise is worthless.
He proposes instituting an international treaty, which seems to be aiming for the reference class of existing treaties around the proliferation of nuclear and biological weapons. He is not proposing that the United States issue unilateral threats of nuclear first strikes.
I do not believe this interpretation is correct. Here is the passage again, including the previous paragraph for added context:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for anyone, including governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
He advocates for bombing datacentres and being prepared to start shooting conflicts to destroy GPU clusters, and then advocates for “running some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs”. I cannot see any interpretation other than “threaten to bomb nuclear-armed countries that train AIs”.
To be fair, upon reading it again it’s more likely he means “threaten to conventionally bomb datacentres”. But this is still nuclear brinksmanship: bombing Russia or China is an act of war, carrying a high chance of nuclear exchange.
Your post begins with,

I do not believe this interpretation is correct.
And ends with,

To be fair, upon reading it again it’s more likely he means “threaten to conventionally bomb datacentres”.
If in the writing of a comment you realize that you were wrong, you can just say that.