When a very prominent member of the community is calling for governments to pre-commit to pre-emptive military strikes against countries allowing the construction of powerful AI in the relatively near term, including against nuclear powers*, it’s really time for people to take seriously the arguments for rejecting naive utilitarianism, where you do crazy-sounding things whenever a quick expected-value calculation makes them look maximizing.
*At least I assume that’s what he means by being prepared to risk a higher chance of nuclear war.
Clarification for anyone reading this comment who hasn’t read the article – the article calls for governments to adopt clear policies involving potential preemptive military strikes in certain circumstances (specifically, against a hypothetical “rogue datacenter”, since such datacenters could be used to build AGI), but it is not calling for any specific military strike right now.
Agreed! I think the policy proposal is a good one that makes a lot of sense, and I also think this is a good time to remind people that “international treaties with teeth are plausibly necessary here” doesn’t mean it’s open season on terrible naively consequentialist ideas that sound “similarly extreme”. See the Death With Dignity FAQ.
This goes considerably beyond ‘international treaties with teeth are plausibly necessary here’:
‘If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.’
Eliezer is proposing attacks on any countries that are building AI-above-a-certain-level, whether or not they sign up to the treaty. That is not a treaty enforcement mechanism. I also think “with teeth” kind of obscures by abstraction here (since it doesn’t necessarily sound like it means war/violence, but that’s what’s being proposed.)
> This goes considerably beyond ‘international treaties with teeth are plausibly necessary here’… Eliezer is proposing attacks on any countries that are building AI-above-a-certain-level, whether or not they sign up to the treaty.
Is this actually inconsistent? If a country doesn’t sign up for the Biological Weapons Convention, and then acts in flagrant disregard of it, wouldn’t it expect to face retaliatory action from signatories, including, depending on specifics, plausibly up to military force? My sense is that the people who pushed for the introduction and enforcement of the BWC would have imagined such a response as within bounds.
> I also think “with teeth” kind of obscures by abstraction here (since it doesn’t necessarily sound like it means war/violence, but that’s what’s being proposed.)
I don’t think what Eliezer is proposing would necessarily mean war/violence either – conditional on a world actually getting to the point where major countries are agreeing to such a treaty, I find it plausible that smaller countries would simply acquiesce in shutting down rogue datacenters. If they didn’t, diplomacy would be used before military force, and then probably economic sanctions. Eliezer is saying that governments should be willing to escalate to military force if necessary, but I don’t think it’s obvious that in such a world military force would actually be necessary.
Yep, I +1 this response. I don’t think Eliezer is proposing anything unusual (given the belief that AGI is more dangerous than nukes, which is a very common belief in EA, though not universally shared). I think the unusual aspect is mostly just that Eliezer is being frank and honest about what treating AGI development and proliferation like nuclear proliferation looks like in the real world.
I’m not sure I have much to add. I do have concerns about how Eliezer wrote some of this letter, given the predictable pushback it’s seen, though maybe breaking the Overton window is a price worth paying? I’m not sure.
In any case, I just wanted to note that we have at least two historical examples of nations carrying out airstrikes on facilities in other countries without that leading to war, though admittedly the nations attacked were not nuclear powers:
Operation Opera—where Israeli jets destroyed an unfinished nuclear reactor in Iraq in 1981.
Operation Orchard—where the Israeli air force (again) destroyed a suspected covert nuclear facility in Syria in 2007.
Both cases involved one nation acting somewhat unilaterally against another, destroying the other nation’s capability with an airstrike, and what followed was not war but sabre-rattling and proxy conflict. (Note: that’s my takeaway as a lay non-expert, and I may be wrong about the consequences of these strikes! The consequences of Opera especially seem to be a matter of some historical debate.)
I’m sure other historical examples could be found that shed light on what Eliezer’s foreign policy would mean, though I accept that with nuclear-armed states, all bets are off. Also worth considering: China has (as far as I know) an unconditional no-first-use policy on nuclear weapons, but that doesn’t preclude retaliation for non-nuclear airstrikes on Chinese soil, such as disrupting trade, mass cyberattacks, or invading Taiwan in response, or reversing that policy once actually under attack.
In both cases, that’s a nuclear power attacking a non-nuclear one. Contrast how Putin is being dealt with for doing Putin things—no one is suggesting bombing Russia.
Yeah, haven’t we learned anything from the last 6 months?
Looking forward to seeing the CEA, Toby Ord, and Will MacAskill statements condemning EY for calling for state-sponsored terrorism in a national magazine
> …the article calls for governments to adopt clear policies involving potential preemptive military strikes in certain circumstances … but it is not calling for any specific military strike right now.
Have edited. Does that help?
Yeah, I think that’s better
> I think the unusual aspect is mostly just that Eliezer is being frank and honest about what treating AGI development and proliferation like nuclear proliferation looks like in the real world.
He explained his reasons for doing that here:
https://twitter.com/ESYudkowsky/status/1641452620081668098
https://www.lesswrong.com/posts/Lz64L3yJEtYGkzMzu/rationality-and-the-english-language