A theory-of-victory approach won’t work for AI. Theories of victory are born of the study of what hasn’t worked in warfare. With AI, you have nothing to draw on to create an actual theory of victory. Instead, you appear to be proposing a few different strategies, none of which seem very well thought out.
You argue that the U.S. could have established a monopoly on nuclear weapons development. How? The U.S. lost its monopoly to the Soviet Union through espionage at Los Alamos. How do you imagine that could have been prevented?
AI is software, and in software security, offense always has the advantage over defense. There is no network that cannot be breached with sufficient time and resources, because software is inherently insecure.
While the USSR was indeed able to exfiltrate secrets from Los Alamos to speed up its nuclear program, it took a few more years for it to actually develop a nuclear weapon.
Russell (and we don’t necessarily agree here) argued that the US could have established a monopoly on nuclear development through nuclear coercion. That strategy doesn’t have anything to do with preventing espionage.
Once the genie is out of the bottle, it doesn’t matter, does it? Many of China’s current tech achievements began with industrial espionage. You can’t constrain a game-changing technology while excluding espionage as a factor.
It’s exactly the same issue with AI.
While you have an interesting theoretical concept, I can’t see any way to derive from it a strategy that would lead to AI safety.
The idea for this particular theory of victory is that, if some country (for example, the US) develops TAI first, it could use TAI to prevent other countries (for example, China) from developing TAI as well — including via espionage.
If TAI grants a decisive strategic advantage, then it follows that such a monopoly could be effectively enforced (for example, it’s plausible that TAI-enabled cybersecurity would effectively protect against non-TAI cyberoffense).
Again, I’m not necessarily endorsing this ToV. But it does seem plausible.