People interested in the analogy between AGI and how the superpowers managed to keep a lid on nuclear technology might be interested in my post brainstorming how the world might have looked if nukes had been easier to build than they were in real life, but not so terrifyingly easy as in Nick Bostrom’s “Vulnerable World Hypothesis”: https://forum.effectivealtruism.org/posts/FtEPgeoThqpSMsnn6/nuclear-strategy-in-a-semi-vulnerable-world
But it does seem plausible to me that AI algorithmic improvements could lower the threshold of control needed to ensure nonproliferation, from “control semiconductor factories and supply chains” down to “control every individual GPU” — a difficulty level equivalent to that of Bostrom’s original scenario.