I’m a little confused by the focus on a global police state. If someone told me that, in the year 2230, humans were still around and AI hadn’t changed much since 2030, my first guess would be that this was mainly accomplished by some combination of very strong norms against building advanced AI and treaties/laws/monitoring/etc. that focus on the hardware used to create advanced AI, including its supply chains and what that hardware is used for. I would also guess that this required improvements in our ability to distinguish dangerous computing (and the hardware that enables it) from benign computing (and its hardware). (Also, hearing this would be a huge update for me that the world is structured such that this boundary can be drawn in a way that doesn’t require us to monitor everyone all the time to see who is crossing it. So maybe I just have a low prior on this kind of police state being a feasible way to limit the development of technology.)
Somewhat relatedly:
> Given both hardware progress and algorithmic progress, the cost of training AI is dropping very quickly. The price of computation has historically fallen by half roughly every two to three years since 1945. This means that even if we could increase the cost of production of computer hardware by, say, 1000% through an international ban on the technology, it may only take a decade for continued hardware progress alone to drive costs back to their previous level, allowing actors across the world to train frontier AI despite the ban.
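As a rough sanity check on the quoted arithmetic (a sketch, not a forecast; the 2–3 year halving time and the 1000% figure come from the quote, the rest is my own assumption):

```python
import math

# Price of compute halves every 2-3 years (the quote's historical figure).
halving_times_years = [2.0, 3.0]

# A 1000% cost increase strictly multiplies the price by 11
# (though "10x" is the round number people usually mean).
cost_multiplier = 11.0

# Years of ordinary hardware progress needed to erode that multiplier:
# solve (1/2)^(t / halving_time) = 1 / cost_multiplier for t.
for h in halving_times_years:
    t = h * math.log2(cost_multiplier)
    print(f"halving time {h:.0f} yr -> ban erased in ~{t:.1f} yr")

# halving time 2 yr -> ban erased in ~6.9 yr
# halving time 3 yr -> ban erased in ~10.4 yr
```

So the “about a decade” claim checks out, conditional on the historical cost trend continuing under the ban, which is exactly the premise I want to question: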
If there were a ban that drove up the price of hardware by 10x, wouldn’t that itself be a severe disincentive to keep developing the technology? The large profitability of computing hardware seems like a necessary ingredient for its historically rapid development and falling costs.
Overall, I thought this was a good contribution. Thanks!
I don’t think you should treat these probabilities as independent. The intuition that a global pause is plausible comes from states’ interest in a moratorium being highly correlated: the reasons for wanting a pause are based on facts about the world that everyone has access to (e.g. AI is difficult to control) and on motivations that are fairly general (e.g. powerful, difficult-to-control influences in the world are bad from most people’s perspective), along with the other considerations Matthew mentioned.
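To make the correlation point concrete (a toy sketch; the numbers are made up purely for illustration):

```python
# Why correlated interests make a global pause far more plausible
# than multiplying independent per-state probabilities would suggest.
n_states = 10
p_each = 0.8  # hypothetical chance each state wants a moratorium

# If each state's decision were independent, the joint probability
# that all of them agree is tiny:
p_independent = p_each ** n_states  # ~0.107

# If the decisions are perfectly correlated (all driven by the same
# shared facts about the world), the joint probability is just the
# marginal probability:
p_correlated = p_each  # 0.8

print(f"independent: {p_independent:.3f}, perfectly correlated: {p_correlated:.2f}")
```

Reality is somewhere in between, but the closer the decisions are to being driven by a common cause, the closer the joint probability sits to the single-state figure.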