First, recent surveys of the general public’s attitudes towards AI risk suggest that a strongly enforced global pause would actually get quite a bit of support. It’s not outside the public’s Overton window. It might be considered an ‘extreme solution’ by AI industry insiders and e/acc cultists. But the public seems to understand that it’s just fundamentally dangerous to invent Artificial General Intelligence that’s as smart as smart humans (and much, much faster), or to invent Artificial Superintelligence. AI experts might patronize the public by claiming they’re just reacting to sensationalized Hollywood depictions of AI risk. But I don’t care. If the public understands the potential risks, through whatever media they’ve been exposed to, and if that leads them to support a pause, we might as well capitalize on public sentiment.
I think the public might support a pause on scaling, but I’m much more skeptical about the sort of hardware-inclusive pause that Holden discusses here:
global regulation-backed pause on all investment in and work on (a) general enhancement of AI capabilities beyond the current state of the art, including by scaling up large language models; (b) building more of the hardware (or parts of the pipeline most useful for more hardware) most useful for large-scale training runs (e.g., H100s); (c) algorithmic innovations that could significantly contribute to (a)
A hardware-inclusive pause sufficient for pausing for >10 years would probably effectively dismantle companies like Nvidia and would at least put a serious dent in TSMC. This would involve huge job losses and a large hit to the stock market. I expect people would not support such a pause, since it effectively requires dismantling a powerful industry.
It’s possible I’m overestimating the extent to which hardware needs to be stopped for such a ban to be robust and an improvement on the status quo.
I’m not an expert, but the economic damage seems to me plausibly a question of implementation details. E.g., if you mandate a stop in hardware improvements at the same time as implementing hardware-level compute monitoring, doing this efficiently likely requires developing new technology, which may allow the current companies to maintain their leading position.
Of course, restrictions are going to have some effect, and plausibly may hit Nvidia’s valuation, but it is not at all clear that the economic consequences would necessarily be dramatic (the car industry’s switch to EVs might be vaguely analogous).