Holden—these are reasonable points. But I have two quibbles.
First, the recent surveys of the general public’s attitudes towards AI risk suggest that a strongly enforced global pause would actually get quite a bit of support. It’s not outside the public’s Overton Window. It might be considered an ‘extreme solution’ by AI industry insiders and e/acc cultists. But the public seems to understand that it’s just fundamentally dangerous to invent Artificial General Intelligence that’s as smart as smart humans (and much, much faster), or to invent Artificial Superintelligence. AI experts might patronize the public by claiming they’re just reacting to sensationalized Hollywood depictions of AI risk. But I don’t care. If the public understands the potential risks, through whatever media they’ve been exposed to, and if it leads them to support a pause, we might as well capitalize on public sentiment.
Second, I worry that EAs generally have a ‘policy fetish’, assuming that the only way to slow down a technological field is through formal, government-sanctioned regulation and ‘good policy’ solutions. I think this is incorrect, both historically and logically. In this piece on moral stigmatization of AI, I argued that an informal, grass-roots, public moral backlash against the AI industry could accomplish almost everything formal regulation can, without many of the loopholes and downsides that regulation would face. If the general public realizes that AGI-directed research is fundamentally stupid, reckless, and a huge extinction risk, they can stigmatize AI researchers, funders, suppliers, etc. in ways that shut down the industry, potentially for decades. If that stigmatization goes global, the AI industry worldwide could be put on ‘pause’ for quite a while. Sure, we might delay some potential benefits from narrow AI applications. But that’s a tradeoff most reasonable people would be willing to accept. (For example, if my generation misses out on AI-created longevity treatments and we die, but our kids survive without facing AGI-imposed extinction risks, that’s fine with me, and I think it would be OK with most parents.)
I understand that harnessing the power of moral stigmatization to shut down a promising-but-dangerous technology like AI isn’t the usual EA style, but at this point, it might be the only practical solution to pausing dangerous AI development.
Fully agree. A potential taboo on AGI is far too often overlooked by people who worry that pauses wouldn’t work well (e.g., see also Scott Alexander, Matthew Barnett, Nora Belrose).
This is true: it’s the same tactic anti-GMO lobbies, the NRA, NIMBYs, and anti-vaxxers have used. The public as a whole doesn’t need to be anti-AI; even a vocal minority would be enough to swing elections and ensure an unfavorable regulatory environment. If I had to guess, AI would end up like nuclear fission: not worth the hassle, but with no off-ramp and no way to unring the bell.
I think the public might support a pause on scaling, but I’m much more skeptical about the sort of hardware-inclusive pause that Holden discusses here:
global regulation-backed pause on all investment in and work on (a) general enhancement of AI capabilities beyond the current state of the art, including by scaling up large language models; (b) building more of the hardware (or parts of the pipeline most useful for more hardware) most useful for large-scale training runs (e.g., H100’s); (c) algorithmic innovations that could significantly contribute to (a)
A hardware-inclusive pause robust enough to last more than 10 years would probably effectively dismantle companies like Nvidia and would put a serious dent in TSMC. That would mean huge job losses and a large hit to the stock market. I expect people would not support a pause that effectively requires dismantling a powerful industry.
It’s possible I’m overestimating the extent to which hardware needs to be stopped for such a ban to be robust and an improvement on the status quo.
I’m not an expert, but the economic damage seems plausibly to be a question of implementation details. For example, if a halt to hardware improvements were required at the same time as hardware-level compute monitoring, doing the latter efficiently would likely require developing new technology, which might allow the current companies to maintain their leading position.
Of course, restrictions are going to have some effect, and they may well hit Nvidia’s valuation, but it is not at all clear that the economic consequences would be dramatic (the car industry’s switch to EVs might be vaguely analogous).
I think the tech companies—and in particular the AGI companies—are already too powerful for such an informal public backlash to slow them down significantly.
Disagree. Almost every successful moral campaign in history started out as an informal public backlash against some evil or danger.
The AGI companies involve a few thousand people versus 8 billion, a few tens of billions of dollars in funding versus roughly $360 trillion in total global assets, and about three key nation-states (US, UK, China) versus the 195 nation-states in the world.
Compared to actually powerful industries, AGI companies are very small potatoes. Very few people would miss them if they were set on ‘pause’.
I hope you are right.
I imagine it going hand in hand with more formal backlashes (i.e., regulation, law, treaties).