PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but essentially has a cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so all actions trying to imitate the climate justice movement are extremely premature. Blockading an AI company’s office to talk about existential risk from artificial general intelligence won’t convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.
Because of this, many in PauseAI are trying to build coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual-property lobbyists). But the space of possible AI policies is high-dimensional, so any such coalition, assembled with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counterproductive (by further entrenching centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).
¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta
Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence leads you to the conclusion below?
“Blockading an AI company’s office to talk about existential risk from artificial general intelligence won’t convince any bystander; it will just make you look like a doomsayer caricature.”
Also, what evidence do you have for the comment below? For example, I met the leader of the voice actors’ association in Australia and we agreed on many topics, including the need for an AISI (AI Safety Institute). In fact, I’d argue you’ve got something important wrong here: talking to policymakers about existential risk rather than catastrophic risks can be counterproductive, because there aren’t many useful policies for preventing the former (besides pausing).
“the space of possible AI policies is high-dimensional, so any such coalition, assembled with little understanding of political strategy, risks focusing on policies and AI systems that have little to do with existential risk”
“slowing down AI” != “slowing down AI because of x-risk”
In addition to what @gw said about the public being in favor of slowing down AI, I’m mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that scaling LLMs, without further technological breakthroughs, will for sure lead to superintelligence in the coming decade is controversial by EA standards, fringe by the standards of the general AI community, and roundly mocked by the general public.
If other stakeholders agree with the existential-risk perspective, then that is of course great and should be encouraged. To expand on what I meant (though see also the linked post): I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk, big AI companies are already negotiating deals with copyright giants (or the latter are even creating their own AI divisions, as with Adobe Firefly or Disney’s new AI division), and independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.
There is enough of a scientific consensus that extinction risk from AGI is real and significant. Timelines are arguably much shorter for AGI than for climate change, so the movement needs to be ramped up in months to years, not years to decades.
“It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.”
I’d say more like the late 20th century (late 1980s?) in terms of scientific consensus, and the mid-21st century (2040s?) in terms of how close global catastrophe is.
Re the broad coalition—the focus is on pausing AI, which will help all anti-AI causes.
Most surveys of AI/ML researchers (with significant selection effects and very high variance) indicate p(doom)s of ~10% (spread across a variety of global risks beyond the traditional AI-go-foom scenario), and, like Ajeya Cotra’s report on AI timelines, a predicted AGI date around mid-century by one definition and in the next century by another.
Pausing the scaling of LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that threshold (e.g. AI art generators, open-source AI) will do ~nothing for x-risk worries (and will indeed probably be a net negative).
A 10% chance of a 10%[1] chance of extinction happening within 5 years[2] is more than enough to be shutting it all down immediately[3] (the implied arithmetic is spelled out after the footnotes below). It’s actually kind of absurd how tolerant people are of death risk here relative to the risks they accept from the pharmaceutical, nuclear, or aviation industries.
[1] I outline here why 10% should be used rather than 50%.
[2] Eyeballing the graph here, it looks like at least 10% by 2030.
[3] I think it’s more like a 90% chance [p(doom|AGI)] of a 50% chance [p(AGI in 5 years)].
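To spell out the arithmetic behind the comment above (a bare multiplication of the two stated estimates, treating them as independent, which is itself a simplification):
\[
P(\text{extinction within 5 years}) \approx P(\text{AGI within 5 years}) \times P(\text{doom} \mid \text{AGI}) = 0.10 \times 0.10 = 1\%,
\]
or, on the estimates in footnote [3], \(0.50 \times 0.90 = 45\%\).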
Crucially, p(doom) = 1% isn’t the claim PauseAI protesters are making. The outcomes you discuss should be fairly distributed over probable futures, if only to make sure your preferred policy is an improvement in most or all of them (this is where I would weakly agree with @Matthew_Barnett’s comment).
1% is very conservative (and based on broad surveys of AI researchers, most of whom are building the very technology causing the risk, so are obviously biased against it being high). The point I’m making is that even a 1% chance of death by collateral damage would be totally unacceptable coming from any other industry. Supporting a Pause should therefore be a no-brainer. (Or, to be consistent, we should be dismantling ~all regulation of ~all industry.)
Industry regulations tend to be based on statistical averages (i.e., from a global perspective, on certainties), not on multiplications of subjective Bayesian guesses. I don’t think the general public’s acceptance of industry regulations commits them to Pascal-mugging-adjacent views. After all, a 1% chance of existential risk (or at least global catastrophic risk) from climate change, biodiversity collapse, or zoonotic pandemics seems plausible too. If you have any realistic amount of risk aversion, it matters whether the remaining 99% of futures (even from a strictly strong-longtermist perspective) are improved by pausing, let alone by flippant militant advocacy for pausing built on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs materializes!
1% (again, conservative[1]) is not a Pascal’s Mugging. 1%(+) catastrophic (not extinction) risk is plausible for climate change, and a lot is being done there (arguably, enough that we are on track to avert catastrophe if action[2] keeps scaling).
It’s anything but flippant[3]. And x-risk isn’t from LLMs alone: “System 2” architecture and embodiment, two other essential ingredients, are well on track too. I’m happy to bear any reputation costs in the event we live through this. It’s unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well only be no extinction because of our actions![4]
[1] I actually think it’s more like 50%, and can argue this case if you think it’s a crux.
[2] Including removing CO₂ from the atmosphere and/or deflecting solar radiation.
[3] Please read the PauseAI website.
[4] Or maybe we will just luck out [footnote 10 on linked post].
To be clear, my points are that 1/ even inside the environmental movement, calling for an immediate pause on all industry on the basis of the argument you’re using is extremely fringe, and 2/ the reputation costs in 99% of worlds will themselves increase existential risk in the (far more likely) case that AGI happens when (or after) most experts think it will.
1/ Unaligned ASI existing at all is equivalent to “doom-causing levels of CO₂ over a doom-causing length of time”. We need an immediate pause on AGI development to prevent unaligned ASI; we don’t need an immediate pause on all industry to prevent doom-causing levels of CO₂ over a doom-causing length of time.
2/ It’s really not 99% of worlds; that is way too conservative. Metaculus puts a 25% chance on weak AGI happening within 1 year and a 25% chance on strong AGI happening within 3 years.
Metaculus (being significantly more bullish than actual AI/ML experts, and populated by rationalists/EAs) puts a <25% chance on transformative AI happening by the end of the decade and a <8% chance of this leading to the traditional AI-go-foom scenario, so <2% p(doom) by the end of the decade. I can’t find a Metaculus poll on this, but I would halve that to <1% for the probability that such transformative AI is reached by simply scaling LLMs.
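For clarity, the multiplication behind these figures (the final halving is this commenter’s own adjustment for LLM-scaling-only paths, not a Metaculus number):
\[
P(\text{doom by 2030}) < 0.25 \times 0.08 = 2\%, \qquad P(\text{doom by 2030 from LLM scaling alone}) < 2\% \times 0.5 = 1\%.
\]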
The first of those has a weird resolution criterion of 30% year-on-year world GDP growth (“transformative” more likely means no humans left, after <1 year, to observe GDP imo; I would give the 30+% growth over a whole year scenario little credence because of this). For the second one, I think you need to include “AI Dystopia” as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), so 27%. (And again re LLMs, x-risk isn’t from LLMs alone: “System 2” architecture and embodiment, two other essential ingredients of AGI, are well on track too.)
If there are no humans left after AGI, then that’s also true for “weak general AI”. Transformative AI is also a far better target for what we’re talking about than “weak general AI”.
The “AI Dystopia” scenario is significantly different from what PauseAI rhetoric is centered on.
PauseAI rhetoric is also very much centered on just scaling LLMs, without acknowledging the other ingredients of AGI.
You don’t have to go as far back as the mid-19th century to find a time before scientific consensus about global warming. You only need to go back to 1990 or so.
Yes, I was thinking of James Hansen’s testimony to the US Senate in 1988 as being equivalent to some of the Senate hearings on AI last year.