There is enough of a scientific consensus that extinction risk from AGI is real and significant. Timelines are arguably much shorter for AGI than for climate change, so the movement needs to be ramped up over months to years, not years to decades.
It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.
I’d say more like the late 20th century (late 1980s?) in terms of scientific consensus, and the mid-21st century (2040s?) in terms of how close global catastrophe is.
Re the broad coalition—the focus is on pausing AI, which will help all anti-AI causes.
Most surveys of AI/ML researchers (subject to significant selection effects and very high variance) indicate p(doom)s of ~10%, spread across a variety of global risks beyond the traditional AI-go-foom scenario, and (like Ajeya Cotra’s report on AI timelines) a predicted AGI date around mid-century by one definition and in the next century by another.
Pausing the scaling of LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that threshold (e.g. AI art generators, open-source AI) will do ~nothing for x-risk AI worries, and will indeed probably be a net negative.
A 10% chance of a 10%[1] chance of extinction happening within 5 years[2] is more than enough to justify shutting it all down immediately[3] (see the arithmetic sketched after the footnotes). It’s actually kind of absurd how tolerant people are of death risk here, relative to the risks they tolerate from the pharmaceutical, nuclear or aviation industries.
[1] I outline here why 10% should be used rather than 50%.
[2] Eyeballing the graph here, it looks like at least 10% by 2030.
[3] I think it’s more like a 90% chance [p(doom|AGI)] of a 50% chance [p(AGI in 5 years)].
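To make the implied arithmetic explicit (a minimal sketch, assuming the headline figure is simply P(AGI within 5 years) multiplied by P(doom | AGI), with no other extinction pathways counted):

$$P(\text{extinction within 5 years}) \approx P(\text{AGI within 5 years}) \times P(\text{doom} \mid \text{AGI}) = 0.1 \times 0.1 = 1\%$$

whereas under footnote [3]’s estimates it would be

$$0.5 \times 0.9 = 45\%.$$

The disagreement is thus over whether the relevant figure is ~1% or ~45%; the claim above is that even the lower end is more than enough to justify a pause.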