The first of those has a weird resolution criterion of 30% year-on-year world GDP growth (“transformative” more likely means no humans left, after <1 year, to observe GDP, imo; I would give the 30+% growth over a whole year scenario little credence because of this). For the second one, I think you need to include “AI Dystopia” as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), so 27%. (And again re LLMs, x-risk isn’t from LLMs alone. “System 2” architecture and embodiment, two other essential ingredients of AGI, are well on track too.)
If there are no humans left after AGI, then that’s also true for “weak general AI”. Transformative AI is also a far better target for what we’re talking about than “weak general AI”.
The “AI Dystopia” scenario is significantly different from what PauseAI rhetoric is centered on.
The PauseAI rhetoric is also very much centered on just scaling LLMs, not acknowledging other ingredients of AGI.
Metaculus puts (being significantly more bullish than actual AI/ML experts and populated with rationalists/EAs) <25% chance on transformative AI happening by the end of the decade and <8% chance of this leading to the traditional AI-go-foom scenario, so <2% p(doom) by the end of the decade. I can’t find a Metaculus poll on this but I would halve that to <1% for whether such transformative AI would be reached by simply scaling LLMs.
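The chained estimate above can be spelled out explicitly (a minimal sketch using the comment's own Metaculus-derived numbers; the variable names are mine):

```python
# The comment's probability chain, written out. All figures are the
# commenter's upper-bound estimates, not independent data.
p_transformative_by_2030 = 0.25       # <25% chance of transformative AI this decade
p_foom_given_transformative = 0.08    # <8% chance it leads to AI-go-foom

# Compound the conditional: <2% p(doom) by end of decade.
p_doom = p_transformative_by_2030 * p_foom_given_transformative
print(p_doom)  # 0.02

# Halved again for the "reached by simply scaling LLMs" case: <1%.
p_doom_llm_scaling_only = p_doom / 2
print(p_doom_llm_scaling_only)  # 0.01
```

Note these are upper bounds multiplied together, so the final <1% is itself an upper bound under the commenter's assumptions.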