Thanks for posting! Tentative idea for tweaks: my intuition would be to modify the middle two branches into the following:
- Long-term AI misuse
  - Stable authoritarianism
  - Value erosion from competition
  - “Lame” future
- AI exacerbates other x-risk factors
  - (Great power / nuclear) conflict
  - Degraded epistemics
  - Other dangerous tech
Rationale:
- “Other (AI-enabled) dangerous tech” feels to me like it clearly falls under “exacerbating other x-risk factors.”
- “AI-enabled dystopia” sounds broad enough to cover ~everything on the chart; “long-term AI misuse” might more precisely capture the thought there.
- “Authoritarianism” rather than “totalitarianism,” because the latter is often/historically defined as involving features more specific than the extent of the risks here (e.g. large political parties).
- Stable authoritarianism admittedly sorta falls under both “long-term AI misuse” and “AI exacerbates other x-risk factors.” My intuition is to put it in the former bucket, because extremely stable authoritarianism doesn’t seem very plausible if it’s not AI-enabled (in that sense, stable authoritarianism doesn’t seem like an “other” x-risk factor).
+1 for not including ~“misaligned, non-power-seeking AI”; that seems to be a somewhat common misinterpretation of some AI concerns.
Edit: good point in the below response!
Thanks, I agree with most of these suggestions.

> “Other (AI-enabled) dangerous tech” feels to me like it clearly falls under “exacerbating other x-risk factors”

I was trying to stipulate that the dangerous tech was a source of x-risk in itself, not just a risk factor (admittedly the boundary is fuzzy). The wording was “AI leads to deployment of technology that causes extinction or unrecoverable collapse,” and the examples (which could have been clearer) were intended to be “a pathogen kills everyone” or “full-scale nuclear war leads to unrecoverable collapse.”