They do so because they think x-risk, which (if it occurs) involves the death of everyone
[Not relevant to the main argument of this post]
I’d prefer you not fixate on literally everyone dying, because it’s actually pretty unclear whether AI takeover would result in everyone dying. (The same applies to misuse risk: bioweapons misuse can be catastrophic without killing literally everyone.)
For discussion of whether AI takeover would lead to extinction, see here, here, and here.
I wish there were a short term that clearly emphasizes “catastrophe-as-bad-as-over-a-billion-people-dying-or-humanity-losing-control-of-the-future”.
It’s called an existential catastrophe: https://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf. Or, if you mean one step down, it could be a “global catastrophe”.
Or, colloquially, “doom” (though I don’t think this term has the right serious connotations).
Yeah. I also sometimes use ‘extinction-level’ if I expect my interlocutor not to already have a clear notion of ‘existential’.
lasting catastrophe?
perma-cataclysm?
hypercatastrophe?