The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.
That seems like an extremely unnatural thought process. Climate change is the perfect analogy: in these circles, it’s salient both as a tool of oppression and as an x-risk.
I think far more selection of attitudes happens through paying attention to the more extreme predictions than through strategic thinking or communication. Also, I’d guess that the people who spread these messages most consciously imagine a systemic collapse rather than a literal extinction. Since people don’t tend to think about longtermist consequences, the distinction doesn’t seem that meaningful to them.
AI x-risk is weirder and more terrifying, and it cuts against the heuristics “technological progress is good”, “people have always feared new technologies they didn’t understand”, and “the powerful draw attention away from their power”. Some of the people for whom AI x-risk is hard to accept happen to overlap with AI ethics. My guess is that the proportion is similar in the general population; it’s just that some people in AI ethics feel particularly strongly and confidently about these heuristics.
Btw, I think climate change could pose an x-risk in the broad sense (including second-order effects and astronomical waste), just one that we’re very likely to solve (i.e., the tail risks, energy depletion, biodiversity decline, or the social effects would have to surprise us).