I think that disagreement about the size of the risks is part of the equation. But that misses what is, for at least a few of the prominent critics, the main element: people like Timnit Gebru, Kate Crawford, and Meredith Whittaker are bought into leftist ideologies focused on things like “bias”, “prejudice”, and “disproportionate disadvantage”. So they see AI primarily as an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.
Obviously this is not what is happening with everyone in the FATE AI or AI Ethics community, but I do think it’s what’s driving some of the loudest voices, and we should be clear-eyed about it.
I disagree, because I think these people would be in favour of action to mitigate x-risk from extreme climate change and nuclear war.
Interesting point, but why do these people think that climate change is likely to cause extinction? Again, it’s because their thinking is politics-first. Their side of politics warns of a likely “climate catastrophe”, so they have to make that catastrophe as bad as possible: existential.
That seems like an extremely unnatural thought process. Climate change is the perfect analogy: in these circles, it is salient both as a tool of oppression and as an x-risk.
I think far more selection of attitudes happens through paying attention to more extreme predictions than through thinking or communicating strategically. Also, I’d guess the people who spread these messages most consciously imagine a systemic collapse rather than literal extinction. Since people don’t tend to think about longtermist consequences, the distinction doesn’t seem that meaningful to them.
AI x-risk is weirder and more terrifying, and it goes against the heuristics that “technological progress is good”, “people have always feared new technologies they didn’t understand”, and “the powerful draw attention away from their power”. Some people for whom AI x-risk is hard to accept happen to overlap with AI ethics. My guess is that the proportion is similar in the general population; it’s just that some people in AI ethics feel particularly strong and confident about these heuristics.
Btw, I think climate change could pose an x-risk in the broad sense (including second-order effects and astronomical waste), just one that we’re very likely to solve (i.e. the tail risks, energy depletion, biodiversity decline, or the social effects would have to surprise us).