I’ve written a blog post about general vs AI-specific explanations of existential risk neglect that may be of interest to some. Some excerpts:

Even though existential risk neglect is usually explained by general biases that don’t pertain to specific risks, it is sometimes acknowledged that there are important AI-specific biases. For example, the AI risk expert Stuart Russell has offered an illuminating thought experiment:
The arrival of superintelligent AI is in many ways analogous to the arrival of a superior alien civilization but much more likely to occur. Perhaps most important, AI, unlike aliens, is something over which we have some say. Then I asked the audience to imagine what would happen if we received notice from a superior alien civilization that they would arrive on Earth in thirty to fifty years. The word pandemonium doesn’t begin to describe it. Yet our response to the anticipated arrival of superintelligent AI has been . . . well, underwhelming begins to describe it.
I think Russell is right: we would react much more strongly to notice of an impending alien arrival. AI risk is unprecedented, difficult to comprehend, and may sound outlandish or even laughable. Those features arguably make people inclined to downplay existential risk from AI. By contrast, they are likely much more inclined to take seriously existential risks that are easier to grasp and/or have known historical precedents.
...
I’m not sure that AI-specific biases are the whole story, however. I do think that people also have a general tendency to neglect existential risk. But AI-specific biases are, in my view, an important part of the story.
If this is true, then one upshot could be that efforts to counter biases relating to existential risk should largely be directed specifically at existential risk from AI, rather than at existential risk in general. Relatedly, I think that parts of the existential risk community are sometimes a bit too inclined to talk about existential risk in general when it would be more appropriate to talk about specific risks, such as AI risk. Existential risk is a very heterogeneous concept: the risks differ greatly, not only psychologically but also in how likely they are, and relying mostly on the general existential risk concept may mask that.