Assessments of non-AI x-risk are relevant to AI safety discussions because some of the hesitance to pause or slow AI progress is driven by a belief that AI, if it goes well, will help eliminate other threats.
I tend to believe that risk from non-AI sources is fairly low, so I’m somewhat alarmed when I see people suggest or state relatively high probabilities of civilisational collapse absent AI intervention. It could be worth assessing how widespread this view is and arguing directly against it.