I’ve upvoted this because I think the parallels between A.I. worries and apocalyptic religious stuff are genuinely epistemically worrying, and I’m inclined to think that the most likely path is that A.I. risk turns out to be yet another failed apocalyptic prediction. (This is compatible with work on it being high value in expectation.)
But I think there’s an issue with your framing of “whose predictions of apocalypse should we trust more, climate scientists or A.I. risk people”: if “apocalyptic predictions” means predictions of human extinction, it’s not clear to me that most climate scientists are making them (at least in their official scientific work). And I think this is certainly how people who prioritize A.I. risk over climate change interpret the consensus among climate scientists.
I am using “apocalyptic” broadly, so that it applies both to existential risk and to global catastrophic risk. IMO, if a nuclear war kills 99% of the population, it is still apocalyptic.
I think that distinguishing between global catastrophic risk and existential risk is extremely difficult. While climate scientists don’t generally predict human extinction, I think this is largely because of pressure not to appear alarmist, and because of state and corporate interests in downplaying and ignoring climate change. On the other hand, there don’t seem to be any comparable forces working to downplay AI risk.