I've upvoted this because I think the parallels between A.I. worries and apocalyptic religious stuff are genuinely epistemically worrying, and I'm inclined to think that the most likely path is that A.I. risk turns out to be yet another failed apocalyptic prediction. (This is compatible with work on it being high value in expectation.)
But I think there's an issue with your framing of "whose predictions of apocalypse should we trust more, climate scientists or A.I. risk people": if apocalyptic predictions means predictions of human extinction, it's not clear to me that most climate scientists are making them (at least in official scientific work). I think this is certainly how people prioritizing A.I. risk over climate change interpret the consensus among climate scientists.
I am using apocalyptic broadly, such that it applies both to existential risk and to global catastrophic risk. IMO, if a nuclear war kills 99% of the population, it is still apocalyptic.
I think that distinguishing between global catastrophic risk and existential risk is extremely difficult. While climate scientists don't generally predict human extinction, I think this is largely because of pressure not to appear alarmist and because of state and corporate interests in downplaying and ignoring climate change. On the other hand, there don't seem to be any comparable forces working to downplay A.I. risk.