Basic answer: They aren’t different; a lot of climate change/environmentalism has apocalyptic tendencies as well. Ember is just biased toward climate change being true. A shorter version of jackava’s comment follows.

Specifically, both climate change and AI risk take on apocalyptic elements in people’s minds, and this is unsurprising given how people’s brains work. Climate change is real, but crucially very likely not an existential risk. AI risk could very well go the same way.
There’s a study showing that climate change/environmentalism has apocalyptic elements: https://doi.org/10.1111/j.1540-8159.2005.09566.x-i1.
I will add a section to this paper to clarify this. The argument isn’t that AI is apocalyptic and therefore untrustworthy, but that the apocalyptic narratives in AI parallel other apocalyptic narratives that are untrustworthy (specifically, old religious narratives). The linked paper shows that such narratives exist within environmentalist circles, but that alone isn’t enough for my argument to apply.
I also don’t understand why no one has commented on my reframing of AI risk, which I don’t take to be suspect. I very obviously view AI as a threat; there’s an entire section on it.