My guess is that you don’t understand AI risk arguments well enough to pass an Ideological Turing Test for them, i.e., to restate them in your own words in a way that proponents would accept.
It seems helpful to be able to understand contrary ideas so you can reject them more confidently! :)
I hope you take care too.
This is a good point. I might write up a paragraph if I get the chance. In my head, I took it for granted that everyone would be on board with this, but it would probably be better to go over some of the data.
I am confused as to why anyone would downvote this.
I agree with your assessment of the groupthink and why your comment was probably downvoted. But for what it’s worth, I don’t think it’s strange that people here are sensitive to what sounds like a dismissal of AI risk: many people in EA circles are genuinely, deeply afraid of it, and some plan their lives around it.
This makes sense.
I appreciate the explanation.
Basic answer: they aren’t different, and a lot of climate change/environmentalism has apocalyptic tendencies as well. Ember is simply biased toward climate change being true. A shorter version of jackava’s comment follows.
Specifically, both climate change and AI risk take on apocalyptic overtones in people’s minds, which is unsurprising given how human brains work. Climate change is real, but crucially it is very likely not an existential risk. AI risk could well go the same way.
There’s a study showing that climate change/environmentalism has apocalyptic elements to it: https://doi.org/10.1111/j.1540-8159.2005.09566.x-i1.
I will add a section to this paper to clarify this. The argument isn’t that AI risk is apocalyptic and therefore untrustworthy, but that the apocalyptic narratives around AI parallel other apocalyptic narratives that have proven untrustworthy (specifically, old religious narratives). This paper shows that such narratives exist within environmentalist circles, but that alone isn’t enough for my argument to apply.
I also don’t understand why no one has commented on my reframing of AI risk, which I don’t take to be suspect. I very obviously view AI as a threat; there’s an entire section on it.