I think for other prominent x-risks like pandemics, climate change, and nuclear war, there are well-known "smaller-scale versions" that have caused a lot of harm (Ebola and COVID, extreme weather events, Hiroshima and Nagasaki). I think these make the risk from more extreme versions of these events feel realistic and scary.
For AI, I can't think of what a smaller-scale version of uncontrollable AI would look like, and I don't think other failures of AI have caused harm that is severe or well-known enough yet. So AI x-risk feels more like science fiction.
Based on this, raising awareness of immediate harms from AI (e.g. surveillance uses, discrimination) could make the x-risk feel realistic too (at least the x-risk from AI controlled by malicious actors); this could be tested in a study. But I can't think of a smaller-scale version of uncontrollable AI per se.
This is a really great point, and it helps explain why AI x-risk is spoken about a lot less than other risks.