I think something about properly testing powerful new technologies and making sure they’re not used to hurt people sounds pretty intuitive. I think people intuitively get that anything with military applications can cause serious accidents or be misused by bad actors.
I’m aware one problem with “AI risk” or “AI safety” is that those terms don’t distinguish the AI alignment problem, the EA community’s primary concern about advanced AI, from other AI-related ethics or security concerns. I got interesting answers to a question I recently asked on LessWrong about who else shares this attitude towards this kind of conceptual language.
> I think something about properly testing powerful new technologies and making sure they’re not used to hurt people sounds pretty intuitive. I think people intuitively get that anything with military applications can cause serious accidents or be misused by bad actors.
Unfortunately this isn’t a very good description of the concern about AI, and so even if it “polls better” I’d be reluctant to use it.
> I’m aware one problem with “AI risk” or “AI safety” is that those terms don’t distinguish the AI alignment problem, the EA community’s primary concern about advanced AI, from other AI-related ethics or security concerns. I got interesting answers to a question I recently asked on LessWrong about who else shares this attitude towards this kind of conceptual language.
“AI is the new nuclear weapons. We don’t want an arms race that leads to unsafe technologies,” perhaps?