I’m aware that one problem with “AI risk” or “AI safety” is that neither term distinguishes the AI alignment problem, the EA community’s primary concern about advanced AI, from other AI-related ethics or security concerns. I recently asked a question on LessWrong about who else shares this attitude toward this kind of conceptual language, and got interesting answers.
Unfortunately this isn’t a very good description of the concern about AI, and so even if it “polls better” I’d be reluctant to use it.