I think this type of misuse is an emerging AI alignment problem.
Misuse can be important or interesting, but the word “alignment” should be reserved for the problem of making systems try to do what their operators want, and especially for making very capable systems not kill everyone.