I think it’s worth saying that the context of “maximize paperclips” is not one where a person literally says the words “maximize paperclips” or something similar. It’s instead an intuitive stand-in for building an AI capable of superhuman levels of optimization: if you set it the task, say via specifying a reward function, of creating an unbounded number of paperclips, it will do things no human would do to maximize paperclips, because humans have competing concerns and will stop when, say, making more paperclips would require killing themselves or their loved ones.
The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the issues of human language interpretation and we’d still have an alignment problem; it just might not look literally like building a paperclip maximizer when someone asks the AI to make a lot of paperclips.