Getting an AI to want the same things that humans want would definitely be helpful, but the points of Quintin’s that I was responding to mostly don’t seem to be about that? “AI control research is easier” and “Why AI is easier to control than humans:” talk about resetting AIs, controlling their sensory inputs, manipulating their internal representations, and AIs being cheaper test subjects. Those sound like they are more about control than about getting the AI to desire what humans want it to desire. I disagree with Quintin’s characterization of the training process as teaching the model anything to do with what the AI itself wants, and I don’t think current AI systems actually desire anything in the same sense that humans do.
I do think it is plausible that it will be easier to control what a future AI wants than to control what a human wants, but by the same token, that means it will be easier for a human-level AI to exercise self-control over its own desires. For example, I might want to not eat junk food for health reasons, but I have no good way to bind myself to that, at least not without making myself miserable. A human-level AI would have an easier time self-modifying into something that never craved the AI equivalent of junk food (and was never unhappy about that), because it is made out of Python code and floating-point matrices instead of neurons.