By ‘paperclip maximizer’ scenarios I mean scenarios in which a powerful AI system is set to pursue a goal, pursues that goal without a good model of human psychology, intent, and ethics, and thereby produces disastrous unintended consequences.
Thanks for stating your assumptions clearly! Maybe I am confused here, but this seems like a very different definition of “paperclip maximizer” than the ones I have seen other people use. I am under the impression that the main problem with alignment is not a lack of ability of an agent to model human preferences, psychology, intent, etc., but the lack of a precise algorithm to encode a willingness to care about human preferences, etc. The classic phrase is “The Genie knows, but doesn’t care.”
I see it as quite conceivable that human common sense, intent disambiguation, and ethical decision making can be simulated in much the same way as the language humans produce. This means it would seem feasible to build AI models that either integrate simulated humans into their action selection mechanism, or at least automatically poll a simulated human (or an ensemble of simulated humans) for their judgement of specific actions under consideration (‘Would Jean-Luc Picard approve of turning everything into paperclips? No.’).
I moderately agree with this! All else being equal, if language modeling is differentially easier than other AI tasks, then I would imagine that Iterated Distillation and Amplification, or something similar to it, will be comparatively more likely to be viable than other AI Safety proposals. That said, some people think human modeling is not a free win.
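To make the quoted ‘poll a simulated human’ idea concrete, here is a minimal sketch of one way the action-selection loop could look. Everything in it is my own hypothetical illustration, not anyone's actual proposal: the veto-then-maximize rule, the names, and `picard_judge` (which in a real system would be a prompted language model or other simulator rather than a keyword check).

```python
from typing import Callable, Iterable, Optional

# A "judge" is anything that maps an action description to an approval verdict.
# In the quoted idea this would be a simulated human; here it is just a callable.
Judge = Callable[[str], bool]

def select_action(candidates: Iterable[str],
                  judges: list[Judge],
                  score: Callable[[str], float]) -> Optional[str]:
    """Pick the highest-scoring candidate that every simulated judge approves.

    Returns None if the judges veto everything, i.e. the agent prefers
    inaction over an action no judge would endorse.
    """
    approved = [a for a in candidates if all(judge(a) for judge in judges)]
    return max(approved, key=score, default=None)

# Toy stand-in for a simulated judge; a real system would query a model here.
def picard_judge(action: str) -> bool:
    return "turn everything into paperclips" not in action.lower()

if __name__ == "__main__":
    candidates = ["make 100 paperclips", "turn everything into paperclips"]
    best = select_action(candidates, [picard_judge], score=len)
    print(best)  # -> "make 100 paperclips"
```

The interesting design questions (how faithful the simulated judges are, whether the optimizer can find actions that fool them) are exactly where the disagreement in this thread lies; the sketch only shows where such judges would sit in the loop.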
[…] imply that we need to create an explicit, optimized model of ethics before we venture into creating strong AI.
I think most (all?) AI Safety groups are much more humble about what is possible (or at least realistic).