Is GPT-3 the death of the paperclip maximizer?

Short post conveying a single but fundamental and perhaps controversial idea that I would like to see discussed more. I don’t think the idea is novel, but it gets new traction from the progress in unsupervised language learning that has culminated in the current excitement about GPT-3. It is also not particularly fleshed out, and I would be interested in the current opinions of people more involved in AI alignment.

I see GPT-3 and the work leading up to it as a strong indication that ‘paperclip maximizer’ scenarios of AI misalignment are not particularly difficult to avoid. By ‘paperclip maximizer’ scenarios I mean scenarios in which a powerful AI system is set to pursue a goal, pursues that goal without a good model of human psychology, intent, and ethics, and produces disastrous unintended consequences. ‘Paperclip maximizer’ scenarios motivate significant branches of AI alignment and EA discourse. Among other things, they imply that we need to create an explicit, formalized model of ethics before we venture into creating strong AI.

GPT-3 shows us that unsupervised models have become astoundingly good at simulating humans generating all kinds of text, including comedy that makes Eliezer Yudkowsky have oddly superimposed emotions.

I see it as quite conceivable that human common sense, intent disambiguation, and ethical decision making can be simulated in much the same way as the language humans produce. It would then seem feasible to build AI models that either integrate simulated humans into their action selection mechanism, or at least automatically poll a simulated human (or an ensemble of simulated humans) for their judgement of specific actions under consideration (‘Would Jean-Luc Picard approve of turning everything into paperclips? No.’).
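To make the polling idea concrete, here is a minimal sketch of such an action-selection filter. Everything in it is illustrative and assumed rather than taken from any existing system: the `Judge` interface, the `EthicalFilter` class, the quorum rule, and the toy `naive_judge` are all hypothetical stand-ins. In practice each judge would be a language model prompted to answer as a particular human persona; here it is stubbed out so the sketch runs on its own.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical interface: a "simulated judge" is any function that maps a
# natural-language question to an approval verdict. In practice this would be
# a language model prompted to answer as a human persona
# ("Would Jean-Luc Picard approve of <action>?").
Judge = Callable[[str], bool]


@dataclass
class EthicalFilter:
    """Polls an ensemble of simulated judges before an action is executed."""
    judges: List[Judge]
    quorum: float = 0.5  # fraction of judges that must approve an action

    def approves(self, action_description: str) -> bool:
        question = (
            "Would a reasonable, well-intentioned person approve of: "
            f"{action_description}?"
        )
        votes = [judge(question) for judge in self.judges]
        return sum(votes) / len(votes) > self.quorum


def select_action(candidates: List[str],
                  utility: Callable[[str], float],
                  ethical_filter: EthicalFilter) -> Optional[str]:
    """Pick the highest-utility candidate that the simulated judges approve of."""
    approved = [a for a in candidates if ethical_filter.approves(a)]
    if not approved:
        return None  # no candidate passes the veto; fall back to inaction
    return max(approved, key=utility)


def naive_judge(question: str) -> bool:
    # Toy stand-in for a language-model-backed judge: it simply vetoes the
    # stereotypical catastrophe while approving everything else.
    return "everything into paperclips" not in question.lower()


if __name__ == "__main__":
    flt = EthicalFilter(judges=[naive_judge] * 3)
    actions = [
        "turn everything into paperclips",
        "manufacture the ordered batch of paperclips",
    ]
    print(select_action(actions, utility=len, ethical_filter=flt))
```

The point of the sketch is only the shape of the mechanism: the judges act as a veto layer on top of whatever objective the system is otherwise optimizing, so the objective never needs to encode ethics itself.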

While such a mechanism might not ensure optimality of decisions in a utilitarian sense, it is quite conceivable that it would be effective at preventing the significantly misaligned, unintended decisions that are at the core of ‘paperclip maximizer’ scenarios. It would also remove the need to formalize a solid, widely agreed upon model of ethical decision making, which might very well be an unachievable goal.