Thank you so much, Geoffrey, for this compelling argument! I completely agree that a moral backlash against AI is very plausible (and in my estimation, imminent), especially from people whose career paths have been, or soon will be, permanently automated away by AI, and who will not thrive in the new niche of ‘startup founder who only hires AGI employees.’
OpenAI’s mission is to create “highly autonomous systems that outperform humans at most economically valuable work” (their definition of AGI).
I cannot overstate how broadly unpopular this mission is. In my experience, whenever I have told someone occupationally far removed from AI about OpenAI’s mission, they have immediately diagnosed it as dystopian. With very few exceptions, they were also very open-minded about the plausibility of catastrophic and existential dangers in such an AGI-led future.
The only bottleneck is that most people currently don’t believe that AGI is imminent. This is, of course, starting to change: for example, with the recent CAIS statement signed by leading AI researchers and notable figures.
We should tirelessly prepare for a future in which AGI leaves large numbers of people jobless, purposeless, and angry. I don’t know all the answers to how we should prepare for this dystopia, but I’m confident that we should prioritize the following two things:
(1) Truthful, frequent, and high-quality communication with angry people—e.g., communication about the true cause of their hardships (AGI companies), so that they don’t blame a scapegoat.
(2) Preparations for preventing and resolving the first (near-catastrophic or catastrophic) AI disaster, as well as for proposing and implementing effective AI-disaster-prevention policies in its aftermath.
Peter—thanks for a very helpful reply. I think your diagnosis is correct.
Your point about the wrong scapegoats getting blamed is very important. If we start seeing mass AI-induced unemployment, the AI companies will have every incentive to launch disinformation campaigns that lead people to blame other scapegoats for the unemployment—e.g. immigrants, outsourcing to China/SE Asia, ‘systemic racism’, whatever distracts attention from the problem of cognitive automation. The AI companies don’t currently manipulate public opinion in these ways—they don’t have to yet. But the history of corporate propaganda suggests that businesses facing a public backlash will do whatever it takes to scapegoat others outside their industry. Big Tech will be no different in this regard. And they’ll have the full power of mass-customized AI propaganda systems to shape public opinion.