Holden—thanks for this thoughtful and constructive piece.
However, I think a crucial strategy is missing here.
If we’re serious that AI imposes existential risks on humanity, then the best thing that AI companies can do to help us survive this pivotal century is simple: Shut down their AI research. Do something else. Act like they care about the fate of their kids and grandkids.
AI research doesn’t need to be shut down forever. Maybe just for the next few centuries, until we better understand the risks and how to manage them.
I simply don’t understand why so many EAs are encouraging AI development as if it’s too cool to question, too inevitable to challenge, and too incentivized to deter. Almost all of us agree that AI will impose potentially catastrophic risks. We all agree that AI alignment is far from solved, and many of us believe it probably won’t be solved in time to save us from recklessly fast AI development.
We probably can’t shut down AI research through government regulation or gentle coaxing, given the coordination problems, governance problems, arms races, and corporate incentives. But we could probably do it through promoting new social & ethical norms that impose a heavy moral stigma against AI research, AI researchers, and AI companies. Historically, intense moral stigmatization has been successful at handicapping, delaying, pausing, defunding, marginalizing, and/or shutting down many research fields. And moral stigmatization in the modern social media world can operate even more quickly, powerfully, globally, and effectively. (I’m working on a longer piece about this moral stigmatization strategy for reducing AI X-risk.)
In short: maybe it’s time for EA to stop playing nice with the AI industry—given that the AI industry is not playing safely with humanity’s future.
And maybe it’s time to call a spade a spade: if AI companies are pursuing AI capabilities at a rate that could end our species, without any credible safeguards that could protect our species, then they’re evil. Maybe we should say they’re evil, treat them as evil, and encourage others to do the same, until they stop doing evil.
If I saw a path to slowing down or stopping AI development, reliably and worldwide, I think it’d be worth considering.
But I don’t think advising particular AI companies to essentially shut down (or radically change their mission) is a promising step toward that goal.
And I think partial progress toward that goal is worse than none, if it slows down relatively caution-oriented players without slowing down others.
Hello Geoffrey,
A deeper problem here is market forces: investment is pouring into the industry, and it's just not going to stop, especially given how fast ChatGPT was adopted (100M users in 2 months). This is a big reason why the AI industry will not stop; it has the economic backing to keep pushing the boundaries of AI. My hope is that AI safety guidelines are built into the first system that gets adopted by billions of people.
Thank you.
Miguel—the market forces are strong, but they can be overridden by moral stigmatization and moral disgust.
If it becomes morally taboo to invest in AI companies, to work in AI research, to promote AI development, or to vote for pro-AI politicians, then AI research will be handicapped. Just as many other areas of research and development have been handicapped by moral taboos over the last century.
Greed is a strong emotion driving AI investment. But moral disgust can be an even stronger emotion that could reduce AI investment.
Greed is one thing; it is a universal human problem. I would say a big chunk of the industry is driven by greed, but there are also people who are trying to adapt and build the technology properly. Those working in alignment research probably fall into that category, though I'm not sure what moral standards they hold themselves to.
On moral disgust: my analysis is that it is possible to push this concept, but I believe the general public will not gravitate to it. Most will side with the technology camp, since AI's defenders will frame it as something that "makes things easier", which is an easier idea to sell.