Will—many of these AGI side-effects seem plausible—and almost all are alarming, with extremely high risks of catastrophe and disruption to almost every aspect of human life and civilization.
My main take-away from such thinking is that human individuals and institutions have very poor capacity to respond to AGI disruptions quickly, decisively, and intelligently enough to avoid harmful side-effects. Even if the AGI is technically ‘aligned’ enough not to directly cause human extinction, its downstream technological, economic, and cultural side-effects seem so dangerously unpredictable that we are very unlikely to manage them well.
Thus, AGI would be a massive X-risk amplifier in almost every other domain of human life. As I’ve argued many times, whatever upsides we can reap from AGI will still be there in a century, or a millennium, but whatever downsides are imposed by AGI could start hurting us within a few years. There’s a huge temporal asymmetry to consider. (Maybe we can solve alignment in the next few centuries, and then we’d feel reasonably safe proceeding with AGI research. But maybe not. There’s every reason to take our time when we’re facing a potential Great Filter.)
Therefore it seems like a top priority for EA to pause, slow, or stop AGI development ASAP, through both formal moratoria/regulations and informal moral stigmatization of the AI industry (as I argued here).
We face a key decision point, right now, in 2023. Does EA keep playing nice with an AI industry that is driving at top speed toward maximal extinction risk? Or do we take a stand against the most dangerous industry in human history?