First, I will address the claim that nuclear war is not an existential issue. Even a full NATO–Russia exchange, in the worst nuclear-winter case, would not kill everybody. But what kind of societies would be left after the shock? Military aristocracies, North Korea-like totalitarian regimes, large tracts of Somalia-style anarchy waiting to be invaded by their imperialist neighbors, etc. Nothing else could keep political coherence after such a shock. Nuclear supremacy would be the only natural goal of any surviving political entity.
The problem with nuclear weapons is that they are an unavoidable step in technological progress. At some point you have "godlike powers" combined with "medieval institutions," no matter how many times you iterate. Let's simplify: if you need 1,000 years to recover from a nuclear war, and (given the intractability of the human alignment problem) a major nuclear war occurs every 150 years, you are in a new kind of Malthusian trap (more specifically, a nuclear-fueled Hobbesian trap).
In reality, I don't expect a post-nuclear-war world to be one of 1,000 years of recovery followed by another major nuclear war (the typical "A Canticle for Leibowitz" story), but rather a world of totalitarian bellicosity, with frequent nuclear exchanges and whole societies oriented toward war. At some point, if AGI is possible, some country will develop it, with the kind of purpose that guarantees it will be Skynet.
As a consequence, if we have no prospect of an alternative solution to the human alignment problem, my view is that we should try to develop AGI as soon as possible, because we are the best version of Mankind that could develop it (we, the Mankind of 2023, and we, the Western democratic world: both "we").
I agree with the idea that nuclear wars, whether small or large, would probably push human civilization in a bad direction: slower-growth, more zero-sum and hateful, more warlike. And thus, the possibility of civilizational recovery is not as bright a silver lining as it seems (although it is still worth something).
I disagree that this means that we should “try to develop AGI as soon as possible”, which connotes to me “tech companies racing to deploy more and more powerful systems without much attention paid to alignment concerns, and spurred on by a sense of economic competition rather than cooperating for the good of humanity, or being subject to any kind of democratic oversight”.
I don’t think we should pause AI development indefinitely—because like you say, eventually something would go wrong, whether a nuclear war or someone skirting the ban to train a dangerous superintelligent AI themselves. But I would be very happy to “pause” for a few years while the USA / western world figures out a regulatory scheme to restrain the sense of an arms race between tech companies, and puts together some sort of “manhattan/apollo project for alignment”. Then we could spend a decade working hard on alignment, while also developing AI capabilities in a more deliberate, responsible, centralized way. At the end of that decade I think we would still be ahead of China and everyone else, and I think we would have put humanity in a much better position than if we tried to rush to get AGI “as soon as possible”.