I agree with the idea that nuclear wars, whether small or large, would probably push human civilization in a bad, slower-growth, more zero-sum and hateful, more-warlike direction. And thus, the idea of civilizational recovery is not as bright a silver lining as it seems (although it is still worth something).
I disagree that this means that we should “try to develop AGI as soon as possible”, which connotes to me “tech companies racing to deploy more and more powerful systems without much attention paid to alignment concerns, and spurred on by a sense of economic competition rather than cooperating for the good of humanity, or being subject to any kind of democratic oversight”.
I don’t think we should pause AI development indefinitely—because like you say, eventually something would go wrong, whether a nuclear war or someone skirting the ban to train a dangerous superintelligent AI themselves. But I would be very happy to “pause” for a few years while the USA / western world figures out a regulatory scheme to rein in the arms-race dynamic between tech companies, and puts together some sort of “Manhattan/Apollo project for alignment”. Then we could spend a decade working hard on alignment, while also developing AI capabilities in a more deliberate, responsible, centralized way. At the end of that decade I think we would still be ahead of China and everyone else, and I think we would have put humanity in a much better position than if we tried to rush to get AGI “as soon as possible”.