Can you say more about how you think the solving things part pulls towards x-risk?
I’m not Jan, but I think (paraphrasing) “Superintelligence will give godlike power and might kill us all. Our solution is that the good guys should race to build the artificial god at breakneck speed first, and then hope to align it with duct tape and prayer” should not, frankly, be your first-resort strategy. If this becomes the US’s/China’s natsec community’s first introduction to considerations around superintelligence or AGI or alignment etc., I think it will predictably increase x-risk by lodging the zero- (actually negative-) sum framing in people’s heads before they stumble across other considerations.