I do think we have to argue that national securitisation is more dangerous than humanity macrosecuritisation, or than non-securitised alternatives. It's important to note that whilst I explicitly discuss humanity macrosecuritisation, there are other alternatives as well that Aschenbrenner's national securitisation compromises, as I briefly argue in the piece.
Of course, I have not and was not intending to provide a complete argument for this (the piece is only 6,000 words), although I think I go further towards proving it than you give me credit for here. As I summarise in the piece, the Sears (2023) thesis provides a convincing argument from empirical examples that national securitisation (and a failure of humanity macrosecuritisation) is the most common factor in the failure of Great Powers to adequately combat existential threats (e.g. the failure of the Baruch Plan/international control of nuclear energy, the promotion of technology competition around AI versus arms agreements in the face of the threat of nuclear winter, the BWC, the Montreal Protocol). Given this limited but still significant data that I draw on, I do think it is unfair to suggest that I haven't provided an argument that national securitisation is more dangerous on net. Moreover, as I address in the piece, Aschenbrenner fails to provide any convincing track record of success for national securitisation, whilst the historical analogies he uses (Szilard, Oppenheimer and Teller) all indicate he is pursuing a course of action that probably isn't safe. Whilst of course I didn't go through every argument, Section 1 argues that national securitisation isn't inevitable, and Section 2 argues that, at least on the historical case studies, humanity macrosecuritisation is safer than national securitisation. The other sections show why I think Aschenbrenner's argument is dangerous rather than just wrong, and how he ignores other important factors.
The core of Aschenbrenner's argument is that national securitisation is desirable and thus we ought to promote and embrace it ('see you in the desert'). Yet he fails to engage with the generally poor track record of national securitisation at promoting existential safety, and fails to provide a legitimate counter-argument. He also, as we both acknowledge, fails to adequately deal with possibilities for international collaboration. His argument for why we need national securitisation seems to be premised on three main ideas: (1) it is inevitable (there are no alternatives); (2) the USA 'winning' the future with its values is our most important concern (whilst alignment is important, I think it is secondary to this for Aschenbrenner); and (3) the US natsec establishment is the way to ensure we get a maximally good future. I think Aschenbrenner is wrong on the first point (and certainly fails to adequately justify it). On the second point, he overestimates the importance of the US winning relative to the difficulty of alignment, and his argument fails to deal with many of the thorny questions here (what about non-humans? how does this freedom survive in a world of AGI?). On the third point, he goes some way to justifying why the US natsec establishment would be more likely to 'win' a race, but fails to show why such a race would be safe (particularly given its track record). He also fails to argue that natsec would allow the values we care about to be preserved (US natsec doesn't have the best track record with respect to freedom, human rights, etc.).
On the point about the instability of international agreements: I do think this is the strongest argument against my model of humanity macrosecuritisation leading to a regime that stops the development of AGI. However, as I allude to in the essay, such a regime isn't the only alternative to national securitisation. Since publishing the piece, this is the biggest mistake in reasoning (and I'm happy to call it that) that I have seen people make. The chain of logic that runs 'humanity macrosecuritisation leading to an agreement would be unstable, therefore promoting national securitisation is the best course of action' is flawed; one needs to show that the plethora of other alternatives (depolitical/political/riskified decision-making, or humanity macrosecuritisation without an agreement) are not viable, and Aschenbrenner doesn't address this at all. I also, as I think you do, see Aschenbrenner's argument against an agreement as containing very little substance. I don't mean to say it's obviously wrong, but he hardly even argues for it.
I do think stronger arguments for the need to nationally securitise AI could be provided, and I also think they would probably be wrong. Similarly, stronger arguments than mine can be made for why we need to macrosecuritise superintelligence on behalf of humanity, and for how international collaboration on controlling AI development could work, in ways that address some of the concerns one may have (I am working on something like this). But the point of this piece is to engage with the narratives and arguments in Aschenbrenner's piece. I think he fails to justify national securitisation whilst also taking action that endangers us (and I'm hearing from people connected to US politics that the impact of his piece may actually be worse than I feared).
On the stable totalitarianism point, it's also useful to note that it is not at all obvious that the risk of stable totalitarianism is greater under some form of global collaboration than under a nationally securitised race.