Thanks for writing this, it’s clearly valuable to advance a dialogue on these incredibly important issues.
I feel an important shortcoming of this critique is that it frames the choice between national securitization and macrosecuritization as a choice between narratives, without considering incentives. I think Leopold gives more consideration to alternatives than you give him credit for, but argues that macrosecuritization is too unstable an equilibrium:
Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, “breakout” is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, too great.
I also think you underplay the extent to which Leopold’s focus on national security is instrumental to his goal of safeguarding humanity’s future. You write: “It is true that Aschenbrenner doesn’t always see himself as purely protecting America, but the free world as a whole, and probably by his own views, this means he is protecting the whole world. He isn’t, seemingly, motivated by pure nationalism, but rather a belief that American values must ‘win’ the future.” (emphasis mine.)
First, I think you’re too quick to dismiss Leopold’s views as you state them. But what’s more, Leopold explicitly disavows the specific framing you attribute to him:
To be clear, I don’t just worry about dictators getting superintelligence because “our values are better.” I believe in freedom and democracy, strongly, because I don’t know what the right values are [...] I hope, dearly, that we can instead rely on the wisdom of the Framers—letting radically different values flourish, and preserving the raucous plurality that has defined the American experiment.
Both of these claims—that international cooperation or a pause is an unstable equilibrium, and that the West maintaining an AI lead is more likely to lead to a future with free expression and political experimentation—are empirical. Maybe you’d disagree with them, but then I think you need to argue that this model is wrong, not that he’s just chosen the wrong narrative.
Thanks for this reply Stephen, and sorry for my late reply, I was away.
I think it’s true that Aschenbrenner gives (marginally) more consideration than I gave him credit for—not actually sure how I missed that paragraph, to be honest! Even then, whilst there is some merit to that argument, I think he needs to justify his dismissal of an international treaty much better (along similar lines to your shortform piece). As I argue in the essay, such a lack of stability requires a particular reading of how states act—for example, I argue that if we buy a form of defensive realism, states may in fact be more inclined to reach a stable equilibrium. Moreover, as I argue, I think Aschenbrenner fails to acknowledge how his ideas on this may well become a self-fulfilling prophecy.
I actually think I just disagree with your characterisation of my second point, although it could well be a flaw in my communication, and if so I apologise. My argument isn’t even that the values of freedom and democracy, or even a narrower form of ‘American values’, wouldn’t be better for the future (see below for more discussion on that); it’s that national securitisation has a bad track record of promoting collaboration and dealing with extreme risk, and we have good reason to think it may be bad in the case of AI. So even if Aschenbrenner doesn’t frame it as national securitisation for the sake of nationalism, but rather national securitisation for the sake of all humanity, the impacts will be the same. The point of that paragraph was simply to preempt exactly the critique you raise. I also think it’s clear that Aschenbrenner in his piece is happy to conflate those values with ‘American nationalism/dominance’ (e.g. ‘America must win’), so I’m not sure him making this distinction actually matters.
I am also probably much less bullish on American dominance than Aschenbrenner is. I’m not sure the American national security establishment actually has a good track record of preserving a ‘raucous plurality’, and if (as Aschenbrenner wants) we expect superintelligence to be developed through that institution, I’m not overly confident in how good it will be. Whilst I am no friend of dictatorships, I’m also unconvinced that, if one cares about raucous pluralism, US dominance (certainly to the extent that Aschenbrenner envisions it) is necessarily a good thing. Moreover, even in American democracy, the vast majority of moral patients aren’t represented at all. I’m essentially unconvinced that the benefits of America ‘winning’ a nationally securitised AI race anywhere near outweigh the geopolitical risk, misalignment risk, and, most importantly, the risk of not taking our time to construct a mutually beneficial future for all sentient beings. I think I have put this paragraph quite crudely, and would be happy to elaborate further, although it isn’t actually central to my argument.
I think it’s wrong to say that my argument doesn’t work without significant argument against those two premises. Firstly, my argument was that Aschenbrenner was ‘dangerous’, which required highlighting why the narrative choice was problematic. Secondly, yes, there is more to do on those points, but given Aschenbrenner’s failure to give in-depth argumentation on them, I thought they would be better dealt with as their own pieces (which I may or may not write). In my view, the most important aspect of the piece was Aschenbrenner’s claim that national securitisation is necessary to secure the safest outcomes, and I do feel the piece was broadly successful at arguing that this is a dangerous narrative to propagate. I do think that if you hold Aschenbrenner’s assumptions strongly (namely, that cooperation is very difficult, that alignment is easy-ish, and that the most important thing is an American AI lead, as this leads to a maximally good future by maximising free expression and political experimentation), then my argument is not convincing. I do, however, think this model is based on some rather controversial assumptions and, given the dangers involved, is woefully insufficiently justified by Aschenbrenner in his essay.
One final point is that it is still entirely non-obvious, as I mention in the essay, that national securitisation is the best frame even if a pause is impossible, or, on the weaker claim, if it is merely an unstable equilibrium.
Thanks for this, really helpful! For what it’s worth, I also think Leopold is far too dismissive of international cooperation.
You’ve written there that “my argument was that Aschenbrenner was ‘dangerous’”. I definitely agree that securitisation (and technology competition) often raises risks.[1] I think we have to argue further, though, that securitisation is more dangerous on net than the alternative: a pursuit of international cooperation that may, or may not, be unstable. That, too, may raise some risks, e.g. proliferation and stable authoritarianism.
I do think we have to argue that national securitisation is more dangerous than humanity securitisation, or than non-securitised alternatives. I think it’s important to note that whilst I explicitly discuss humanity macrosecuritisation, there are other alternatives as well that Aschenbrenner’s national securitisation compromises, as I briefly argue in the piece.
Of course, I have not provided, and was not intending to provide, an entire and complete argument for this (it is only 6,000 words), although I think I go further toward proving it than you give me credit for here. As I summarise in the piece, the Sears (2023) thesis provides a convincing argument, from empirical examples, that national securitisation (and a failure of humanity macrosecuritisation) is the most common factor in the failure of Great Powers to adequately combat existential threats (e.g. the failure of the Baruch Plan/international control of nuclear energy, the promotion of technology competition around AI vs arms agreements under the threat of nuclear winter, the BWC, the Montreal Protocol). Given this limited but still significant data that I draw on, I do think it is unfair to suggest that I haven’t provided an argument that national securitisation is more dangerous on net. Moreover, as I address in the piece, Aschenbrenner fails to provide any convincing track record of success for national securitisation, whilst his own historical analogies (Szilard, Oppenheimer and Teller) all indicate he is pursuing a course of action that probably isn’t safe. Whilst of course I didn’t go through every argument, I think Section 1 provides arguments that national securitisation isn’t inevitable, and Section 2 the argument that, at least from historical case studies, humanity macrosecuritisation is safer than national securitisation. The other sections show why I think Aschenbrenner’s argument is dangerous rather than just wrong, and how he ignores other important factors.
The core of Aschenbrenner’s argument is that national securitisation is desirable and thus we ought to promote and embrace it (‘see you in the desert’). Yet he fails to engage with the generally poor track record of national securitisation at promoting existential safety, or to provide a legitimate counter-argument. He also, as we both acknowledge, fails to adequately deal with possibilities for international collaboration. His argument for why we need national securitisation seems to be premised on three main ideas: that it is inevitable (/there are no alternatives); that the USA’s values ‘winning’ the future is our most important concern (whilst alignment is important, I do think Aschenbrenner treats it as secondary to this); and that the US natsec establishment is the way to ensure that we get a maximally good future. I think Aschenbrenner is wrong on the first point (and certainly fails to adequately justify it). On the second point, he overestimates the importance of the US winning compared to the difficulty of alignment, and certainly, I think his argument for this fails to deal with many of the thorny questions here (what about non-humans? how does this freedom survive in a world of AGI? etc.). On the third point, I think he goes some way to justifying why the US natsec establishment would be more likely to ‘win’ a race, but fails to show why such a race would be safe (particularly given its track record). He also fails to argue that natsec would allow the values we care about to be preserved (US natsec doesn’t have the best track record with reference to freedom, human rights, etc.).
On the instability of international agreements: I do think this is the strongest argument against my model of humanity macrosecuritisation leading to a regime that stops the development of AGI. However, as I allude to in the essay, this isn’t the only alternative to national securitisation. Since publishing the piece, this is the biggest mistake in reasoning (and I’m happy to call it that) that I see people making. The chain of logic that goes ‘humanity macrosecuritisation leading to an agreement would be unstable, therefore promoting national securitisation is the best course of action’ is flawed; one needs to show that the plethora of other alternatives (depoliticised/politicised/riskified decision-making, or humanity macrosecuritisation without an agreement) are not viable—Aschenbrenner doesn’t address this at all. I also, as I think you do, see Aschenbrenner’s argument against an agreement as containing very little substance—I don’t mean to say it’s obviously wrong, but he hardly even argues for it.
I do think stronger arguments for the need to nationally securitise AI could be provided, and I also think they are probably wrong. Similarly, I think stronger arguments than mine can be provided for why we need to humanity macrosecuritise superintelligence, and for how international collaboration on controlling AI development could work (I am working on something like this), which would address some of the concerns one may have. But the point of this piece is to engage with the narratives and arguments in Aschenbrenner’s piece. I think he fails to justify national securitisation whilst also taking action that endangers us (and I’m hearing from people connected to US politics that the impact of his piece may actually be worse than I feared).
On the stable totalitarianism point, I also think it’s useful to note that it is not at all obvious that the risk of stable totalitarianism is greater under some form of global collaboration than under a nationally securitised race.
[1] Anyone interested can read far more than they probably want to here.