I just read Stephen Clare's excellent 80k article about the risks of stable totalitarianism.
I've been interested in this area for some time (though my focus is somewhat different) and I'm really glad more people are working on this.
In the article, Stephen puts the probability that a totalitarian regime will control the world indefinitely at about 1 in 30,000. My probability that a totalitarian regime will control a non-trivial fraction of humanity's future is considerably higher (though I haven't thought much about this).
One point of disagreement may be the following. Stephen writes:
There’s also the fact that the rise of a stable totalitarian superpower would be bad for everyone else in the world. That means that most other countries are strongly incentivized to work against this problem.
This is not clear to me. Stephen most likely understands the relevant topics far better than I do, but I worry that autocratic regimes often seem to cooperate. This has happened historically (e.g., Nazi Germany, fascist Italy, and Imperial Japan) and also seems to be happening today. My sense is that Russia, China, Venezuela, Iran, and North Korea have formed some type of loose alliance, at least to some extent (see also Anne Applebaum's Autocracy Inc.). Perhaps this doesn't apply to strictly totalitarian regimes (though it did apply to Germany, Italy, and Japan in the 1940s).
Autocratic regimes control a non-trivial fraction (perhaps 20-25%?) of world GDP. A naive extrapolation could thus suggest that some type of coalition of autocratic regimes will control 20-25% of humanity's future (assuming these regimes won't reform themselves).
Depending on the offense-defense balance (and depending on how people trade off reducing suffering/injustice against other values such as national sovereignty, non-interference, isolationism, personal costs to themselves, etc.), this arrangement may very well persist.
It's unclear how much suffering such regimes would create; perhaps there would be fairly little. In China, for example, ignoring political prisoners, the Uyghurs, etc., most people are probably doing fairly well (though a lot of people in, say, Iran aren't doing too well; see more below). But it's not particularly unlikely that there would be enormous amounts of suffering.
So, even though I agree that it's very unlikely that a totalitarian regime will control all or even the majority of humanity's future, it seems considerably more likely to me (perhaps even more than 1%) that a totalitarian regime—or a regime that follows some type of fanatical ideology—will control a non-trivial fraction of the universe and cause astronomical amounts of suffering indefinitely. (For example, religious fanatics often have extremely retributive tendencies and may value the suffering of dissidents or non-believers. In a pilot study, I found that 22% of religious participants at least tentatively agreed with the statement "if hell didn't exist, we should create hell in order to punish all the sinners". Senior officials in Iran have ordered the rape of female prisoners so that they would end up in hell, or at least be prevented from going to heaven (IHRDC, 2011; IranWire, 2023). One might argue that religious fanatics (with access to AGI) will surely change their irrational beliefs once it's clear they are wrong. Maybe. But I don't find it implausible that at least some people (especially religious or political fanatics) will decide that giving up their beliefs is the greatest possible evil and choose to use their AGIs to align reality with their beliefs, rather than vice versa.)
To be clear, all of this is much more important from an s-risk-focused perspective than from an upside-focused perspective.
Another disagreement may relate to tractability, i.e., how easy it is to contribute:
For example, we mentioned above that the three ways totalitarian regimes have been brought down in the past are through war, resistance movements, and the deaths of dictators. Most of the people reading this article probably aren’t in a position to influence any of those forces (and even if they could, it would be seriously risky to do so, to say the least!).
Most EAs may not be able to work directly on these topics, but there are various options that allow you to contribute indirectly:
- working in (foreign) policy or politics, or working on financial reforms that make money laundering harder for autocratic states like Russia (again, cf. Autocracy Inc.)
- becoming a journalist and writing about such topics (e.g., doing investigative journalism on corruption in autocratic regimes), and generally moving the discussion towards more important topics and away from currently trendy but less important ones
- working at think tanks that protect democratic institutions (Stephen Clare lists several)
- working on AI governance (e.g., info sec, export controls) to reduce the chance of autocratic regimes gaining access to AI (again, Stephen Clare already lists this area)
- probably several more career paths that we haven't thought of
In general, it doesn't seem harder to have an impactful career in this area than in, say, AI risk. Depending on your background and skills, it may even be a lot easier; to do valuable work on AI policy, for example, you often need to understand both policy/politics and technical fields like computer science and machine learning. Of course, this area is arguably more crowded (though AI is becoming more crowded every day).
I do think this loose alliance of authoritarian states[1] (Russia, Iran, North Korea, etc.) poses a meaningful challenge to democracies, especially insofar as the authoritarian states coordinate to undermine the democratic ones, e.g., through information warfare that increases polarization.
However, I'd emphasize "loose" here, given that they share no ideology. That makes them different from what binds together the free world[2] or what held together the Cold War's communist bloc. Such a loose coalition is merely opportunistic and transactional, and likely to dissolve if the opportunity dissipates, i.e., if the U.S. retreats from its role as the global police. Perhaps an apt historical example is how the victors in WWII splintered into NATO and the Warsaw Pact once Nazi Germany was defeated.

[1] Full disclosure: I've not (yet) read Applebaum's Autocracy Inc.

[2] What comes to mind is Kant et al.'s democratic peace theory.
Thanks, Mike. I agree that the alliance is fortunately rather loose in the sense that most of these countries share no ideology. (In fact, some of them should arguably be ideological enemies, e.g., Islamic theocrats in Iran and Maoist communists in China.)
But I worry that this alliance is held together by hatred of (or ressentiment toward) Western secular democratic principles, for both ideological and (geo)political reasons. Hatred can be an extremely powerful and unifying force. (Many political/ideological movements are arguably primarily defined, united, and motivated by what they hate: Nazism by the hatred of Jews, communism by the hatred of capitalists, racists by the hatred of other ethnicities, Democrats by the hatred of Trump and racists, Republicans by the hatred of the woke and communists, etc.)
So I worry that as long as Western democracies continue to influence international affairs, this alliance will continue to exist. And I certainly hope that Western democracies will remain powerful, and I worry that the world (and the future) will become a worse place if they don't.
Thanks, David. I mostly agree with @Stephen Clare’s points, notwithstanding that I also generally agree with your critique. (The notion of a future dominated by religious fanaticism always brings to mind Frank Herbert’s Dune saga.)
The biggest issue I have is that, to echo David, odds of 1 in 30,000 strike me as far too low.
Looking at Stephen’s math...
But let’s say there’s:
- A 10% chance that, at some point, an AI system is invented which gives whoever controls it a decisive edge over their rivals
- A 3% chance that a totalitarian state is the first to invent it, or that the first state to invent it becomes totalitarian
- A 1% chance that the state is able to use advanced AI to entrench its rule perpetually
That leaves about a 0.3% chance we see a totalitarian regime with unprecedented power (10% x 3%) and a 0.003% (1 in 30,000) chance it’s able to persist in perpetuity.
… I agree with emphasizing the potential for AI value lock-in, but I have a few questions:
- Does the eventual emergence of an AI-empowered totalitarian state most likely require someone having a "decisive edge" in AI at the outset?
- Could an AI-empowered global totalitarian state start as a democracy or even a corporation, rather than being totalitarian at the moment it acquires powerful AI capabilities?
- How do we justify the estimate that the conditional odds of perpetual lock-in are only 1%?
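For reference, here is a minimal sketch (in Python) of the arithmetic behind the quoted 1-in-30,000 figure; the three input probabilities are simply Stephen's estimates from the passage above, and the script just multiplies them through.

```python
# Reproduces the arithmetic in the quoted passage above.
# The three input probabilities are Stephen's estimates, not my own.

p_decisive_ai = 0.10    # an AI system gives whoever controls it a decisive edge
p_totalitarian = 0.03   # the state controlling it is, or becomes, totalitarian
p_lock_in = 0.01        # the state entrenches its rule perpetually

p_unprecedented_power = p_decisive_ai * p_totalitarian   # 0.003, i.e., ~0.3%
p_perpetual = p_unprecedented_power * p_lock_in           # 0.00003, i.e., ~0.003%

print(f"Totalitarian regime with unprecedented power: {p_unprecedented_power:.1%}")
print(f"Perpetual lock-in: {p_perpetual:.4%} (~1 in {round(1 / p_perpetual):,})")
```

The product comes out to roughly 1 in 33,000, which the article rounds to 1 in 30,000.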