Totalitarianism is an all-embracing system of government that exercises virtually complete control over every aspect of individual life. Robust totalitarianism may be defined as a type of totalitarianism particularly effective at enforcing its ideological vision and preventing internal and external threats to its authority.
Characteristics
Benito Mussolini famously characterized totalitarianism as “all within the state, nothing outside the state, none against the state.”[1] Contemporary scholars have listed several distinctive features of totalitarian regimes. These features include a radical official ideology, usually exclusionary and future-oriented; a single party, typically led by one man; a monopoly of the means of both persuasion and coercion; a centrally planned economy, in which most professional activities are part of the state; and extreme politicization and widespread use of terror in all spheres of life.[2][3][4] Totalitarian regimes are estimated to have been responsible for the deaths of over 125 million people in the 20th century, mostly in the Soviet Union, Nazi Germany, and communist China.[5] To this tragic loss of life must be added the major loss of quality of life experienced by those living under such regimes.
Robust totalitarianism as a catastrophic and existential risk
Because of its scale, the threat of robust totalitarianism constitutes a global catastrophic risk. If the totalitarian regime has the potential to be both global and stable, it could also constitute an existential risk—specifically a risk of an unrecoverable dystopia.
Advances in artificial intelligence in areas such as lie detection, social persuasion and deception, autonomous weapons, and ubiquitous surveillance could entrench existing totalitarian regimes. These developments may also cause democracies to slide into totalitarianism.[6] On the other hand, AI could conceivably destabilize totalitarian systems or protect against their emergence.[7] To date, no detailed analysis exists of the potential impact of artificial intelligence on the risk of robust totalitarianism. The literature on robust totalitarianism in general is itself very small.[8] Research in this area thus appears to be of high expected value.[9]
Evaluation
80,000 Hours rates risks of robust totalitarianism a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[10]
Further reading
Aird, Michael (2020) Collection of sources related to dystopias and “robust totalitarianism”, Effective Altruism Forum, March 30.
Many additional resources on this topic.
Caplan, Bryan (2008) The totalitarian threat, in Nick Bostrom & Milan M. Ćirković (eds.) Global Catastrophic Risks, Oxford: Oxford University Press, pp. 504–519.
1. Mussolini, Benito (1932) La dottrina del fascismo, in Enciclopedia italiana di scienze, lettere ed arti, Roma: Istituto della Enciclopedia Italiana.
2. Friedrich, Carl J. & Zbigniew K. Brzezinski (1965) Totalitarian Dictatorship and Autocracy, 2nd ed., Cambridge: Harvard University Press, p. 22.
3. Aron, Raymond (1965) Démocratie et totalitarisme, Paris: Gallimard, ch. 15.
4. Holmes, Leslie (2001) Totalitarianism, in Neil J. Smelser & Paul B. Baltes (eds.) International Encyclopedia of the Social & Behavioral Sciences, Amsterdam: Elsevier, pp. 15788–15791.
5. Bernholz, Peter (2000) Totalitarianism, in Charles K. Rowley & Friedrich Schneider (eds.) The Encyclopedia of Public Choice, Boston: Springer, pp. 565–569, p. 568.
6. Dafoe, Allan (2018) AI governance: A research agenda, Future of Humanity Institute, University of Oxford, section 4.1.
7. Adamczewski, Tom (2019) A shift in arguments for AI risk, Fragile Credences, May 25, section ‘Robust totalitarianism’.
8. Caplan, Bryan (2008) The totalitarian threat, in Nick Bostrom & Milan M. Ćirković (eds.) Global Catastrophic Risks, Oxford: Oxford University Press, pp. 504–519.
9. Koehler, Arden (2020) Problem areas beyond 80,000 Hours’ current priorities, Effective Altruism Forum, June 22, section ‘Risks of stable totalitarianism’.
10. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.
I think it’d be good to rename this tag “authoritarianism”. I have the impression that EAs/longtermists have often focused more on totalitarianism than on authoritarianism, or have used the terms as if they were somewhat interchangeable. But it seems to me that totalitarianism is best considered a subtype of authoritarianism, and that other types of authoritarian regime also have the potential to cause problems in similar ways. So I think it’d be best to default to the more inclusive term “authoritarianism”, except when someone has a specific reason to focus on totalitarianism in particular.
(I elaborate on this a bit here)
I don’t have a strong opinion on this one. I think I may have a slight preference for “totalitarianism”, since paradigmatic totalitarian regimes seem to me to better resemble the scenarios that most worry EAs and longtermists. I will copy below the rough draft I had for this article, in case it helps decide whether the name should be changed (please excuse the lack of proper formatting).
--
** Characteristics
Benito Mussolini famously characterized totalitarianism as “all within the state, nothing outside the state, none against the state.” [fn:1] Contemporary scholars have listed several distinctive features of totalitarian regimes. These features include a radical official ideology, usually exclusionary and future-oriented; a single party, typically led by one man; a monopoly of the means of both persuasion and coercion; a centrally planned economy, in which most professional activities are part of the state; and extreme politicization and widespread use of terror in all spheres of life (Friedrich & Brzezinski 1965: 22; Aron 1965: ch. 15; Holmes 2001). Totalitarian regimes are estimated to have been responsible for the deaths of over 125 million people in the 20th century (Bernholz 2000: 568). To this tragic loss of life must be added the major loss of quality of life experienced by those living under such regimes.
** Robust totalitarianism as a catastrophic and existential risk
Because of its scale, the threat of robust totalitarianism constitutes a [[*Global catastrophic risk][global catastrophic risk]]. If the totalitarian regime has the potential to be both global and stable, it could also constitute an [[*Existential risk][existential risk]]—specifically a risk of an unrecoverable [[*Dystopia][dystopia]].
Advances in [[*Artificial intelligence][artificial intelligence]] in areas such as lie detection, social persuasion and deception, autonomous weapons, and ubiquitous surveillance could entrench existing totalitarian regimes. These developments may also cause democracies to slide into totalitarianism (Dafoe 2018: sect. 4.1). On the other hand, AI could conceivably destabilize totalitarian systems or protect against their emergence (Adamczewski 2019: sect. ‘Robust totalitarianism’). To date, no detailed analysis exists of the potential impact of artificial intelligence on the risk of robust totalitarianism. The literature on robust totalitarianism in general is itself very small (Caplan 2008). Research in this area thus appears to be of high expected value (Koehler 2020: sect. ‘Risks of stable totalitarianism’).
Another possibility is to have separate articles on each concept. I don’t know if this is a good idea—just mentioning it as something to keep in mind.
I think it’s very plausible that the main ways in which totalitarian regimes would be bad for the long-term future would also be mostly shared by authoritarian regimes, and essentially come down to making premature or bad lock-in more likely and a long reflection less likely. Both systems seem less conducive than, e.g., democratic societies to developing new ideas and correcting errors (including moral catastrophes) over time.
But I don’t think there’s been any detailed writeup arguing either for the position I’m suggesting or for the one you suggest. So I guess this is a tricky case where there’s been very little work on a topic, such that the wiki kind-of has to take its own stance, or where the only way to avoid taking a stance is to stick closely to the tiny amount of existing work and thereby (in my view) kind-of replicate its oversights.
---
I guess if I publish my research agenda and a small, quickly written post outlining associated thoughts, the entry could then reference that work and draw on its scope and ideas?
Or in situations like this, would it make more sense for the wiki to deviate from Wikipedia by kind-of containing “original research” (via me adding or changing some paragraphs in a way that isn’t based on published work I’ve seen but rather on my own thoughts—though I think those thoughts are things various people have thought themselves as well)?
Obviously a third option is just to stick closer to the “totalitarianism” framing and not put that much emphasis on what a single person—me—is saying here.
---
It seems like the EA Wiki is in a bit of a different situation from Wikipedia here, since (1) a substantial fraction of the users of the Wiki will themselves be a substantial fraction of the people generating the research that some articles are based on, and (2) a substantial fraction of the sources cited would count as “self-published” sources of the sort Wikipedia doesn’t generally approve of citing. So it seems worth thinking about whether and how we might want to deviate from the “no original research” policy Wikipedia has.
I think the two topics receive a small enough fraction of total EA attention, and overlap enough in what they are and why they matter, that it’s probably not currently worth having a separate entry for each.
I think another option is an article on authoritarianism that actually mostly discusses totalitarianism, but makes it clear that this is a subtype and that other types might matter too.