Totalitarianism is an all-embracing system of government that exercises virtually complete control over every aspect of individual life. Robust totalitarianism may be defined as a type of totalitarianism particularly effective at enforcing its ideological vision and preventing internal and external threats to its authority.
Benito Mussolini famously characterized totalitarianism as “everything within the state, nothing outside the state, nothing against the state” (Mussolini 1932). Contemporary scholars have identified several distinctive features of totalitarian regimes. These include a radical official ideology, usually exclusionary and future-oriented; a single party, typically led by one man; a monopoly of the means of both persuasion and coercion; a centrally planned economy, in which most professional activities are part of the state; and extreme polarization and widespread use of terror in all spheres of life (Friedrich & Brzezinski 1965: 22; Aron 1965: ch. 15; Holmes 2001). Totalitarian regimes are estimated to have been responsible for the deaths of over 125 million people in the 20th century, mostly in the Soviet Union, Nazi Germany, and communist China (Bernholz 2000: 568). To this tragic loss of life must be added the major loss in quality of life experienced by those living under such regimes.
Robust totalitarianism as a catastrophic and existential risk
Because of its scale, the threat of robust totalitarianism constitutes a global catastrophic risk. If the totalitarian regime has the potential to be both global and stable, it could also constitute an existential risk—specifically a risk of an unrecoverable dystopia.
Advances in artificial intelligence in areas such as lie detection, social persuasion and deception, autonomous weapons, and ubiquitous surveillance could entrench existing totalitarian regimes. These developments may also cause democracies to slide into totalitarianism (Dafoe 2018: sect. 4.1). On the other hand, AI could conceivably destabilize totalitarian systems or protect against their emergence (Adamczewski 2019: sect. ‘Robust totalitarianism’). To date, no detailed analysis exists of the potential impact of artificial intelligence on the risk of robust totalitarianism, and the literature on robust totalitarianism in general is itself very small (Caplan 2008). Research in this area is thus of high expected value (Koehler 2020: sect. ‘Risks of stable totalitarianism’).
Adamczewski, Tom (2019) A shift in arguments for AI risk, Fragile Credences, May 25.
Aird, Michael (2020) Collection of sources related to dystopias and “robust totalitarianism”, Effective Altruism Forum, March 30.
Many additional resources on this topic.
Aron, Raymond (1965) Démocratie et totalitarisme, Paris: Gallimard.
Bernholz, Peter (2000) Totalitarianism, in Charles K. Rowley & Friedrich Schneider (eds.) The Encyclopedia of Public Choice, Boston: Springer, pp. 892–897.
Caplan, Bryan (2008) The totalitarian threat, in Nick Bostrom & Milan M. Ćirković (eds.) Global Catastrophic Risks, Oxford: Oxford University Press, pp. 504–519.
Dafoe, Allan (2018) AI governance: A research agenda, Future of Humanity Institute, University of Oxford.
Friedrich, Carl J. & Zbigniew K. Brzezinski (1965) Totalitarian Dictatorship and Autocracy, 2nd ed., Cambridge: Harvard University Press.
Holmes, Leslie (2001) Totalitarianism, in Neil J. Smelser & Paul B. Baltes (eds.) International Encyclopedia of the Social & Behavioral Sciences, Amsterdam: Elsevier, pp. 15788–15791.
Koehler, Arden (2020) Problem areas beyond 80,000 Hours’ current priorities, Effective Altruism Forum, June 22.
Mussolini, Benito (1932) La dottrina del fascismo, in Enciclopedia italiana di scienze, lettere ed arti, Rome: Istituto della Enciclopedia Italiana.