Totalitarianism

Totalitarianism is an all-embracing system of government that exercises virtually complete control over every aspect of individual life. Robust totalitarianism may be defined as a type of totalitarianism particularly effective at enforcing its ideological vision and preventing internal and external threats to its authority.

Characteristics

Benito Mussolini famously characterized totalitarianism as “all within the state, nothing outside the state, none against the state.”[1] Contemporary scholars have listed several distinctive features of totalitarian regimes. These features include a radical official ideology, usually exclusionary and future-oriented; a single party, typically led by one man; a monopoly of the means of both persuasion and coercion; a centrally planned economy, in which most professional activities are part of the state; and extreme polarization and widespread use of terror in all spheres of life.[2][3][4] Totalitarian regimes are estimated to have been responsible for the deaths of over 125 million people in the 20th century, mostly in the Soviet Union, Nazi Germany, and communist China.[5] To this tragic loss of life must be added the severe loss of quality of life experienced by those living under such regimes.

Robust totalitarianism as a catastrophic and existential risk

Because of its scale, the threat of robust totalitarianism constitutes a global catastrophic risk. If the totalitarian regime has the potential to be both global and stable, it could also constitute an existential risk—specifically a risk of an unrecoverable dystopia.

Advances in artificial intelligence in areas such as lie detection, social persuasion and deception, autonomous weapons, and ubiquitous surveillance could entrench existing totalitarian regimes. These developments may also cause democracies to slide into totalitarianism.[6] On the other hand, AI could conceivably destabilize totalitarian systems or protect against their emergence.[7] To date, no detailed analysis exists of the potential impact of artificial intelligence on the risk of robust totalitarianism. The literature on robust totalitarianism in general is itself very small.[8]

Evaluation

80,000 Hours rates risks of robust totalitarianism a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[9][10]

Further reading

Aird, Michael (2020) Collection of sources related to dystopias and “robust totalitarianism”, Effective Altruism Forum, March 30.
Many additional resources on this topic.

Caplan, Bryan (2008) The totalitarian threat, in Nick Bostrom & Milan M. Ćirković (eds.) Global Catastrophic Risks, Oxford: Oxford University Press, pp. 504–519.

Related entries

dystopia | global governance

1. Mussolini, Benito (1932) ‘La dottrina del fascismo’, in Enciclopedia italiana di scienze, lettere ed arti, Roma: Istituto della Enciclopedia Italiana.

2. Friedrich, Carl J. & Zbigniew K. Brzezinski (1965) Totalitarian Dictatorship and Autocracy, 2nd ed., Cambridge: Harvard University Press, p. 22.

3. Aron, Raymond (1965) Démocratie et totalitarisme, Paris: Gallimard, ch. 15.

4. Holmes, Leslie (2001) Totalitarianism, in Neil J. Smelser & Paul B. Baltes (eds.) International Encyclopedia of the Social & Behavioral Sciences, Amsterdam: Elsevier, pp. 15788–15791.

5. Bernholz, Peter (2000) Totalitarianism, in Charles K. Rowley & Friedrich Schneider (eds.) The Encyclopedia of Public Choice, Boston: Springer, pp. 565–569, p. 568.

6. Dafoe, Allan (2018) AI governance: A research agenda, Future of Humanity Institute, University of Oxford, section 4.1.

7. Adamczewski, Tom (2019) A shift in arguments for AI risk, Fragile Credences, May 25, section ‘Robust totalitarianism’.

8. Caplan, Bryan (2008) The totalitarian threat, in Nick Bostrom & Milan M. Ćirković (eds.) Global Catastrophic Risks, Oxford: Oxford University Press, pp. 504–519.

9. Koehler, Arden (2020) Problem areas beyond 80,000 Hours’ current priorities, Effective Altruism Forum, June 22, section ‘Risks of stable totalitarianism’.

10. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.

Stable totalitarianism: an overview (80000_Hours, 29 Oct 2024 · 35 points · 1 comment · 20 min read · 80000hours.org)
Cause Area: Human Rights in North Korea (Dawn Drescher, 20 Nov 2017 · 64 points · 12 comments · 20 min read)
Reducing long-term risks from malevolent actors (David_Althaus, 29 Apr 2020 · 341 points · 93 comments · 37 min read)
A New X-Risk Factor: Brain-Computer Interfaces (Jack, 10 Aug 2020 · 76 points · 12 comments · 42 min read)
Problem areas beyond 80,000 Hours’ current priorities (Arden Koehler, 22 Jun 2020 · 280 points · 62 comments · 15 min read)
Some history topics it might be very valuable to investigate (MichaelA🔸, 8 Jul 2020 · 91 points · 34 comments · 6 min read)
[Review and notes] How Democracy Ends—David Runciman (Ben, 13 Feb 2020 · 31 points · 1 comment · 5 min read)
Ben Garfinkel: The future of surveillance (EA Global, 8 Jun 2018 · 18 points · 0 comments · 11 min read · www.youtube.com)
[Question] What are some effective/impactful charities in the domain of human rights and anti-authoritarianism? (mbb2982838, 28 Aug 2021 · 22 points · 4 comments · 1 min read)
[Question] Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? (MichaelA🔸, 2 Feb 2021 · 14 points · 21 comments · 1 min read)
Is Democracy a Fad? (bgarfinkel, 13 Mar 2021 · 165 points · 36 comments · 18 min read)
Common ground for longtermists (Tobias_Baumann, 29 Jul 2020 · 83 points · 8 comments · 4 min read)
Some thoughts on risks from narrow, non-agentic AI (richard_ngo, 19 Jan 2021 · 36 points · 2 comments · 8 min read)
Venn diagrams of existential, global, and suffering catastrophes (MichaelA🔸, 15 Jul 2020 · 81 points · 7 comments · 7 min read)
Towards a longtermist framework for evaluating democracy-related interventions (Tom Barnes, 28 Jul 2021 · 96 points · 5 comments · 30 min read)
World federalism and EA (Eevee🔹, 14 Jul 2021 · 47 points · 4 comments · 1 min read)
[Question] Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values? (diodio_yang, 28 Feb 2023 · 25 points · 12 comments · 2 min read)
Narration: Reducing long-term risks from malevolent actors (D0TheMath, 15 Jul 2021 · 23 points · 0 comments · 1 min read · anchor.fm)
Effective means to combat autocracies (Junius Brutus, 28 Aug 2022 · 32 points · 2 comments · 5 min read)
Could a ‘permanent global totalitarian state’ ever be permanent? (Geoffrey Miller, 23 Aug 2022 · 39 points · 17 comments · 1 min read)
Wikipedia is not so great, and what can be done about it. (Rey Bueno, 12 Dec 2022 · 15 points · 1 comment · 16 min read · www.reddit.com)
[Cause Exploration Prizes] Dynamic democracy to guard against authoritarian lock-in (Open Philanthropy, 24 Aug 2022 · 12 points · 1 comment · 12 min read)
How democracy ends: a review and reevaluation (richard_ngo, 24 Nov 2018 · 27 points · 2 comments · 6 min read · thinkingcomplete.blogspot.com)
Summary of and thoughts on “Dark Skies” by Daniel Deudney (Cody_Fenwick, 31 Dec 2022 · 38 points · 1 comment · 5 min read)
The totalitarian implications of Effective Altruism (Ed_Talks, 14 Jun 2022 · 11 points · 14 comments · 1 min read · edtalks.substack.com)
One, perhaps underrated, AI risk. (Alex (Αλέξανδρος), 28 Nov 2024 · 5 points · 1 comment · 3 min read)