
Dystopia


A dystopia is a civilization devoid of most value.

In Toby Ord’s typology, unrecoverable dystopias constitute one of the three main types of existential catastrophe.[1]

Ord further subdivides unrecoverable dystopias into three types, according to whether the dystopia is desired by none, some, or most of the actors living under it. Ord calls these undesired dystopias, enforced dystopias, and desired dystopias, respectively.[1]

Enforced dystopias

Enforced dystopias are the most familiar type of dystopia. In fiction, they are most prominently represented by George Orwell’s Nineteen Eighty-Four: “If you want a picture of the future, imagine a boot stamping on a human face—for ever.”[2] Outside of fiction, North Korea arguably offers an example of a stable local dystopia.[3][4] Fictional and real enforced dystopias often assume the form of robust totalitarianism, though this need not be so.

Undesired dystopias

Superficially, undesired dystopias may appear unlikely. If no one desires a world, why should we expect it to exist? The answer relates to the mismatch that can sometimes occur between individual and collective preferences: what is rational for each person may be irrational for all people. It may be best for each individual to consume resources without restraint, regardless of what the other individuals do; but if everyone acts in this manner, the result may be resource depletion, which is worse for everyone than an alternative in which everyone moderates their consumption. Scott Alexander offers a toy example of a possible undesired dystopia.[5] Imagine a society governed by two simple rules: first, every person must spend eight hours a day giving themselves strong electric shocks; second, if anyone fails to follow either rule, everyone must unite to kill this person. The result is a world in which everyone gives themselves electric shocks, since they know they will be killed otherwise. As Alexander summarizes, “Every single citizen hates the system, but for lack of a good coordination mechanism it endures.”[5]
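To make the coordination failure explicit, here is a minimal sketch in Python of the shock society as a simple game. The payoff numbers are hypothetical, chosen only to illustrate the structure, and the sketch compresses Alexander’s two rules into a single comply-or-defect choice; none of this is from the essay itself.

```python
# A toy model of Alexander's shock society. Each citizen chooses to
# "comply" (take the daily shocks) or "defect" (refuse). Rule two means
# defectors are killed, but only if the rest of society enforces it.

SHOCK_COST = -8      # hypothetical disutility of eight hours of daily shocks
DEATH_COST = -1000   # hypothetical disutility of being killed

def payoff(action: str, others_enforce: bool) -> int:
    """Payoff to one citizen, given their action and others' enforcement."""
    if action == "comply":
        return SHOCK_COST
    return DEATH_COST if others_enforce else 0

# Given that everyone else enforces, complying is each citizen's best
# response, so universal compliance is a stable equilibrium...
assert payoff("comply", others_enforce=True) > payoff("defect", others_enforce=True)

# ...even though everyone would be better off if no one complied or enforced.
assert payoff("defect", others_enforce=False) > payoff("comply", others_enforce=False)

print("Best response when others enforce: comply",
      f"({payoff('comply', True)} vs. {payoff('defect', True)})")
```

It is precisely because each citizen’s best response is evaluated against everyone else’s enforcement that the hated system persists even though no one prefers it.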

Desired dystopias

Just as one may wonder why undesired dystopias would exist, one may wonder why desired dystopias would be dystopian. Here a relevant example has been provided by Nick Bostrom.[6][7] Mass outsourcing to either digital uploads or AI agents could eventually result in a world entirely devoid of phenomenal consciousness. This could happen if it turned out that conscious states could not be instantiated in silico. It could also happen if, in this radically new environment, consciousness was selected against due to strong evolutionary pressures. It may, for instance, be more computationally efficient to represent an agent’s utility function explicitly rather than to rely on a hedonic reward architecture. On a wide range of theories, wellbeing requires consciousness (although it may not reduce to consciousness), so such a world would be devoid of moral patients, no matter how thriving it may appear to outside observers or how much the world’s inhabitants may insist that they are conscious or that their lives are worth living. Bostrom describes an imagined “technologically highly advanced society, containing many sorts of complex structures, some of which are much smarter and more intricate than anything that exists today, in which there would nevertheless be a complete absence of any type of being whose welfare has moral significance. In a sense, this would be an uninhabited society. All the kinds of being that we care even remotely about would have vanished.”[6] Aspects of this possible dystopian future may be observed today in the lives of some non-human animals bred for human consumption.[8]

Dystopias and moral value

Since the concept of a dystopia is defined in terms of the value absent from the world so characterized, whether something is or is not a dystopia may vary depending on the moral theory under consideration. On classical utilitarianism, for example, there is an enormous difference in value between worlds optimized for positive experience and a seemingly desirable world where everyone enjoys the quality of life of the most privileged citizens of today’s most prosperous nations. The permanent entrenchment of the latter type of world may thus, on that theory, count as a dystopia, in the sense that most attainable value would have failed to be realized. Conversely, although Bostrom’s “unconscious outsourcers” dystopian scenario would be catastrophic on many plausible moral theories, it may not be so from the perspective of suffering-focused ethics.
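As a rough illustration of this bookkeeping (the symbols below are ours, not Ord’s or Bostrom’s): classical utilitarianism sums wellbeing over all individuals and over time, so a permanently entrenched comfortable world can realize only a vanishing fraction of attainable value.

```latex
V_{\text{total}} \;=\; \sum_{i=1}^{N} \int_{0}^{T} w_i(t)\,\mathrm{d}t,
\qquad
\frac{V_{\text{lock}}}{V_{\text{opt}}} \approx 0
```

Here $N$ is the number of individuals, $w_i(t)$ is individual $i$’s wellbeing at time $t$, $V_{\text{lock}}$ is the value realized under permanent entrenchment, and $V_{\text{opt}}$ the value an optimized world could realize. On classical utilitarianism it is this ratio, not the absolute quality of life, that determines whether a locked-in world counts as a dystopia.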

Further reading

Aird, Michael (2020) Collection of sources related to dystopias and “robust totalitarianism”, Effective Altruism Forum, March 30.
Many additional resources on this topic.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, pp. 153–158.

Related entries

civilizational collapse | existential catastrophe | flourishing futures | global governance | human extinction | totalitarianism | value lock-in

1. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, fig. 5.2.

2. Orwell, George (1949) Nineteen Eighty-Four: A Novel, London: Secker & Warburg, ch. 3.

3. Drescher, Denis (2017) Cause area: Human rights in North Korea, Effective Altruism Forum, November 20.

4. Drescher, Denis (2020) Self-study directions 2020, Impartial Priorities, June 27.

5. Alexander, Scott (2014) Meditations on Moloch, Slate Star Codex, July 30.

6. Bostrom, Nick (2004) The future of human evolution, in Charles Tandy (ed.) Death and Anti-Death: Two Hundred Years after Kant, Fifty Years after Turing, vol. 2, Palo Alto, California: Ria University Press, pp. 339–371.

7. Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press, pp. 172–173.

8. Liu, Yuxi (2019) Evolution “failure mode”: chickens, LessWrong, April 26.

Posts tagged Dystopia

Reducing long-term risks from malevolent actors
David_Althaus · 29 Apr 2020 8:55 UTC · 341 points · 93 comments · 37 min read · EA link

Cause Area: Human Rights in North Korea
Dawn Drescher · 20 Nov 2017 20:52 UTC · 64 points · 12 comments · 20 min read · EA link

Common ground for longtermists
Tobias_Baumann · 29 Jul 2020 10:26 UTC · 83 points · 8 comments · 4 min read · EA link

A New X-Risk Factor: Brain-Computer Interfaces
Jack · 10 Aug 2020 10:24 UTC · 76 points · 12 comments · 42 min read · EA link

The option value argument doesn’t work when it’s most needed
Winston · 24 Oct 2023 19:40 UTC · 125 points · 6 comments · 6 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering
sapphire · 20 Aug 2020 20:23 UTC · 51 points · 0 comments · 3 min read · EA link

The Precipice: a risky review by a non-EA
Fernando Moreno 🔸 · 8 Aug 2020 14:40 UTC · 14 points · 1 comment · 18 min read · EA link

EA reading list: suffering-focused ethics
richard_ngo · 3 Aug 2020 9:40 UTC · 43 points · 3 comments · 1 min read · EA link

First S-Risk Intro Seminar
stefan.torges · 8 Dec 2020 9:23 UTC · 70 points · 2 comments · 1 min read · EA link

Ben Garfinkel: The future of surveillance
EA Global · 8 Jun 2018 7:51 UTC · 18 points · 0 comments · 11 min read · EA link (www.youtube.com)

Notes on Henrich’s “The WEIRDest People in the World” (2020)
MichaelA · 25 Mar 2021 5:04 UTC · 44 points · 4 comments · 3 min read · EA link

Some history topics it might be very valuable to investigate
MichaelA · 8 Jul 2020 2:40 UTC · 91 points · 34 comments · 6 min read · EA link

A relatively atheoretical perspective on astronomical waste
Nick_Beckstead · 6 Aug 2014 0:55 UTC · 9 points · 8 comments · 8 min read · EA link

New Cause Area: Demographic Collapse
Malcolm Collins · 30 Jun 2022 19:38 UTC · −31 points · 27 comments · 34 min read · EA link

Cause prioritization for downside-focused value systems
Lukas_Gloor · 31 Jan 2018 14:47 UTC · 75 points · 10 comments · 48 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes
MichaelA · 15 Jul 2020 12:28 UTC · 81 points · 7 comments · 7 min read · EA link

An Argument for Why the Future May Be Good
Ben_West🔸 · 19 Jul 2017 22:03 UTC · 50 points · 30 comments · 4 min read · EA link

[Question] Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?
MichaelA · 2 Feb 2021 3:52 UTC · 14 points · 21 comments · 1 min read · EA link

Ideological engineering and social control: A neglected topic in AI safety research?
Geoffrey Miller · 1 Sep 2017 18:52 UTC · 17 points · 8 comments · 2 min read · EA link

Classifying sources of AI x-risk
Sam Clarke · 8 Aug 2022 18:18 UTC · 41 points · 4 comments · 3 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios
Milan Weibel · 3 Apr 2023 2:11 UTC · 12 points · 0 comments · 4 min read · EA link

Narration: Reducing long-term risks from malevolent actors
D0TheMath · 15 Jul 2021 16:26 UTC · 23 points · 0 comments · 1 min read · EA link (anchor.fm)

[Review and notes] How Democracy Ends—David Runciman
Ben · 13 Feb 2020 22:30 UTC · 31 points · 1 comment · 5 min read · EA link

[Question] Infohazards: The Future Is Disbelieving Facts?
Prof.Weird · 22 Nov 2020 7:26 UTC · 2 points · 0 comments · 1 min read · EA link

[Question] How worried should I be about a childless Disneyland?
Will Bradshaw · 28 Oct 2019 15:32 UTC · 31 points · 8 comments · 1 min read · EA link

The future of humanity
Dem0sthenes · 1 Sep 2022 22:34 UTC · 1 point · 0 comments · 8 min read · EA link

A website you can share with Christians to get them on board with regulating AI
JonCefalu · 8 Apr 2023 13:36 UTC · −4 points · 8 comments · 1 min read · EA link (jesus-the-antichrist.com)

How democracy ends: a review and reevaluation
richard_ngo · 24 Nov 2018 17:41 UTC · 27 points · 2 comments · 6 min read · EA link (thinkingcomplete.blogspot.com)