[Question] Odds of recovering values after collapse?

Question

Let’s say we roll the dice 100 times with respect to values. In other words, civilization collapses in 100 worlds, each very similar to our current world, and full tech recovery follows collapse in all 100 of them.

In how many of these 100 worlds do you think that, relative to pre-collapse humanity, the post-recovery version of humanity has:

  • worse values?

  • similar values?

  • better values?

I encourage the reader to try answering the question before looking at the comments section, so as not to become anchored.
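For readers who prefer probabilities to counts of worlds, here is one rough translation of the question, where w, s, and b are hypothetical counts I’m introducing purely for illustration (with w + s + b = 100):

$$
P(\text{worse values} \mid \text{collapse, tech recovery}) \approx \tfrac{w}{100}, \qquad
P(\text{similar values} \mid \cdot) \approx \tfrac{s}{100}, \qquad
P(\text{better values} \mid \cdot) \approx \tfrac{b}{100}
$$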

Context

Components of recovery

It seems to me that there are two broad components to recovery following civilizational collapse:

  1. P(Tech Recovery|Collapse)

    • i.e., probability of tech recovery given collapse

    • where I define “tech recovery” as scientific, technological, and economic recovery

  2. P(Values Recovery|Tech Recovery)

    • i.e., probability of values recovery given tech recovery

    • where I define “values recovery” as recovery of political systems and values systems

      • (where “good” on the values axis would be things like democracy, individualism, equality, and secularism, and “bad” would be things like totalitarianism)

It also seems to me that P(Tech Recovery|Collapse) ≈ 1, which is why the question I’ve asked is essentially “P(Values Recovery|Tech Recovery) = ?”, just in a little more detail.
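To make that explicit, under the simplifying assumption that values recovery (as I’ve defined it) can only happen in worlds that also achieve tech recovery:

$$
P(\text{Values Recovery} \mid \text{Collapse}) = P(\text{Values Recovery} \mid \text{Tech Recovery}, \text{Collapse}) \cdot P(\text{Tech Recovery} \mid \text{Collapse}) \approx P(\text{Values Recovery} \mid \text{Tech Recovery})
$$

since the second factor is, by my estimate, close to 1, and the first factor is what the question above asks about.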

Existing discussion

I ask this question about values recovery because there’s less discussion of it than I would expect. Toby Ord, in The Precipice, mentions values only briefly, in his “Dystopian Scenarios” section:

A second kind of unrecoverable dystopia is a stable civilization that is desired by few (if any) people. [...] Well-known examples include market forces creating a race to the bottom, Malthusian population dynamics pushing down the average quality of life, or evolution optimizing us toward the spreading of our genes, regardless of the effects on what we value. These are all dynamics that push humanity toward a new equilibrium, where these forces are finally in balance. But there is no guarantee this equilibrium will be good. (p. 152)

[...]

The third possibility is the “desired dystopia.” [...] Some plausible examples include: [...] worlds that forever fail to recognize some key form of harm or injustice (and thus perpetuate it blindly), worlds that lock in a single fundamentalist religion, and worlds where we deliberately replace ourselves with something that we didn’t realize was much less valuable (such as machines incapable of feeling). (pp. 153-154)

Luisa Rodriguez, who has produced arguably the best work on civilizational collapse (see “What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?”), also touches on values only very briefly:

Values is the other one. Yeah. Making sure that if we do last for a really long time, we don’t do so with really horrible values or that we at least don’t miss out on some amazing ones. (Rodriguez, Wiblin & Harris, 2021, 2:55:00-2:55:10)

Nick Beckstead and Michael Aird come the closest, as far as I’ve seen, to pointing to the question of values recovery. Beckstead (2015):

  • Negative cultural trajectory: It seems possible that just as some societies reinforce openness, toleration, and equality, other societies might reinforce alternative sets of values. [...] Especially if culture continues to become increasingly global, it may become easier for one kind of culture to dominate the world. A culture opposed to open society values, or otherwise problematic for utilitarian-type values, could permanently take root. Or, given certain starting points, cultural development might not inevitably follow an upward path, but instead explore a (from a utilitarian-type perspective) suboptimal region of the space of possible cultures. Even if civilization reaches technological maturity and colonizes the stars, this kind of failure could limit humanity’s long-term potential.

Aird (2021):

  • My main reasons for concern about such [civilizational collapse] events was in any case not that they might fairly directly lead to extinction

    • Rather, it was that such events might:

      • [...]

      • Lead to “unrecoverable dystopia”

        • Meaning any scenario in which humanity survives and regains industrial civilization, but with substantially less good outcomes than could’ve been achieved. One of many ways this could occur is negative changes in values.

(emphasis added)

Clarifications

  • Although collapse is difficult to define, I’d like to avoid a Loki’s Wager, so for the purposes of this question I’ll take “90% of the world’s population dies” as what’s meant by collapse.

    • (I’m aware that collapse can also be defined, for example, in terms of disaggregation of civilization, or in terms of x probability of dipping below the minimum viable population y years after the collapse event.)

  • The question of worse/similar/better values is an overall, net-effect question. For instance, maybe there’d be a positive effect on value x but a negative effect on value y. I’m taking “values” to mean the overall sum of values, and I leave it up to the reader to decide which values they count and how much weight they give to each.

  • What I’m really pointing to with the question is something like, “How likely is it that civilizational collapse would change the long-run potential of humanity, and in which direction?” Which is to say, the question is intended to be answered through a long-run lens. If the reader thinks, for example, that in world z values in the couple hundred centuries after collapse would be worse, but would converge to the pre-collapse default trajectory in the long run, then the reader’s answer for world z should be “similar values”.

Acknowledgements

This question was inspired by conversations with Haydn Belfield and Hannah Erlebach (though I’m not certain both would endorse the full version of my question).