Long reflection


The long reflection is a hypothesized period of time during which humanity works out how best to realize its long-term potential.

Some effective altruists, including Toby Ord and William MacAskill, have argued that, if humanity succeeds in eliminating existential risk or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project of arranging the universe’s resources in accordance with its values. Instead, it ought to spend considerable time figuring out what is in fact of value: “centuries (or more)”,[1] “perhaps tens of thousands of years”,[2] “thousands or millions of years”,[3] or “[p]erhaps… a million years”.[4] The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory: it follows an initial stage of existential security, in which existential risk is drastically reduced, and precedes a final stage in which humanity’s potential is fully realized.[1]

Criticism

The idea of a long reflection has been criticized on the grounds that virtually eliminating all existential risk will almost certainly require making a variety of large-scale, irreversible decisions (related to space colonization, global governance, cognitive enhancement, and so on), which are precisely the decisions meant to be discussed during the long reflection.[5][6] Since there are pervasive and inescapable tradeoffs between reducing existential risk and retaining moral option value, it may be argued that it does not make sense to frame humanity’s long-term strategic picture as consisting of two distinct stages, with one taking precedence over the other.

Further reading

Aird, Michael (2020) Collection of sources that are highly relevant to the idea of the Long Reflection, Effective Altruism Forum, June 20.
Many additional resources on this topic.

Wiblin, Robert & Keiran Harris (2018) Our descendants will probably see us as moral monsters. What should we do about that?, 80,000 Hours, January 19.
Interview with William MacAskill about the long reflection and other topics.

Related entries

dystopia | existential risk | existential security | long-term future | longtermism | longtermist institutional reform | moral uncertainty | normative ethics | value lock-in

  1. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

  2. Greaves, Hilary et al. (2019) A research agenda for the Global Priorities Institute, Oxford.

  3. Dai, Wei (2019) The argument from philosophical difficulty, LessWrong, February 9.

  4. William MacAskill, in Perry, Lucas (2018) AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill, AI Alignment podcast, September 17.

  5. Stocker, Felix (2020) Reflecting on the long reflection, Felix Stocker’s Blog, August 14.

  6. Hanson, Robin (2021) ‘Long reflection’ is crazy bad idea, Overcoming Bias, October 20.

Cause prioritization for downside-focused value systems
Lukas_Gloor, 31 Jan 2018 14:47 UTC
75 points, 10 comments, 49 min read, EA link

Quotes about the long reflection
MichaelA, 5 Mar 2020 7:48 UTC
55 points, 14 comments, 13 min read, EA link

Crucial questions for longtermists
MichaelA, 29 Jul 2020 9:39 UTC
102 points, 17 comments, 14 min read, EA link

New book: Moral Uncertainty by MacAskill, Ord & Bykvist
frankieaw, 10 Sep 2020 16:22 UTC
85 points, 2 comments, 1 min read, EA link

Response to Torres’ ‘The Case Against Longtermism’
HaydnBelfield, 8 Mar 2021 18:09 UTC
137 points, 73 comments, 5 min read, EA link

Robin Hanson on the Long Reflection
Stefan_Schubert, 3 Oct 2021 16:42 UTC
73 points, 29 comments, 1 min read, EA link (www.overcomingbias.com)

Research vs. non-research work to improve the world: In defense of more research and reflection [linkpost]
Magnus Vinding, 20 May 2022 16:25 UTC
31 points, 2 comments, 1 min read, EA link (magnusvinding.com)

The Long Reflection as the Great Stagnation
Larks, 1 Sep 2022 20:55 UTC
43 points, 2 comments, 8 min read, EA link

What the Moral Truth might be makes no difference to what will happen
Jim Buhler, 9 Apr 2023 17:43 UTC
40 points, 9 comments, 3 min read, EA link

Investigating the Long Reflection
Yannick_Muehlhaeuser, 24 Jul 2023 16:26 UTC
29 points, 3 comments, 12 min read, EA link

The option value argument doesn’t work when it’s most needed
Winston, 24 Oct 2023 19:40 UTC
122 points, 6 comments, 6 min read, EA link

Beyond Maxipok — good reflective governance as a target for action
Owen Cotton-Barratt, 15 Mar 2024 22:22 UTC
30 points, 2 comments, 7 min read, EA link

AI strategy given the need for good reflection
Owen Cotton-Barratt, 18 Mar 2024 0:48 UTC
31 points, 1 comment, 5 min read, EA link

Long Reflection Reading List
Will Aldred, 24 Mar 2024 16:27 UTC
72 points, 6 comments, 13 min read, EA link