The long reflection is a hypothesized period of time during which humanity works out how best to realize its long-term potential.
Some effective altruists, including Toby Ord and William MacAskill, have argued that, if humanity succeeds in eliminating existential risk or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project (such as space colonization) of arranging the universe’s resources in accordance with its values, but ought instead to spend considerable time—“centuries (or more)” (Ord 2020), “perhaps tens of thousands of years” (Greaves et al. 2019), “thousands or millions of years” (Dai 2019), “[p]erhaps… a million years” (MacAskill, in Perry 2018)—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory: it follows an initial stage of existential security, in which existential risk is drastically reduced, and is followed by a final stage in which humanity’s potential is fully realized (Ord 2020).
Aird, Michael (2020) Collection of sources that are highly relevant to the idea of the Long Reflection, Effective Altruism Forum, June 20.
A collection of additional resources on this topic.
Dai, Wei (2019) The argument from philosophical difficulty, LessWrong, February 9.
Greaves, Hilary et al. (2019) A research agenda for the Global Priorities Institute, Oxford.
Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.
Perry, Lucas (2018) AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill, AI Alignment podcast, September 17.
Interview with William MacAskill about moral uncertainty and other topics.
Wiblin, Robert & Keiran Harris (2018) Our descendants will probably see us as moral monsters. What should we do about that?, 80,000 Hours, January 19.
Interview with William MacAskill about the long reflection and other topics.