Flourishing futures, utopias, ideal futures, or simply (highly) positive futures are expressions used to describe the extremely good forms that the long-term future could assume.
It could be important to consider what types of flourishing futures are possible, how good each would be, how likely each is, and what would make these futures more or less likely. Reasons why this might be important include the following:
A better understanding of how positive the future might be or is likely to be is relevant to the question of how much to prioritise reducing existential risks.
A better understanding of how good and likely various flourishing futures are, and what would make them more or less likely, could aid in generating, prioritising among, and implementing longtermist interventions.
Having clearer pictures of how the future might go extremely well could aid in building support for work to reduce existential risks.
A better understanding of what futures should be steered towards might aid in working out which scenarios might constitute unrecoverable dystopias or unrecoverable collapses (i.e., existential catastrophes other than extinction).
Further reading
Bostrom, Nick (2008) Letter from utopia, Studies in Ethics, Law, and Technology, vol. 2.
Cotton-Barratt, Owen & Toby Ord (2015) Existential risk and existential hope: Definitions, Technical Report #2015-1, Future of Humanity Institute, University of Oxford.
LessWrong (2009) Fun theory, LessWrong Wiki, June 25.
Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, chapter 8, London: Bloomsbury Publishing.
Pearce, David (1995) The Hedonistic Imperative, BLTC Research (updated 2007).
Sandberg, Anders (2020) Post scarcity civilizations & cognitive enhancement, Foresight Institute, September 4.
Wiblin, Robert & Keiran Harris (2018) The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks, 80,000 Hours, February 27.
Related entries
dystopia | existential security | Future of Humanity Institute | Future of Life Institute | hedonium | hellish existential catastrophe | Invincible Wellbeing | long reflection | long-term future | longtermism | motivational | transhumanism | welfare biology
Things that could maybe be done in future:
Expand this entry by drawing on posts tagged Fun Theory on LessWrong, and/or crosspost some here and give them this tag
Expand this entry by drawing on the “AI Ideal Governance” section of the GovAI research agenda, and/or crosspost that agenda here and give it this tag
Expand this entry by drawing on the Bostrom and Ord sources mentioned in Further reading
Draw on this shortform of mine, and particularly the following paragraph
Also draw on the paper Existential Risk and Existential Hope: Definitions and/or other discussion related to “existential hope”. However, that specific term often seems to be used quite differently from how the paper defined it, so I’d favour either avoiding the term (just discussing the concepts and citing the paper) or explicitly noting that these two distinct uses occur in different places.
EDIT: One more related concept is “global upside possibilities”.
I see what I’ve put here as a starting point. There are various reasons one might want to change it, such as:
Maybe the bullet point style isn’t what the EA Wiki should aim for
Maybe a different name would be better
What I’ve got here is essentially my own take, based loosely on various sources but not modelled very directly on any of them
You can see my original thinking for this entry here.
Thank you for creating this! As a general observation, I think it’s perfectly fine to go ahead and add a new tag or content even if there are still uncertainties one would like to resolve or improvements one would like to make. In other words, “starting point” entries are welcome.
Yeah, that policy definitely sounds good, and I already assumed it was the case. I guess what I should’ve said is that here I’m more uncertain about what the right name, scope, and content would be than I am for the average entry I create.
So sort of “more starting-point-y than average” in terms of whether the current content should be there in the current way. (Though many other entries I make are more starting-point-y than this one in terms of them having almost no content.)