Flourishing futures

Flourishing futures, utopias, ideal futures, or simply (highly) positive futures are expressions used to describe the extremely good forms that the long-term future could assume.

There are several reasons why it could be important to consider what types of flourishing future are possible, how good each would be, how likely each is, and what would make these futures more or less likely.

Further reading

Bostrom, Nick (2008) Letter from utopia, Studies in Ethics, Law, and Technology, vol. 2.

Cotton-Barratt, Owen & Toby Ord (2015) Existential risk and existential hope: Definitions, Technical Report #2015-1, Future of Humanity Institute, University of Oxford.

LessWrong (2009) Fun theory, LessWrong Wiki, June 25.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, chapter 8, London: Bloomsbury Publishing.

Pearce, David (1995) The Hedonistic Imperative, BLTC Research (updated 2007).

Sandberg, Anders (2020) Post scarcity civilizations & cognitive enhancement, Foresight Institute, September 4.

Wiblin, Robert & Keiran Harris (2018) The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks, 80,000 Hours, February 27.

Related entries

dystopia | existential security | Future of Humanity Institute | Future of Life Institute | hedonium | hellish existential catastrophe | Invincible Wellbeing | long reflection | long-term future | longtermism | motivational | transhumanism | welfare biology

Actually possible: thoughts on Utopia
Joe_Carlsmith, 18 Jan 2021 8:27 UTC · 86 points · 5 comments · 13 min read · EA link

Characterising utopia
richard_ngo, 2 Jan 2020 0:24 UTC · 50 points · 3 comments · 22 min read · EA link

Long Reflection Reading List
Will Aldred, 24 Mar 2024 16:27 UTC · 92 points · 7 comments · 14 min read · EA link

FLI launches Worldbuilding Contest with $100,000 in prizes
ggilgallon, 17 Jan 2022 13:54 UTC · 87 points · 55 comments · 6 min read · EA link

My vision of a good future, part I
Jeffrey Ladish, 6 Jul 2022 1:23 UTC · 34 points · 3 comments · 9 min read · EA link

Increasing existential hope as an effective cause?
Owen Cotton-Barratt, 10 Jan 2015 19:55 UTC · 10 points · 15 comments · 1 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios
Milan Weibel🔹, 3 Apr 2023 2:11 UTC · 12 points · 0 comments · 4 min read · EA link

David Pearce: Abolitionist bioethics
EA Global, 28 Aug 2015 16:14 UTC · 17 points · 0 comments · 1 min read · EA link (www.youtube.com)

Contest—A New Term For “Eucatastrophe”
Davidmanheim, 17 Feb 2022 20:48 UTC · 21 points · 10 comments · 1 min read · EA link

Beyond a better world
Davidmanheim, 14 Dec 2022 10:21 UTC · 11 points · 6 comments · 4 min read · EA link (progressforum.org)

A full syllabus on longtermism
jtm, 5 Mar 2021 22:57 UTC · 110 points · 13 comments · 8 min read · EA link

[Question] Which are the best examples/resources of positive futures you can think of? E.g something like Max Tegmark’s utopias in Life 3.0
elteerkers, 1 Nov 2022 17:13 UTC · 17 points · 8 comments · 2 min read · EA link

Existential Hope Day, satellite to EAG Bay Area, February 27, 2023, in SF
elteerkers, 10 Jan 2023 9:44 UTC · 9 points · 0 comments · 1 min read · EA link

Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible
David Mathers🔸, 29 May 2023 11:16 UTC · 36 points · 4 comments · 5 min read · EA link

FLI podcast series, “Imagine A World”, about aspirational futures with AGI
Jackson Wagner, 13 Oct 2023 16:03 UTC · 18 points · 0 comments · 4 min read · EA link

Humanities Research Ideas for Longtermists
Lizka, 9 Jun 2021 4:39 UTC · 151 points · 13 comments · 13 min read · EA link

My Ordinary Life: Improvements Since the 1990s
gwern, 28 Apr 2018 20:46 UTC · 38 points · 2 comments · 4 min read · EA link

[Fiction] Improved Governance on the Critical Path to AI Alignment by 2045.
Jackson Wagner, 18 May 2022 15:50 UTC · 20 points · 1 comment · 12 min read · EA link

Reflections on Star Trek Strange New Worlds S1 Episode 1
ben.smith, 2 Jun 2022 4:22 UTC · 14 points · 1 comment · 4 min read · EA link

Here are the finalists from FLI’s $100K Worldbuilding Contest
Jackson Wagner, 6 Jun 2022 18:42 UTC · 44 points · 5 comments · 2 min read · EA link

Eric Drexler: Paretotopian goal alignment
EA Global, 15 Mar 2019 14:51 UTC · 14 points · 0 comments · 10 min read · EA link (www.youtube.com)

[Question] The positive case for a focus on achieving safe AI?
vipulnaik, 25 Jun 2021 4:01 UTC · 41 points · 1 comment · 1 min read · EA link

“Music we lack the ears to hear”
Ben, 19 Apr 2020 14:23 UTC · 43 points · 1 comment · 3 min read · EA link

Nick Bostrom’s new book, “Deep Utopia”, is out today

peterhartree, 27 Mar 2024 11:23 UTC · 105 points · 6 comments · 1 min read · EA link (nickbostrom.com)

Superintelligent AI is necessary for an amazing future, but far from sufficient
So8res, 31 Oct 2022 21:16 UTC · 35 points · 5 comments · 1 min read · EA link

Chaining Retroactive Funders to Borrow Against Unlikely Utopias
Dawn Drescher, 19 Apr 2022 18:25 UTC · 24 points · 4 comments · 9 min read · EA link (impactmarkets.substack.com)

[3-hour podcast]: Joseph Carlsmith on longtermism, utopia, the computational power of the brain, meta-ethics, illusionism and meditation
Gus Docker, 27 Jul 2021 13:18 UTC · 34 points · 2 comments · 1 min read · EA link

[Question] Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes
Kiliank, 15 Apr 2022 1:23 UTC · 3 points · 3 comments · 1 min read · EA link

Possible directions in AI ideal governance research
RoryG, 10 Aug 2022 8:36 UTC · 5 points · 0 comments · 3 min read · EA link

What if AI development goes well?
RoryG, 3 Aug 2022 8:57 UTC · 25 points · 7 comments · 12 min read · EA link

Dario Amodei — Machines of Loving Grace
Matrice Jacobine, 11 Oct 2024 21:39 UTC · 66 points · 0 comments · 1 min read · EA link (darioamodei.com)

[Feedback Request] Hypertext Fiction Piece on Existential Hope
Miranda_Zhang, 30 May 2021 15:44 UTC · 35 points · 2 comments · 1 min read · EA link

Max Tegmark — The AGI Entente Delusion
Matrice Jacobine, 13 Oct 2024 17:42 UTC · 0 points · 1 comment · 1 min read · EA link (www.lesswrong.com)

[Creative Writing Contest] An Autumn’s Day, An Angel
Conor McCammon, 30 Oct 2021 4:57 UTC · 1 point · 0 comments · 14 min read · EA link

Introducing the Basic Post-scarcity Map
postscarcitymap, 9 Oct 2022 6:53 UTC · 10 points · 0 comments · 1 min read · EA link

The case to abolish the biology of suffering as a longtermist action
Gaetan_Selle, 21 May 2022 8:51 UTC · 37 points · 8 comments · 4 min read · EA link