
Long-term future

Last edit: 14 Jul 2022 16:43 UTC by Pablo

Work on the long-term future considers the possible ways in which the future of humanity may unfold over long timescales.

Bostrom’s typology of possible scenarios

Nick Bostrom has identified four broad possibilities for the future of humanity.[1]

First, humanity may go prematurely extinct. Since the universe will eventually become inhospitable, extinction is inevitable in the very long run; however, humanity may also die out long before this deadline.

Second, human civilization may plateau, reaching a level of technological development beyond which it never advances further.

Third, human civilization may experience recurrent collapse, undergoing repeated declines or catastrophes that prevent it from moving beyond a certain level of advancement.

Fourth, human civilization may advance so dramatically as to become nearly unrecognizable. Bostrom conceptualizes this scenario as a “posthuman” era in which people differ markedly from present-day humans in their cognitive abilities, population size, body type, sensory or emotional experience, or life expectancy.

Further reading

Baum, Seth D. et al. (2019) Long-term trajectories of human civilization, Foresight, vol. 21, pp. 53–83.

Bostrom, Nick (2009) The future of humanity, in Jan Kyrre Berg Olsen, Evan Selinger & Søren Riis (eds.) New Waves in Philosophy of Technology, London: Palgrave Macmillan, pp. 186–215.

Hanson, Robin (1998) Long-term growth as a sequence of exponential modes, working paper, George Mason University (updated December 2000).

Roodman, David (2020) Modeling the human trajectory, Open Philanthropy, June 15.

Related entries

longtermism | non-humans and the long-term future | space colonization

  1. Bostrom, Nick (2009) The future of humanity, in Jan Kyrre Berg Olsen, Evan Selinger & Søren Riis (eds.) New Waves in Philosophy of Technology, London: Palgrave Macmillan, pp. 186–215.

The Epistemic Challenge to Longtermism (Tarsney, 2020) · MichaelA🔸 · 4 Apr 2021 · 79 points · 27 comments · 2 min read · globalprioritiesinstitute.org
Crucial questions for longtermists · MichaelA🔸 · 29 Jul 2020 · 104 points · 17 comments · 19 min read
Wild animal welfare in the far future · saulius · 8 Jul 2022 · 122 points · 11 comments · 26 min read
The Future Might Not Be So Great · Jacy · 30 Jun 2022 · 142 points · 118 comments · 34 min read · www.sentienceinstitute.org
Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview · Derek Shiller · 3 Nov 2023 · 224 points · 93 comments · 13 min read
How bad would human extinction be? · arvomm · 23 Oct 2023 · 132 points · 25 comments · 18 min read
EA reading list: futurism and transhumanism · richard_ngo · 4 Aug 2020 · 20 points · 2 comments · 1 min read
Introducing the Simon Institute for Longterm Governance (SI) · maxime · 29 Mar 2021 · 116 points · 23 comments · 11 min read
Will we eventually be able to colonize other stars? Notes from a preliminary review · Nick_Beckstead · 22 Jun 2014 · 30 points · 7 comments · 32 min read
Objectives of longtermist policy making · Henrik Øberg Myhre · 10 Feb 2021 · 54 points · 7 comments · 22 min read
Could it be a (bad) lock-in to replace factory farming with alternative protein? · Fai · 10 Sep 2022 · 85 points · 38 comments · 9 min read
Longtermist (especially x-risk) terminology has biasing assumptions · Arepo · 30 Oct 2022 · 70 points · 13 comments · 7 min read
A proposed hierarchy of longtermist concepts · Arepo · 30 Oct 2022 · 38 points · 13 comments · 4 min read
Why I am probably not a longtermist · Denise_Melchin · 23 Sep 2021 · 230 points · 47 comments · 8 min read
Addressing Global Poverty as a Strategy to Improve the Long-Term Future · bshumway · 7 Aug 2020 · 40 points · 18 comments · 16 min read
Thoughts on whether we’re living at the most influential time in history · Buck · 3 Nov 2020 · 178 points · 66 comments · 9 min read
This Can’t Go On · Holden Karnofsky · 3 Aug 2021 · 129 points · 35 comments · 10 min read
[Link post] Are we approaching the singularity? · John G. Halstead · 13 Feb 2021 · 48 points · 7 comments · 1 min read
What Would A Longtermist Flag Look Like? · Cullen 🔸 · 24 Mar 2021 · 33 points · 52 comments · 1 min read
Why the expected numbers of farmed animals in the far future might be huge · Fai · 4 Mar 2022 · 134 points · 29 comments · 16 min read
What If 99% of Humanity Vanished? (A Happier World video) · Jeroen Willems🔸 · 16 Feb 2023 · 16 points · 1 comment · 3 min read
Effects of anti-aging research on the long-term future · Matthew_Barnett · 27 Feb 2020 · 61 points · 33 comments · 4 min read
On future people, looking back at 21st century longtermism · Joe_Carlsmith · 22 Mar 2021 · 102 points · 13 comments · 12 min read
A discussion of Holden Karnofsky’s “Most Important Century” series (Thursday 21 October, 19:00 UK) · peterhartree · 16 Oct 2021 · 13 points · 0 comments · 1 min read
An aspirationally comprehensive typology of future locked-in scenarios · Milan Weibel🔹 · 3 Apr 2023 · 12 points · 0 comments · 4 min read
The Importance of Artificial Sentience · Jamie_Harris · 3 Mar 2021 · 70 points · 10 comments · 11 min read · www.sentienceinstitute.org
Deliberation May Improve Decision-Making · Neil_Dullaghan🔹 · 5 Nov 2019 · 66 points · 12 comments · 39 min read
Longtermism and animal advocacy · Tobias_Baumann · 11 Nov 2020 · 99 points · 8 comments · 4 min read · centerforreducingsuffering.org
What is existential security? · MichaelA🔸 · 1 Sep 2020 · 34 points · 1 comment · 6 min read
Optimistic Resolution of the Fermi Paradox: Eternity in Six Hours & Grabby Aliens · steve6320 · 11 Feb 2021 · 18 points · 2 comments · 9 min read
The Case for Strong Longtermism · Global Priorities Institute · 3 Sep 2019 · 14 points · 1 comment · 3 min read · globalprioritiesinstitute.org
Helping animals or saving human lives in high income countries is arguably better than saving human lives in low income countries? · Vasco Grilo🔸 · 21 Mar 2024 · 12 points · 10 comments · 12 min read
[Question] Will the vast majority of technological progress happen in the longterm future? · Vasco Grilo🔸 · 8 Jul 2023 · 8 points · 0 comments · 2 min read
Prior probability of this being the most important century · Vasco Grilo🔸 · 15 Jul 2023 · 8 points · 2 comments · 2 min read
Why might the future be good? · Paul_Christiano · 27 Feb 2013 · 5 points · 0 comments · 9 min read
Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism · Evan_Gaensbauer · 29 Dec 2021 · 46 points · 1 comment · 2 min read
Shaping Humanity’s Longterm Trajectory · Toby_Ord · 18 Jul 2023 · 171 points · 57 comments · 2 min read · files.tobyord.com
Actually possible: thoughts on Utopia · Joe_Carlsmith · 18 Jan 2021 · 86 points · 5 comments · 13 min read
The expected value of extinction risk reduction is positive · JanB · 9 Dec 2018 · 66 points · 22 comments · 61 min read
Max Tegmark: Effective altruism, existential risk, and existential hope · EA Global · 2 Jun 2017 · 11 points · 0 comments · 1 min read · www.youtube.com
[Question] If someone identifies as a longtermist, should they donate to Founders Pledge’s top climate charities than to GiveWell’s top charities? · BrianTan · 26 Nov 2020 · 25 points · 26 comments · 2 min read
Should We Prioritize Long-Term Existential Risk? · MichaelDickens · 20 Aug 2020 · 28 points · 17 comments · 3 min read
Christian Tarsney: Can we predictably improve the far future? · EA Global · 18 Oct 2019 · 10 points · 0 comments · 1 min read · www.youtube.com
An Argument for Why the Future May Be Good · Ben_West🔸 · 19 Jul 2017 · 50 points · 30 comments · 4 min read
“Disappointing Futures” Might Be As Important As Existential Risks · MichaelDickens · 3 Sep 2020 · 96 points · 18 comments · 25 min read
More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios · Vasco Grilo🔸 · 29 Apr 2023 · 46 points · 39 comments · 13 min read
Off-Earth Governance · EdoArad · 6 Sep 2019 · 18 points · 3 comments · 2 min read
EA reading list: longtermism and existential risks · richard_ngo · 3 Aug 2020 · 35 points · 3 comments · 1 min read
Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible · David Mathers🔸 · 29 May 2023 · 36 points · 4 comments · 5 min read
Orienting towards the long-term future (Joseph Carlsmith) · EA Global · 3 Nov 2017 · 17 points · 0 comments · 10 min read · www.youtube.com
Famine deaths due to the climatic effects of nuclear war · Vasco Grilo🔸 · 14 Oct 2023 · 40 points · 21 comments · 66 min read
What Does a Marginal Grant at LTFF Look Like? Funding Priorities and Grantmaking Thresholds at the Long-Term Future Fund · Linch · 10 Aug 2023 · 175 points · 22 comments · 8 min read
Christian Tarsney on future bias and a possible solution to moral fanaticism · 80000_Hours · 5 May 2021 · 7 points · 0 comments · 113 min read
Prioritization Questions for Artificial Sentience · Jamie_Harris · 18 Oct 2021 · 30 points · 2 comments · 8 min read · www.sentienceinstitute.org
[Question] What is the impact of the Nuclear Ban Treaty? · DC · 29 Nov 2020 · 22 points · 3 comments · 2 min read
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · weeatquince · 5 Sep 2020 · 63 points · 31 comments · 14 min read
X-risks to all life v. to humans · RobertHarling · 3 Jun 2020 · 78 points · 33 comments · 4 min read
[Question] How to find *reliable* ways to improve the future? · Sjlver · 18 Aug 2022 · 53 points · 35 comments · 2 min read
Link: Longtermist Institutional Reform · Dale · 30 Jul 2020 · 29 points · 3 comments · 1 min read
Saving lives in normal times is better to improve the longterm future than doing so in catastrophes? · Vasco Grilo🔸 · 20 Apr 2024 · 11 points · 25 comments · 9 min read
Long Reflection Reading List · Will Aldred · 24 Mar 2024 · 92 points · 7 comments · 14 min read
A Gentle Introduction to Risk Frameworks Beyond Forecasting · pending_survival · 11 Apr 2024 · 81 points · 4 comments · 27 min read
How can we influence the long-term future? · Tobias_Baumann · 6 Mar 2019 · 11 points · 1 comment · 4 min read · s-risks.org
[Question] The last · Visa Om · 19 Oct 2021 · 24 points · 12 comments · 1 min read
On the Value of Advancing Progress · Toby_Ord · 11 Jul 2024 · 119 points · 39 comments · 9 min read
Re: Some thoughts on vegetarianism and veganism · Fai · 25 Feb 2022 · 46 points · 3 comments · 8 min read
Toby Ord: Selected quotations on existential risk · Aaron Gertler 🔸 · 6 Aug 2020 · 32 points · 0 comments · 39 min read · theprecipice.com
Robin Hanson’s Grabby Aliens model explained—part 1 · Writer · 22 Sep 2021 · 57 points · 7 comments · 8 min read · youtu.be
First S-Risk Intro Seminar · stefan.torges · 8 Dec 2020 · 70 points · 2 comments · 1 min read
Joseph Carlsmith: Shaping the far future (lightning talk) · EA Global · 3 Nov 2017 · 9 points · 0 comments · 1 min read · www.youtube.com
Eric Drexler: Paretotopian goal alignment · EA Global · 15 Mar 2019 · 12 points · 0 comments · 10 min read · www.youtube.com
Moral pluralism and longtermism | Sunyshore · Eevee🔹 · 17 Apr 2021 · 26 points · 0 comments · 5 min read · sunyshore.substack.com
Thoughts on “A case against strong longtermism” (Masrani) · MichaelA🔸 · 3 May 2021 · 39 points · 33 comments · 2 min read
Longtermism neglects anti-ageing research · freedomandutility · 12 Aug 2022 · 13 points · 0 comments · 1 min read
Hinges and crises · Jan_Kulveit · 17 Mar 2022 · 72 points · 6 comments · 3 min read
What is going on in the world? · Katja_Grace · 18 Jan 2021 · 103 points · 26 comments · 3 min read · meteuphoric.com
The emerging school of patient longtermism · 80000_Hours · 7 Aug 2020 · 64 points · 11 comments · 3 min read
International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide · philosophytorres · 22 Mar 2021 · −3 points · 1 comment · 1 min read
[Question] How does EA assess the far future? · Peter Sølling · 6 Aug 2020 · 3 points · 0 comments · 1 min read
Risks from Atomically Precise Manufacturing · MichaelA🔸 · 25 Aug 2020 · 29 points · 4 comments · 2 min read · www.openphilanthropy.org
Some EA Forum Posts I’d like to write · Linch · 23 Feb 2021 · 100 points · 10 comments · 5 min read
Christian Tarsney on future bias and a possible solution to moral fanaticism · Pablo · 6 May 2021 · 26 points · 6 comments · 1 min read · 80000hours.org
Stuart Armstrong: The far future of intelligent life across the universe · EA Global · 8 Jun 2018 · 19 points · 0 comments · 12 min read · www.youtube.com
Past and Future Trajectory Changes · N N · 28 Mar 2022 · 32 points · 5 comments · 12 min read · goodoptics.wordpress.com
Characterising utopia · richard_ngo · 2 Jan 2020 · 50 points · 3 comments · 22 min read
Modeling the Human Trajectory (Open Philanthropy) · Aaron Gertler 🔸 · 16 Jun 2020 · 50 points · 4 comments · 2 min read · www.openphilanthropy.org
Database of existential risk estimates · MichaelA🔸 · 15 Apr 2020 · 130 points · 37 comments · 5 min read
[Question] How large can the solar system’s economy get? · WilliamKiely · 1 Jul 2021 · 9 points · 7 comments · 1 min read
Long-Term Future Fund: November 2020 grant recommendations · Habryka · 3 Dec 2020 · 76 points · 5 comments · 14 min read · app.effectivealtruism.org
Is Democracy a Fad? · bgarfinkel · 13 Mar 2021 · 165 points · 36 comments · 18 min read
Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill) · MichaelA🔸 · 2 May 2021 · 30 points · 21 comments · 2 min read
The sense of a start · Gavin · 28 Sep 2022 · 53 points · 0 comments · 5 min read · www.gleech.org
Kardashev for Kindness · Mary Stowers · 11 Jun 2021 · 30 points · 5 comments · 3 min read
Common ground for longtermists · Tobias_Baumann · 29 Jul 2020 · 83 points · 8 comments · 4 min read
Does Economic History Point Toward a Singularity? · bgarfinkel · 2 Sep 2020 · 140 points · 55 comments · 3 min read
How can we reduce s-risks? · Tobias_Baumann · 29 Jan 2021 · 42 points · 3 comments · 1 min read · centerforreducingsuffering.org
Reducing the nearterm risk of human extinction is not astronomically cost-effective? · Vasco Grilo🔸 · 9 Jun 2024 · 20 points · 37 comments · 8 min read
Three journeys for effective altruism · Zachary Robinson🔸 · 22 Oct 2024 · 120 points · 3 comments · 13 min read
[Question] How worried should I be about a childless Disneyland? · Will Bradshaw · 28 Oct 2019 · 31 points · 8 comments · 1 min read
13 Recent Publications on Existential Risk (Jan 2021 update) · HaydnBelfield · 8 Feb 2021 · 7 points · 2 comments · 10 min read
[Question] How should EAs manage their copyrights? · Eevee🔹 · 9 Mar 2021 · 15 points · 5 comments · 2 min read
Permanent Societal Improvements · Larks · 6 Sep 2015 · 11 points · 10 comments · 4 min read
Beyond Astronomical Waste · Wei Dai · 27 Dec 2018 · 25 points · 2 comments · 1 min read · www.lesswrong.com
Helping future researchers to better understand long-term forecasting · gabriel_wagner · 25 Nov 2020 · 2 points · 1 comment · 2 min read
What are the best (brief) resources to introduce EA & longtermism? · Akash · 19 Dec 2021 · 5 points · 4 comments · 1 min read
Scenario Mapping Advanced AI Risk: Request for Participation with Data Collection · Kiliank · 27 Mar 2022 · 14 points · 0 comments · 5 min read
Trish’s Quick takes · Trish · 26 Oct 2022 · 2 points · 5 comments · 1 min read
AGI and Lock-In · Lukas Finnveden · 29 Oct 2022 · 146 points · 20 comments · 10 min read · docs.google.com
Effective vs Altruism · Liat Zvi · 16 Sep 2022 · 2 points · 1 comment · 2 min read
Australians are pessimistic about longterm future (n=1050) · OscarD🔸 · 8 Oct 2022 · 29 points · 3 comments · 1 min read
The Moral Value of the Far Future · Holden Karnofsky · 3 Jul 2014 · 2 points · 0 comments · 8 min read · www.openphilanthropy.org
Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake · Mahendra Prasad · 11 Feb 2021 · 16 points · 0 comments · 1 min read
Value of Querying 100+ People About Humanity’s Future · QubitSwarm99 · 8 Nov 2022 · 5 points · 0 comments · 1 min read
New 3-hour podcast with Anders Sandberg about Grand Futures · Gus Docker · 6 Oct 2020 · 21 points · 1 comment · 1 min read
Visualizations of the significance—persistence—contingency framework · Jakob · 2 Sep 2022 · 27 points · 0 comments · 7 min read
Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making · Global Priorities Institute · 31 May 2020 · 4 points · 0 comments · 3 min read · globalprioritiesinstitute.org
Long-Term Future Fund: Ask Us Anything! · AdamGleave · 3 Dec 2020 · 89 points · 153 comments · 1 min read
Introductory video on safeguarding the long-term future · JulianHazell · 7 Mar 2022 · 23 points · 3 comments · 1 min read
Longtermism in 1888: fermi estimate of heaven’s size. · Jackson Wagner · 25 Dec 2021 · 108 points · 4 comments · 2 min read
Longtermism, risk, and extinction · Richard Pettigrew · 4 Aug 2022 · 73 points · 12 comments · 41 min read
Robin Hanson’s Grabby Aliens model explained—part 2 · Writer · 9 Nov 2021 · 29 points · 1 comment · 13 min read · youtu.be
Announcing the Space Futures Initiative · Carson Ezell · 12 Sep 2022 · 71 points · 3 comments · 2 min read
Response to Torres’ ‘The Case Against Longtermism’ · HaydnBelfield · 8 Mar 2021 · 138 points · 73 comments · 5 min read
Test Your Knowledge of the Long-Term Future · AndreFerretti · 10 Dec 2022 · 22 points · 0 comments · 1 min read
The applicability of transsentientist critical path analysis · Peter Sølling · 11 Aug 2020 · 0 points · 2 comments · 32 min read · www.optimalaltruism.com
Deontology, the Paralysis Argument and altruistic longtermism · William D'Alessandro · 2 Sep 2022 · 33 points · 4 comments · 14 min read
The asymmetry, uncertainty, and the long term · Global Priorities Institute · 30 Sep 2019 · 13 points · 0 comments · 4 min read · globalprioritiesinstitute.org
Summary of Deep Time Reckoning by Vincent Ialenti · vinegar10@gmail.com · 31 Oct 2022 · 10 points · 1 comment · 10 min read
The Case for Space: A Longtermist Alternative to Existential Threat Reduction · Giga · 18 Nov 2020 · 9 points · 5 comments · 2 min read
An open letter to my great grand kids’ great grand kids · Locke · 10 Aug 2022 · 1 point · 0 comments · 13 min read
Legal Priorities Research: A Research Agenda · jonasschuett · 6 Jan 2021 · 58 points · 4 comments · 1 min read
A vision of the future (fictional short-story) · EffAlt · 15 Oct 2022 · 12 points · 0 comments · 2 min read
[Question] Exercise for ‘What could the future hold? And why care?’ · EA Handbook · 18 May 2022 · 9 points · 9 comments · 3 min read
X-risk Mitigation Does Actually Require Longtermism · 𝕮𝖎𝖓𝖊𝖗𝖆 · 13 Nov 2022 · 35 points · 6 comments · 1 min read
Fair Collective Effective Altruism · Jobst Heitzig (vodle.it) · 28 Nov 2022 · 6 points · 1 comment · 5 min read · www.lesswrong.com
Good v. Optimal Futures · RobertHarling · 11 Dec 2020 · 38 points · 10 comments · 6 min read
Longtermists Should Work on AI—There is No “AI Neutral” Scenario · simeon_c · 7 Aug 2022 · 42 points · 62 comments · 6 min read
[Question] For those working on longtermist projects, how do you stay motivated in the short-term? · warrenjordan · 8 Aug 2020 · 13 points · 2 comments · 1 min read
Vignettes Workshop (AI Impacts) · kokotajlod · 15 Jun 2021 · 43 points · 5 comments · 1 min read
Respect for others’ risk attitudes and the long-run future (Andreas Mogensen) · Global Priorities Institute · 2 Dec 2022 · 4 points · 1 comment · 3 min read
The Terminology of Artificial Sentience · Janet Pauketat · 28 Nov 2021 · 29 points · 0 comments · 1 min read · www.sentienceinstitute.org
How to Survive the End of the Universe · avturchin · 28 Nov 2019 · 54 points · 11 comments · 33 min read
Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model) · Yassin Alaya · 12 Aug 2021 · 13 points · 4 comments · 1 min read
AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost] · Kiliank · 9 May 2022 · 17 points · 2 comments · 8 min read
[Question] What analysis has been done of space colonization as a cause area? · Eli Rose · 9 Oct 2019 · 14 points · 8 comments · 1 min read
Data Publication for the 2021 Artificial Intelligence, Morality, and Sentience (AIMS) Survey · Janet Pauketat · 24 Mar 2022 · 21 points · 0 comments · 3 min read · www.sentienceinstitute.org
[Crosspost] Relativistic Colonization · itaibn · 31 Dec 2020 · 8 points · 7 comments · 4 min read
How should we value various possible long-run outcomes relative to each other—answering Holden Karnofsky’s question? · Omnizoid · 27 Feb 2022 · 4 points · 0 comments · 13 min read
Announcing Future Forum—Apply Now · isaakfreeman · 6 Jul 2022 · 88 points · 11 comments · 4 min read
The NPT: Learning from a Longtermist Success [Links!] · DannyBressler · 20 May 2021 · 66 points · 6 comments · 2 min read
Solving alignment isn’t enough for a flourishing future · mic · 2 Feb 2024 · 27 points · 0 comments · 22 min read · papers.ssrn.com
America & the Shape of the Far Future · Aidan Fitzsimons · 9 Jan 2023 · 8 points · 1 comment · 12 min read
Stable totalitarianism: an overview · 80000_Hours · 29 Oct 2024 · 35 points · 1 comment · 20 min read · 80000hours.org
[Question] A $1 million dollar prize to incentivize global participation in a video competition to crowdsource an attractive plan for the long-term development of civilization · Nathaniel Ryan · 28 Jan 2023 · −1 points · 0 comments · 1 min read
Forecasting Our World in Data: The Next 100 Years · AlexLeader · 1 Feb 2023 · 97 points · 8 comments · 66 min read · www.metaculus.com
Preserving our heritage: Building a movement and a knowledge ark for current and future generations · rnk8 · 30 Nov 2023 · −9 points · 0 comments · 12 min read
Agnes Callard on our future, the human quest, and finding purpose · Tobias Häberli · 22 Mar 2023 · 2 points · 0 comments · 21 min read
Why we may expect our successors not to care about suffering · Jim Buhler · 10 Jul 2023 · 63 points · 31 comments · 8 min read
What values will control the Future? Overview, conclusion, and directions for future work · Jim Buhler · 18 Jul 2023 · 25 points · 0 comments · 2 min read
Investigating the Long Reflection · Yannick_Muehlhaeuser · 24 Jul 2023 · 38 points · 3 comments · 12 min read
Future technological progress does NOT correlate with methods that involve less suffering · Jim Buhler · 1 Aug 2023 · 60 points · 12 comments · 4 min read
Predicting what future people value: A terse introduction to Axiological Futurism · Jim Buhler · 24 Mar 2023 · 62 points · 10 comments · 2 min read
What the Moral Truth might be makes no difference to what will happen · Jim Buhler · 9 Apr 2023 · 40 points · 9 comments · 3 min read
Tempi eccezionali (“Exceptional times”) · EA Italy · 17 Jan 2023 · 1 point · 0 comments · 3 min read · altruismoefficace.it
[Opzionale] Perché probabilmente non sono una lungoterminista (“[Optional] Why I am probably not a longtermist”) · EA Italy · 17 Jan 2023 · 1 point · 0 comments · 8 min read
[Opzionale] Il lungoterminismo e l’attivismo per gli animali (“[Optional] Longtermism and animal advocacy”) · EA Italy · 17 Jan 2023 · 1 point · 0 comments · 4 min read
Esercizio per ‘Che cosa potrebbe riservare il futuro? E perché dovrebbe importarci?’ (45 min.) (“Exercise for ‘What could the future hold? And why care?’, 45 min.”) · EA Italy · 17 Jan 2023 · 1 point · 0 comments · 3 min read
A Counterargument to the Argument of Astronomical Waste · Markus Bredberg · 24 Apr 2023 · 12 points · 0 comments · 4 min read
Call for submissions: Choice of Futures survey questions · c.trout · 30 Apr 2023 · 11 points · 0 comments · 1 min read
Implications of evidential cooperation in large worlds · Lukas Finnveden · 23 Aug 2023 · 79 points · 1 comment · 1 min read · lukasfinnveden.substack.com
Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition · Adrià Moret · 1 Dec 2023 · 39 points · 2 comments · 42 min read
The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have? · Jim Buhler · 6 May 2023 · 47 points · 12 comments · 4 min read
Moral Spillover in Human-AI Interaction · Katerina Manoli · 5 Jun 2023 · 17 points · 1 comment · 13 min read
What We Owe The Future: A Buried Essay · haven_worsham · 20 Jun 2023 · 19 points · 0 comments · 16 min read
Yip Fai Tse on animal welfare & AI safety and long termism · Karthik Palakodeti · 22 Jun 2023 · 47 points · 0 comments · 1 min read
Summary: Tiny Probabilities and the Value of the Far Future (Petra Kosonen) · Nicholas Kruus🔸 · 17 Feb 2024 · 7 points · 1 comment · 4 min read
IFRC creative competition: product or service from future autonomous weapons systems and emerging digital risks · Devin Lam · 21 Jul 2024 · 9 points · 0 comments · 1 min read · solferinoacademy.com
Is the Far Future Irrelevant for Moral Decision-Making? · Tristan D · 1 Oct 2024 · 35 points · 31 comments · 2 min read · www.sciencedirect.com
#181 – The science that could keep us healthy in our 80s and beyond (Laura Deming on the 80,000 Hours Podcast) · 80000_Hours · 6 Mar 2024 · 10 points · 0 comments · 12 min read
Tiny humans: the most promising new cause candidate? · akash 🔸 · 2 Apr 2024 · 24 points · 7 comments · 4 min read
Extinction risk and longtermism: a broader critique of Thorstad · Matthew Rendall · 21 Apr 2024 · 25 points · 5 comments · 3 min read
Differential knowledge interconnection · Roman Leventov · 12 Oct 2024 · 3 points · 1 comment · 1 min read
New Working Paper Series of the Legal Priorities Project · Legal Priorities Project · 18 Oct 2021 · 60 points · 0 comments · 9 min read