Long-term future

The study of the long-term future examines the possible ways in which the future of humanity may unfold over long timescales.

Bostrom’s typology of possible scenarios

Nick Bostrom has identified four broad possibilities for the future of humanity.[1]

First, humans may go prematurely extinct. Since the universe will eventually become inhospitable, extinction is inevitable in the very long run. However, it is also plausible that humanity will die out long before this deadline.

Second, human civilization may plateau, reaching a level of technological advancement beyond which no further progress is feasible.

Third, human civilization may experience recurrent collapse, undergoing repeated declines or catastrophes that prevent it from moving beyond a certain level of advancement.

Fourth, human civilization may advance so significantly as to become nearly unrecognizable. Bostrom conceptualizes this scenario as a “posthuman” era in which humanity differs radically from the present in cognitive abilities, population size, body type, sensory or emotional experience, or life expectancy.

Further reading

Baum, Seth D. et al. (2019) Long-term trajectories of human civilization, Foresight, vol. 21, pp. 53–83.

Bostrom, Nick (2009) The future of humanity, in Jan Kyrre Berg Olsen, Evan Selinger & Søren Riis (eds.) New Waves in Philosophy of Technology, London: Palgrave Macmillan, pp. 186–215.

Hanson, Robin (1998) Long-term growth as a sequence of exponential modes, working paper, George Mason University (updated December 2000).

Roodman, David (2020) Modeling the human trajectory, Open Philanthropy, June 15.

Related entries

longtermism | non-humans and the long-term future | space colonization

1. Bostrom, Nick (2009) The future of humanity, in Jan Kyrre Berg Olsen, Evan Selinger & Søren Riis (eds.) New Waves in Philosophy of Technology, London: Palgrave Macmillan, pp. 186–215.

The Epistemic Challenge to Longtermism (Tarsney, 2020) (MichaelA, 4 Apr 2021 3:09 UTC, 75 points, 28 comments, 1 min read, globalprioritiesinstitute.org)
Crucial questions for longtermists (MichaelA, 29 Jul 2020 9:39 UTC, 86 points, 17 comments, 14 min read)
The Future Might Not Be So Great (Jacy, 30 Jun 2022 13:01 UTC, 129 points, 118 comments, 32 min read, www.sentienceinstitute.org)
Wild animal welfare in the far future (saulius, 8 Jul 2022 14:02 UTC, 113 points, 11 comments, 26 min read)
EA reading list: futurism and transhumanism (richard_ngo, 4 Aug 2020 14:29 UTC, 20 points, 2 comments, 1 min read)
Introducing the Simon Institute for Longterm Governance (SI) (maxime, 29 Mar 2021 18:10 UTC, 116 points, 23 comments, 11 min read)
Will we eventually be able to colonize other stars? Notes from a preliminary review (Nick_Beckstead, 22 Jun 2014 18:19 UTC, 29 points, 7 comments, 32 min read)
Objectives of longtermist policy making (Henrik Øberg Myhre, 10 Feb 2021 18:26 UTC, 54 points, 7 comments, 22 min read)
[Crosspost] Reducing Risks of Astronomical Suffering: A Neglected Priority (Bob Jacobs, 14 Sep 2016 15:21 UTC, 46 points, 1 comment, 12 min read, longtermrisk.org)
Could it be a (bad) lock-in to replace factory farming with alternative protein? (Fai, 10 Sep 2022 16:24 UTC, 83 points, 37 comments, 9 min read)
Addressing Global Poverty as a Strategy to Improve the Long-Term Future (bshumway, 7 Aug 2020 6:27 UTC, 40 points, 18 comments, 16 min read)
Thoughts on whether we’re living at the most influential time in history (Buck, 3 Nov 2020 4:07 UTC, 179 points, 65 comments, 9 min read)
[Link post] Are we approaching the singularity? (John G. Halstead, 13 Feb 2021 11:04 UTC, 48 points, 7 comments, 1 min read)
What Would A Longtermist Flag Look Like? (Cullen_OKeefe, 24 Mar 2021 5:40 UTC, 32 points, 53 comments, 2 min read)
This Can’t Go On (Holden Karnofsky, 3 Aug 2021 15:53 UTC, 115 points, 29 comments, 10 min read)
A proposed hierarchy of longtermist concepts (Arepo, 30 Oct 2022 16:26 UTC, 33 points, 13 comments, 4 min read)
Longtermist terminology has biasing assumptions (Arepo, 30 Oct 2022 16:26 UTC, 60 points, 13 comments, 7 min read)
Deliberation May Improve Decision-Making (Neil_Dullaghan, 5 Nov 2019 0:34 UTC, 64 points, 12 comments, 43 min read)
Effects of anti-aging research on the long-term future (Matthew_Barnett, 27 Feb 2020 22:42 UTC, 61 points, 33 comments, 4 min read)
The Importance of Artificial Sentience (Jamie_Harris, 3 Mar 2021 17:17 UTC, 64 points, 10 comments, 12 min read, www.sentienceinstitute.org)
On future people, looking back at 21st century longtermism (Joe_Carlsmith, 22 Mar 2021 8:21 UTC, 101 points, 13 comments, 12 min read)
Why I am probably not a longtermist (Denise_Melchin, 23 Sep 2021 17:24 UTC, 186 points, 48 comments, 8 min read)
A discussion of Holden Karnofsky’s “Most Important Century” series (Thursday 21 October, 19:00 UK) (peterhartree, 16 Oct 2021 20:50 UTC, 13 points, 1 comment, 1 min read)
Why the expected numbers of farmed animals in the far future might be huge (Fai, 4 Mar 2022 19:59 UTC, 107 points, 22 comments, 16 min read)
Common ground for longtermists (Tobias_Baumann, 29 Jul 2020 10:26 UTC, 75 points, 8 comments, 4 min read)
Database of existential risk estimates (MichaelA, 15 Apr 2020 12:43 UTC, 120 points, 36 comments, 5 min read)
Off-Earth Governance (EdoArad, 6 Sep 2019 19:26 UTC, 18 points, 3 comments, 2 min read)
EA reading list: longtermism and existential risks (richard_ngo, 3 Aug 2020 9:52 UTC, 35 points, 3 comments, 1 min read)
The expected value of extinction risk reduction is positive (JanBrauner, 9 Dec 2018 8:00 UTC, 55 points, 22 comments, 39 min read)
Link: Longtermist Institutional Reform (Dale, 30 Jul 2020 20:36 UTC, 29 points, 3 comments, 1 min read)
[Question] How does EA assess the far future? (Peter Sølling, 6 Aug 2020 12:42 UTC, 3 points, 0 comments, 1 min read)
Toby Ord: Selected quotations on existential risk (Aaron Gertler, 6 Aug 2020 17:41 UTC, 31 points, 0 comments, 39 min read, theprecipice.com)
The emerging school of patient longtermism (80000_Hours, 7 Aug 2020 16:28 UTC, 64 points, 11 comments, 3 min read)
Should We Prioritize Long-Term Existential Risk? (MichaelDickens, 20 Aug 2020 2:23 UTC, 28 points, 17 comments, 3 min read)
Risks from Atomically Precise Manufacturing (MichaelA, 25 Aug 2020 9:53 UTC, 29 points, 4 comments, 2 min read, www.openphilanthropy.org)
What is existential security? (MichaelA, 1 Sep 2020 9:40 UTC, 31 points, 1 comment, 6 min read)
Does Economic History Point Toward a Singularity? (Ben Garfinkel, 2 Sep 2020 12:48 UTC, 137 points, 58 comments, 3 min read)
“Disappointing Futures” Might Be As Important As Existential Risks (MichaelDickens, 3 Sep 2020 1:15 UTC, 94 points, 18 comments, 25 min read)
An Argument for Why the Future May Be Good (Ben_West, 19 Jul 2017 22:03 UTC, 33 points, 30 comments, 4 min read)
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs (weeatquince, 5 Sep 2020 12:51 UTC, 56 points, 32 comments, 14 min read)
Longtermism and animal advocacy (Tobias_Baumann, 11 Nov 2020 17:44 UTC, 84 points, 8 comments, 4 min read, centerforreducingsuffering.org)
[Question] If someone identifies as a longtermist, should they donate to Founders Pledge’s top climate charities than to GiveWell’s top charities? (BrianTan, 26 Nov 2020 7:54 UTC, 24 points, 26 comments, 2 min read)
[Question] What is the impact of the Nuclear Ban Treaty? (DonyChristie, 29 Nov 2020 0:26 UTC, 22 points, 3 comments, 2 min read)
Long-Term Future Fund: November 2020 grant recommendations (Habryka, 3 Dec 2020 12:57 UTC, 75 points, 7 comments, 14 min read, app.effectivealtruism.org)
First S-Risk Intro Seminar (stefan.torges, 8 Dec 2020 9:23 UTC, 70 points, 2 comments, 1 min read)
What is going on in the world? (Katja_Grace, 18 Jan 2021 4:47 UTC, 101 points, 25 comments, 3 min read, meteuphoric.com)
How can we reduce s-risks? (Tobias_Baumann, 29 Jan 2021 15:46 UTC, 39 points, 3 comments, 1 min read, centerforreducingsuffering.org)
How can we influence the long-term future? (Tobias_Baumann, 6 Mar 2019 15:31 UTC, 11 points, 1 comment, 4 min read, s-risks.org)
Optimistic Resolution of the Fermi Paradox: Eternity in Six Hours & Grabby Aliens (steve6320, 11 Feb 2021 4:28 UTC, 19 points, 0 comments, 9 min read)
Characterising utopia (richard_ngo, 2 Jan 2020 0:24 UTC, 40 points, 3 comments, 21 min read)
Some EA Forum Posts I’d like to write (Linch, 23 Feb 2021 5:27 UTC, 98 points, 10 comments, 5 min read)
Stuart Armstrong: The far future of intelligent life across the universe (EA Global, 8 Jun 2018 7:15 UTC, 18 points, 0 comments, 12 min read, www.youtube.com)
Is Democracy a Fad? (Ben Garfinkel, 13 Mar 2021 12:40 UTC, 146 points, 36 comments, 7 min read)
International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide (philosophytorres, 22 Mar 2021 12:19 UTC, −3 points, 1 comment, 1 min read)
Christian Tarsney: Can we predictably improve the far future? (EA Global, 18 Oct 2019 7:40 UTC, 9 points, 0 comments, 1 min read, www.youtube.com)
Orienting towards the long-term future (Joseph Carlsmith) (EA Global, 3 Nov 2017 7:43 UTC, 16 points, 0 comments, 11 min read, www.youtube.com)
Joseph Carlsmith: Shaping the far future (lightning talk) (EA Global, 3 Nov 2017 7:43 UTC, 8 points, 0 comments, 1 min read, www.youtube.com)
Max Tegmark: Effective altruism, existential risk, and existential hope (EA Global, 2 Jun 2017 8:48 UTC, 10 points, 0 comments, 1 min read, www.youtube.com)
The Case for Strong Longtermism (Global Priorities Institute, 3 Sep 2019 1:17 UTC, 14 points, 1 comment, 3 min read, globalprioritiesinstitute.org)
Why might the future be good? (Paul_Christiano, 27 Feb 2013 5:00 UTC, 5 points, 0 comments, 9 min read)
Modeling the Human Trajectory (Open Philanthropy) (Aaron Gertler, 16 Jun 2020 9:27 UTC, 50 points, 4 comments, 2 min read, www.openphilanthropy.org)
Moral pluralism and longtermism | Sunyshore (BrownHairedEevee, 17 Apr 2021 0:14 UTC, 26 points, 0 comments, 6 min read, sunyshore.substack.com)
Actually possible: thoughts on Utopia (Joe_Carlsmith, 18 Jan 2021 8:27 UTC, 70 points, 3 comments, 13 min read)
Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill) (MichaelA, 2 May 2021 18:00 UTC, 30 points, 21 comments, 2 min read)
Thoughts on “A case against strong longtermism” (Masrani) (MichaelA, 3 May 2021 14:22 UTC, 39 points, 33 comments, 2 min read)
Eric Drexler: Paretotopian goal alignment (EA Global, 15 Mar 2019 14:51 UTC, 6 points, 0 comments, 10 min read, www.youtube.com)
Christian Tarsney on future bias and a possible solution to moral fanaticism (Pablo, 6 May 2021 10:39 UTC, 26 points, 6 comments, 1 min read, 80000hours.org)
Kardashev for Kindness (Mary Stowers, 11 Jun 2021 22:22 UTC, 30 points, 5 comments, 3 min read)
Christian Tarsney on future bias and a possible solution to moral fanaticism (80000_Hours, 5 May 2021 19:38 UTC, 7 points, 0 comments, 114 min read)
[Question] How large can the solar system’s economy get? (WilliamKiely, 1 Jul 2021 2:29 UTC, 8 points, 8 comments, 1 min read)
Robin Hanson’s Grabby Aliens model explained—part 1 (Writer, 22 Sep 2021 18:50 UTC, 50 points, 7 comments, 8 min read, youtu.be)
Prioritization Questions for Artificial Sentience (Jamie_Harris, 18 Oct 2021 14:07 UTC, 22 points, 2 comments, 8 min read, www.sentienceinstitute.org)
[Question] The last (Visa Om, 19 Oct 2021 10:41 UTC, 24 points, 14 comments, 1 min read)
Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism (Evan_Gaensbauer, 29 Dec 2021 20:20 UTC, 45 points, 1 comment, 2 min read)
Re: Some thoughts on vegetarianism and veganism (Fai, 25 Feb 2022 20:43 UTC, 47 points, 3 comments, 8 min read)
Hinges and crises (Jan_Kulveit, 17 Mar 2022 13:43 UTC, 72 points, 5 comments, 3 min read)
Past and Future Trajectory Changes (N N, 28 Mar 2022 20:04 UTC, 32 points, 5 comments, 12 min read, goodoptics.wordpress.com)
[3] The Edges of Our Universe (Ord, 2021) (Will Aldred, 9 May 2022 6:50 UTC, 23 points, 0 comments, 2 min read, arxiv.org)
Longtermism neglects anti-ageing research (freedomandutility, 12 Aug 2022 22:52 UTC, 13 points, 0 comments, 1 min read)
[Question] How to find *reliable* ways to improve the future? (Sjlver, 18 Aug 2022 12:47 UTC, 53 points, 35 comments, 2 min read)
The sense of a start (Gavin, 28 Sep 2022 13:37 UTC, 52 points, 0 comments, 5 min read, www.gleech.org)
[Question] For those working on longtermist projects, how do you stay motivated in the short-term? (warrenjordan, 8 Aug 2020 19:21 UTC, 13 points, 2 comments, 1 min read)
The applicability of transsentientist critical path analysis (Peter Sølling, 11 Aug 2020 11:26 UTC, 0 points, 2 comments, 32 min read, www.optimalaltruism.com)
New 3-hour podcast with Anders Sandberg about Grand Futures (Gus Docker, 6 Oct 2020 10:47 UTC, 21 points, 1 comment, 1 min read)
The Case for Space: A Longtermist Alternative to Existential Threat Reduction (Giga, 18 Nov 2020 13:09 UTC, 8 points, 5 comments, 2 min read)
Helping future researchers to better understand long-term forecasting (gabriel_wagner, 25 Nov 2020 18:55 UTC, 2 points, 1 comment, 2 min read)
Long-Term Future Fund: Ask Us Anything! (AdamGleave, 3 Dec 2020 13:44 UTC, 89 points, 154 comments, 1 min read)
Good v. Optimal Futures (RobertHarling, 11 Dec 2020 16:38 UTC, 32 points, 10 comments, 6 min read)
[Crosspost] Relativistic Colonization (itaibn, 31 Dec 2020 2:30 UTC, 7 points, 7 comments, 4 min read)
Legal Priorities Research: A Research Agenda (jonasschuett, 6 Jan 2021 21:47 UTC, 58 points, 4 comments, 1 min read)
13 Recent Publications on Existential Risk (Jan 2021 update) (HaydnBelfield, 8 Feb 2021 12:42 UTC, 7 points, 2 comments, 10 min read)
Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake (Mahendra Prasad, 11 Feb 2021 7:43 UTC, 16 points, 0 comments, 1 min read)
Response to Phil Torres’ ‘The Case Against Longtermism’ (HaydnBelfield, 8 Mar 2021 18:09 UTC, 130 points, 77 comments, 5 min read)
[Question] How should EAs manage their copyrights? (BrownHairedEevee, 9 Mar 2021 18:42 UTC, 15 points, 5 comments, 2 min read)
[Question] What analysis has been done of space colonization as a cause area? (Eli Rose, 9 Oct 2019 20:33 UTC, 14 points, 8 comments, 1 min read)
[Question] How worried should I be about a childless Disneyland? (Will Bradshaw, 28 Oct 2019 15:32 UTC, 24 points, 8 comments, 1 min read)
How to Survive the End of the Universe (avturchin, 28 Nov 2019 12:40 UTC, 47 points, 11 comments, 33 min read)
Beyond Astronomical Waste (Wei_Dai, 27 Dec 2018 9:27 UTC, 23 points, 2 comments, 1 min read, www.lesswrong.com)
Permanent Societal Improvements (Larks, 6 Sep 2015 1:30 UTC, 11 points, 10 comments, 4 min read)
The NPT: Learning from a Longtermist Success [Links!] (DannyBressler, 20 May 2021 0:39 UTC, 66 points, 6 comments, 2 min read)
Vignettes Workshop (AI Impacts) (kokotajlod, 15 Jun 2021 11:02 UTC, 43 points, 5 comments, 1 min read)
The Moral Value of the Far Future (Holden Karnofsky, 3 Jul 2014 12:43 UTC, 2 points, 0 comments, 8 min read, www.openphilanthropy.org)
Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model) (Yassin Alaya, 12 Aug 2021 20:14 UTC, 13 points, 4 comments, 1 min read)
Heuristics for clueless agents: how to get away with ignoring what matters most in ordinary decision-making (Global Priorities Institute, 31 May 2020 13:35 UTC, 3 points, 0 comments, 3 min read, globalprioritiesinstitute.org)
The asymmetry, uncertainty, and the long term (Global Priorities Institute, 30 Sep 2019 13:37 UTC, 5 points, 0 comments, 4 min read, globalprioritiesinstitute.org)
New Working Paper Series of the Legal Priorities Project (Legal Priorities Project, 18 Oct 2021 10:30 UTC, 60 points, 0 comments, 9 min read)
Robin Hanson’s Grabby Aliens model explained—part 2 (Writer, 9 Nov 2021 17:43 UTC, 24 points, 1 comment, 13 min read, youtu.be)
The Terminology of Artificial Sentience (Janet Pauketat, 28 Nov 2021 7:52 UTC, 29 points, 0 comments, 1 min read, www.sentienceinstitute.org)
What are the best (brief) resources to introduce EA & longtermism? (Akash, 19 Dec 2021 21:16 UTC, 5 points, 4 comments, 1 min read)
Longtermism in 1888: fermi estimate of heaven’s size. (Jackson Wagner, 25 Dec 2021 4:48 UTC, 107 points, 3 comments, 2 min read)
How should we value various possible long-run outcomes relative to each other—answering Holden Karnofsky’s question? (Omnizoid, 27 Feb 2022 3:52 UTC, 4 points, 0 comments, 13 min read)
Introductory video on safeguarding the long-term future (JulianHazell, 7 Mar 2022 12:52 UTC, 23 points, 3 comments, 1 min read)
Data Publication for the 2021 Artificial Intelligence, Morality, and Sentience (AIMS) Survey (Janet Pauketat, 24 Mar 2022 15:43 UTC, 21 points, 0 comments, 3 min read, www.sentienceinstitute.org)
Scenario Mapping Advanced AI Risk: Request for Participation with Data Collection (Kiliank, 27 Mar 2022 11:44 UTC, 14 points, 1 comment, 5 min read)
AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost] (Kiliank, 9 May 2022 19:53 UTC, 17 points, 2 comments, 8 min read)
[9] The Future of Human Evolution (Bostrom, 2004) (Will Aldred, 9 May 2022 23:01 UTC, 13 points, 0 comments, 1 min read, www.nickbostrom.com)
[2] The aestivation hypothesis for resolving Fermi’s paradox (Sandberg, Armstrong & Cirkovic, 2017) (Will Aldred, 10 May 2022 7:40 UTC, 15 points, 0 comments, 1 min read, arxiv.org)
[8] Intergalactic spreading of intelligent life and sharpening the Fermi paradox (Armstrong & Sandberg, 2012) (Will Aldred, 10 May 2022 19:53 UTC, 18 points, 0 comments, 1 min read, www.aleph.se)
Announcing Future Forum—Apply Now (isaakfreeman, 6 Jul 2022 17:35 UTC, 92 points, 11 comments, 4 min read)
[Question] Exercise for ‘What could the future hold? And why care?’ (EA Handbook, 18 May 2022 3:52 UTC, 2 points, 0 comments, 3 min read)
Longtermism, risk, and extinction (Richard Pettigrew, 4 Aug 2022 15:25 UTC, 55 points, 12 comments, 41 min read)
Longtermists Should Work on AI—There is No “AI Neutral” Scenario (simeon_c, 7 Aug 2022 16:43 UTC, 43 points, 62 comments, 6 min read)
An open letter to my great grand kids’ great grand kids (Locke, 10 Aug 2022 15:07 UTC, 1 point, 0 comments, 13 min read)
Deontology, the Paralysis Argument and altruistic longtermism (William D'Alessandro, 2 Sep 2022 3:23 UTC, 21 points, 3 comments, 14 min read)
Visualizations of the significance—persistence—contingency framework (Jakob, 2 Sep 2022 18:22 UTC, 26 points, 0 comments, 6 min read)
Announcing the Space Futures Initiative (Carson Ezell, 12 Sep 2022 12:37 UTC, 70 points, 3 comments, 2 min read)
Effective vs Altruism (Liat Zvi, 16 Sep 2022 9:37 UTC, 2 points, 1 comment, 2 min read)
Australians are pessimistic about longterm future (n=1050) (Oscar Delaney, 8 Oct 2022 4:33 UTC, 29 points, 3 comments, 1 min read)
A vision of the future (fictional short-story) (EffAlt, 15 Oct 2022 12:38 UTC, 12 points, 0 comments, 2 min read)
AGI and Lock-In (Lukas_Finnveden, 29 Oct 2022 1:56 UTC, 121 points, 24 comments, 10 min read, docs.google.com)
Trish’s Shortform (Trish, 26 Oct 2022 20:08 UTC, 2 points, 5 comments, 1 min read)
Summary of Deep Time Reckoning by Vincent Ialenti (vinegar10@gmail.com, 31 Oct 2022 20:00 UTC, 4 points, 0 comments, 10 min read)
Value of Querying 100+ People About Humanity’s Future (rodeo_flagellum, 8 Nov 2022 0:41 UTC, 5 points, 0 comments, 1 min read)
X-risk Mitigation Does Actually Require Longtermism (𝕮𝖎𝖓𝖊𝖗𝖆, 13 Nov 2022 19:40 UTC, 32 points, 6 comments, 1 min read)
Success Maximization: An Alternative to Expected Utility Theory and a Generalization of Maxipok to Moral Uncertainty (Mahendra Prasad, 26 Nov 2022 1:53 UTC, 13 points, 3 comments, 2 min read)
Fair Collective Efficient Altruism (Jobst Heitzig (vodle.it), 28 Nov 2022 13:35 UTC, 3 points, 1 comment, 5 min read, www.lesswrong.com)
Respect for others’ risk attitudes and the long-run future (Andreas Mogensen) (Global Priorities Institute, 2 Dec 2022 9:51 UTC, 3 points, 1 comment, 3 min read, globalprioritiesinstitute.org)