
Existential risk

Core Tag | Last edit: 16 Jun 2021 9:12 UTC by EA Wiki assistant

An existential risk is a risk that threatens the destruction of humanity’s longterm potential, i.e. the risk of an existential catastrophe (Bostrom 2012; Ord 2020a). Existential risks include natural risks, such as those posed by asteroids or supervolcanoes, as well as anthropogenic risks, such as accidents arising from synthetic biology or artificial intelligence.

A number of authors have argued that existential risks are especially important because the long-run future of humanity matters a great deal (Beckstead 2013; Bostrom 2013; Greaves & MacAskill 2019; Ord 2020a). Many of them hold that there is no intrinsic moral difference between the value of a life today and the value of a life a hundred years from now, and that the future may contain far more people than the present. On this view, preserving humanity’s potential is overwhelmingly important, even if the probability of an existential catastrophe is small.
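
A simple expected-value sketch illustrates the structure of this argument; the figures are purely hypothetical and chosen only for illustration. If the future could contain on the order of $10^{16}$ lives, then an intervention that reduces the probability of an existential catastrophe by just one millionth would have an expected value of roughly

$$10^{-6} \times 10^{16} \text{ lives} = 10^{10} \text{ lives},$$

which exceeds the number of people alive today. The force of the argument therefore rests on the size of the future at stake rather than on the catastrophe being likely.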

One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born (Roberts 2009). Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and so poorly understood that efforts to reduce them are less cost-effective than work on other promising areas.

Bibliography

Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.

Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.

Bostrom, Nick (2012) Frequently asked questions, Existential Risk: Threats to Humanity’s Future (updated 2013).
This FAQ introduces readers to existential risk.

Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.
An academic paper making the case for existential risk work.

Greaves, Hilary & William MacAskill (2019) The case for strong longtermism, GPI Working Paper No. 7-2019, Global Priorities Institute, University of Oxford.

Karnofsky, Holden (2014) The moral value of the far future, Open Philanthropy, July 3.

Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Ord, Toby (2020a) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Ord, Toby (2020b) Existential risks to humanity, in Pedro Conceição (ed.), The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.

Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, July 21 (updated December 1, 2020).

Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.
An article exploring ways in which a future full of Earth-originating life might be bad.

Whittlestone, Jess (2017) The long-term future, Effective Altruism, November 16.

Related entries

civilizational collapse | dystopia | estimation of existential risks | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | moral perspectives on existential risk reduction | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risks

Database of ex­is­ten­tial risk estimates

MichaelA15 Apr 2020 12:43 UTC
87 points
35 comments5 min readEA link

Ex­is­ten­tial risks are not just about humanity

MichaelA28 Apr 2020 0:09 UTC
18 points
0 comments5 min readEA link

X-risks to all life v. to humans

RobertHarling3 Jun 2020 15:40 UTC
56 points
33 comments4 min readEA link

Venn di­a­grams of ex­is­ten­tial, global, and suffer­ing catastrophes

MichaelA15 Jul 2020 12:28 UTC
60 points
2 comments7 min readEA link

The Im­por­tance of Un­known Ex­is­ten­tial Risks

MichaelDickens23 Jul 2020 19:09 UTC
66 points
11 comments12 min readEA link

Quan­tify­ing the prob­a­bil­ity of ex­is­ten­tial catas­tro­phe: A re­ply to Beard et al.

MichaelA10 Aug 2020 5:56 UTC
21 points
3 comments3 min readEA link
(gcrinstitute.org)

What is ex­is­ten­tial se­cu­rity?

MichaelA1 Sep 2020 9:40 UTC
26 points
1 comment6 min readEA link

Re­duc­ing long-term risks from malev­olent actors

David_Althaus29 Apr 2020 8:55 UTC
262 points
65 comments37 min readEA link

2019 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks19 Dec 2019 2:58 UTC
146 points
28 comments62 min readEA link

Causal di­a­grams of the paths to ex­is­ten­tial catastrophe

MichaelA1 Mar 2020 14:08 UTC
40 points
10 comments13 min readEA link

Clar­ify­ing ex­is­ten­tial risks and ex­is­ten­tial catastrophes

MichaelA24 Apr 2020 13:27 UTC
24 points
3 comments7 min readEA link

Some thoughts on Toby Ord’s ex­is­ten­tial risk estimates

MichaelA7 Apr 2020 2:19 UTC
54 points
31 comments9 min readEA link

[Question] How Much Does New Re­search In­form Us About Ex­is­ten­tial Cli­mate Risk?

zdgroff22 Jul 2020 23:47 UTC
60 points
5 comments1 min readEA link

Miti­gat­ing x-risk through modularity

Toby Newberry17 Dec 2020 19:54 UTC
83 points
4 comments14 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #1 - Sum­mary of the Paper

Alex HT21 Jun 2020 9:22 UTC
56 points
7 comments9 min readEA link

In­for­ma­tion se­cu­rity ca­reers for GCR reduction

ClaireZabel20 Jun 2019 23:56 UTC
158 points
34 comments8 min readEA link

Eight high-level un­cer­tain­ties about global catas­trophic and ex­is­ten­tial risk

SiebeRozendal28 Nov 2019 14:47 UTC
80 points
9 comments5 min readEA link

Ex­is­ten­tial Risk and Eco­nomic Growth

leopold3 Sep 2019 13:23 UTC
121 points
30 comments1 min readEA link

Book Re­view: The Precipice

Aaron Gertler9 Apr 2020 21:21 UTC
39 points
0 comments17 min readEA link
(slatestarcodex.com)

Im­prov­ing dis­aster shelters to in­crease the chances of re­cov­ery from a global catastrophe

Nick_Beckstead19 Feb 2014 22:17 UTC
14 points
5 commentsEA link

The timing of labour aimed at re­duc­ing ex­is­ten­tial risk

Toby_Ord24 Jul 2014 4:08 UTC
19 points
6 commentsEA link

Cru­cial ques­tions for longtermists

MichaelA29 Jul 2020 9:39 UTC
76 points
16 comments19 min readEA link

Giv­ing Now vs. Later for Ex­is­ten­tial Risk: An Ini­tial Approach

MichaelDickens29 Aug 2020 1:04 UTC
12 points
2 comments30 min readEA link

“Dis­ap­point­ing Fu­tures” Might Be As Im­por­tant As Ex­is­ten­tial Risks

MichaelDickens3 Sep 2020 1:15 UTC
73 points
7 comments30 min readEA link

Kevin Esvelt: Miti­gat­ing catas­trophic biorisks

EA Global3 Sep 2020 18:11 UTC
28 points
0 comments24 min readEA link
(www.youtube.com)

AI Gover­nance: Op­por­tu­nity and The­ory of Impact

Allan Dafoe17 Sep 2020 6:30 UTC
160 points
14 comments13 min readEA link

Ob­jec­tives of longter­mist policy making

Henrik Øberg Myhre10 Feb 2021 18:26 UTC
48 points
7 comments22 min readEA link

Some global catas­trophic risk estimates

Tamay10 Feb 2021 19:32 UTC
99 points
12 comments1 min readEA link

My per­sonal cruxes for fo­cus­ing on ex­is­ten­tial risks /​ longter­mism /​ any­thing other than just video games

MichaelA13 Apr 2021 5:50 UTC
48 points
28 comments3 min readEA link

Draft re­port on ex­is­ten­tial risk from power-seek­ing AI

Joe_Carlsmith28 Apr 2021 21:41 UTC
76 points
33 comments1 min readEA link

Long-Term Fu­ture Fund: April 2019 grant recommendations

Habryka23 Apr 2019 7:00 UTC
144 points
242 comments46 min readEA link

Which World Gets Saved

trammell9 Nov 2018 18:08 UTC
95 points
25 commentsEA link

Will the Treaty on the Pro­hi­bi­tion of Nu­clear Weapons af­fect nu­clear de­pro­lifer­a­tion through le­gal chan­nels?

Luisa_Rodriguez6 Dec 2019 10:38 UTC
98 points
5 comments32 min readEA link

Which nu­clear wars should worry us most?

Luisa_Rodriguez16 Jun 2019 23:31 UTC
93 points
12 comments6 min readEA link

How bad would nu­clear win­ter caused by a US-Rus­sia nu­clear ex­change be?

Luisa_Rodriguez20 Jun 2019 1:48 UTC
88 points
11 comments43 min readEA link

How many peo­ple would be kil­led as a di­rect re­sult of a US-Rus­sia nu­clear ex­change?

Luisa_Rodriguez30 Jun 2019 3:00 UTC
86 points
17 comments52 min readEA link

Long-Term Fu­ture Fund: Au­gust 2019 grant recommendations

Habryka3 Oct 2019 18:46 UTC
79 points
70 comments64 min readEA link

Would US and Rus­sian nu­clear forces sur­vive a first strike?

Luisa_Rodriguez18 Jun 2019 0:28 UTC
74 points
4 comments24 min readEA link

Bioinfohazards

Fin17 Sep 2019 2:41 UTC
79 points
10 comments18 min readEA link

Key points from The Dead Hand, David E. Hoffman

Kit9 Aug 2019 13:59 UTC
71 points
8 comments7 min readEA link

Tech­ni­cal AGI safety re­search out­side AI

richard_ngo18 Oct 2019 15:02 UTC
81 points
5 comments3 min readEA link

Long-Term Fu­ture Fund AMA

HelenToner19 Dec 2018 4:10 UTC
39 points
30 commentsEA link

AMA: Toby Ord, au­thor of “The Precipice” and co-founder of the EA movement

Toby_Ord17 Mar 2020 2:39 UTC
66 points
82 comments1 min readEA link

Crit­i­cal Re­view of ‘The Precipice’: A Re­assess­ment of the Risks of AI and Pandemics

Fods1211 May 2020 11:11 UTC
80 points
32 comments26 min readEA link

[Question] Pro­jects tack­ling nu­clear risk?

Sanjay29 May 2020 22:41 UTC
29 points
4 comments1 min readEA link

Bot­tle­necks and Solu­tions for the X-Risk Ecosystem

FlorentBerthet8 Oct 2018 12:47 UTC
42 points
14 commentsEA link

[Question] Is some kind of min­i­mally-in­va­sive mass surveillance re­quired for catas­trophic risk pre­ven­tion?

casebash1 Jul 2020 23:32 UTC
23 points
6 comments1 min readEA link

‘The Precipice’ Book Review

Matt g27 Jul 2020 22:10 UTC
19 points
1 comment4 min readEA link

A New X-Risk Fac­tor: Brain-Com­puter Interfaces

Jack10 Aug 2020 10:24 UTC
55 points
11 comments41 min readEA link

An­i­mal Rights, The Sin­gu­lar­ity, and Astro­nom­i­cal Suffering

deluks91720 Aug 2020 20:23 UTC
42 points
0 comments3 min readEA link

Fore­cast­ing Thread: Ex­is­ten­tial Risk

amandango22 Sep 2020 20:51 UTC
24 points
4 comments2 min readEA link
(www.lesswrong.com)

The end of the Bronze Age as an ex­am­ple of a sud­den col­lapse of civilization

FJehn28 Oct 2020 12:55 UTC
45 points
7 comments7 min readEA link

Nu­clear war is un­likely to cause hu­man extinction

landfish7 Nov 2020 5:39 UTC
36 points
23 comments11 min readEA link

ALLFED 2020 Highlights

AronM19 Nov 2020 22:06 UTC
48 points
5 comments26 min readEA link

Del­e­gated agents in prac­tice: How com­pa­nies might end up sel­l­ing AI ser­vices that act on be­half of con­sumers and coal­i­tions, and what this im­plies for safety research

remmelt26 Nov 2020 16:39 UTC
11 points
0 comments4 min readEA link

An­nounc­ing AXRP, the AI X-risk Re­search Podcast

DanielFilan23 Dec 2020 20:10 UTC
30 points
1 comment1 min readEA link

What is the like­li­hood that civ­i­liza­tional col­lapse would di­rectly lead to hu­man ex­tinc­tion (within decades)?

Luisa_Rodriguez24 Dec 2020 22:10 UTC
198 points
29 comments50 min readEA link

Assess­ing Cli­mate Change’s Con­tri­bu­tion to Global Catas­trophic Risk

HaydnBelfield19 Feb 2021 16:26 UTC
22 points
8 comments37 min readEA link

[Question] What do you make of the dooms­day ar­gu­ment?

niklas19 Mar 2021 6:30 UTC
12 points
8 comments1 min readEA link

In­tro­duc­ing The Non­lin­ear Fund: AI Safety re­search, in­cu­ba­tion, and funding

Kat Woods18 Mar 2021 14:07 UTC
64 points
32 comments5 min readEA link

The Epistemic Challenge to Longter­mism (Tarsney, 2020)

MichaelA4 Apr 2021 3:09 UTC
59 points
28 comments2 min readEA link
(globalprioritiesinstitute.org)

‘Are We Doomed?’ Memos

Miranda_Zhang19 May 2021 13:51 UTC
14 points
0 comments15 min readEA link

Help me find the crux be­tween EA/​XR and Progress Studies

jasoncrawford2 Jun 2021 18:47 UTC
96 points
35 comments3 min readEA link

Does cli­mate change de­serve more at­ten­tion within EA?

Louis_Dixon17 Apr 2019 6:50 UTC
114 points
66 comments15 min readEA link

Con­cern­ing the Re­cent 2019-Novel Coron­avirus Outbreak

Matthew_Barnett27 Jan 2020 5:47 UTC
104 points
140 comments3 min readEA link

Age-Weighted Voting

William_MacAskill12 Jul 2019 15:21 UTC
63 points
39 comments6 min readEA link

Launch­ing the EAF Fund

stefan.torges28 Nov 2018 17:13 UTC
60 points
14 comments4 min readEA link

Cor­po­rate Global Catas­trophic Risks (C-GCRs)

HaukeHillebrandt30 Jun 2019 16:53 UTC
63 points
17 comments10 min readEA link

How x-risk pro­jects are differ­ent from startups

Jan_Kulveit5 Apr 2019 7:35 UTC
50 points
9 comments1 min readEA link

Why mak­ing as­ter­oid deflec­tion tech might be bad

MichaelDello20 May 2020 23:01 UTC
21 points
10 comments6 min readEA link

Sur­viv­ing Global Catas­tro­phe in Nu­clear Sub­marines as Refuges

turchin5 Apr 2017 8:06 UTC
14 points
5 commentsEA link

21 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Sep 2019 up­date)

HaydnBelfield5 Nov 2019 14:26 UTC
31 points
3 comments13 min readEA link

Defin­ing Meta Ex­is­ten­tial Risk

rhys_lindmark9 Jul 2019 18:16 UTC
12 points
3 comments4 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port Oc­to­ber 2019 - Jan­uary 2020

HaydnBelfield8 Apr 2020 13:28 UTC
8 points
0 comments17 min readEA link

19 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan, Feb & Mar 2020 up­date)

HaydnBelfield8 Apr 2020 13:19 UTC
13 points
0 comments12 min readEA link

16 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Nov & Dec 2019 up­date)

HaydnBelfield15 Jan 2020 12:07 UTC
21 points
0 comments8 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port April—Septem­ber 2019

HaydnBelfield30 Sep 2019 19:20 UTC
14 points
1 comment16 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port: Novem­ber 2018 - April 2019

HaydnBelfield1 May 2019 15:34 UTC
10 points
16 comments15 min readEA link

CSER Spe­cial Is­sue: ‘Fu­tures of Re­search in Catas­trophic and Ex­is­ten­tial Risk’

HaydnBelfield2 Oct 2018 17:18 UTC
9 points
1 commentEA link

Cen­tre for the Study of Ex­is­ten­tial Risk: Six Month Re­port May-Oc­to­ber 2018

HaydnBelfield30 Nov 2018 20:32 UTC
26 points
2 commentsEA link

Cause Pri­ori­ti­za­tion in Light of In­spira­tional Disasters

stecas7 Jun 2020 19:52 UTC
2 points
15 comments3 min readEA link

ALLFED 2019 An­nual Re­port and Fundrais­ing Appeal

AronM23 Nov 2019 2:05 UTC
37 points
12 comments21 min readEA link

Differ­en­tial tech­nolog­i­cal de­vel­op­ment

velutvulpes25 Jun 2020 10:54 UTC
28 points
7 comments5 min readEA link

Civ­i­liza­tion Re-Emerg­ing After a Catas­trophic Collapse

MichaelA27 Jun 2020 3:22 UTC
30 points
17 comments2 min readEA link
(www.youtube.com)

Prevent­ing hu­man extinction

Peter Singer19 Aug 2013 21:07 UTC
12 points
8 commentsEA link

FLI AI Align­ment pod­cast: Evan Hub­inger on In­ner Align­ment, Outer Align­ment, and Pro­pos­als for Build­ing Safe Ad­vanced AI

evhub1 Jul 2020 20:59 UTC
13 points
2 comments1 min readEA link
(futureoflife.org)

[Question] Are there su­perfore­casts for ex­is­ten­tial risk?

Alex HT7 Jul 2020 7:39 UTC
24 points
13 comments1 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #2 - A Crit­i­cal Look at Model Conclusions

Ben_Snodin18 Aug 2020 10:25 UTC
57 points
8 comments17 min readEA link

Carl Ro­bichaud: Fac­ing the risk of nu­clear war in the 21st century

EA Global15 Jul 2020 17:17 UTC
12 points
0 comments12 min readEA link
(www.youtube.com)

A list of good heuris­tics that the case for AI X-risk fails

Aaron Gertler16 Jul 2020 9:56 UTC
23 points
9 comments2 min readEA link
(www.alignmentforum.org)

Mike Hue­mer on The Case for Tyranny

casebash16 Jul 2020 9:57 UTC
24 points
4 comments1 min readEA link
(fakenous.net)

Im­prov­ing the fu­ture by in­fluenc­ing ac­tors’ benev­olence, in­tel­li­gence, and power

MichaelA20 Jul 2020 10:00 UTC
56 points
15 comments17 min readEA link

Up­date on civ­i­liza­tional col­lapse research

landfish10 Feb 2020 23:40 UTC
52 points
7 comments3 min readEA link

Toby Ord: Fireside Chat and Q&A

EA Global21 Jul 2020 16:23 UTC
13 points
0 comments26 min readEA link
(www.youtube.com)

Bon­nie Jenk­ins: Fireside chat

EA Global22 Jul 2020 15:59 UTC
17 points
0 comments25 min readEA link
(www.youtube.com)

In­tel­lec­tual Diver­sity in AI Safety

KR22 Jul 2020 19:07 UTC
19 points
8 comments3 min readEA link

Scru­ti­niz­ing AI Risk (80K, #81) - v. quick summary

Louis_Dixon23 Jul 2020 19:02 UTC
10 points
0 comments3 min readEA link

Per­sonal thoughts on ca­reers in AI policy and strategy

carrickflynn27 Sep 2017 16:52 UTC
49 points
29 commentsEA link

Com­mon ground for longtermists

Tobias_Baumann29 Jul 2020 10:26 UTC
65 points
8 comments4 min readEA link

A pro­posed ad­just­ment to the as­tro­nom­i­cal waste argument

Nick_Beckstead27 May 2013 4:00 UTC
15 points
1 commentEA link

Con­ver­sa­tion with Holden Karnofsky, Nick Beck­stead, and Eliezer Yud­kowsky on the “long-run” per­spec­tive on effec­tive altruism

Nick_Beckstead18 Aug 2014 4:30 UTC
4 points
7 commentsEA link

EA read­ing list: longter­mism and ex­is­ten­tial risks

richard_ngo3 Aug 2020 9:52 UTC
34 points
3 comments1 min readEA link

The ex­pected value of ex­tinc­tion risk re­duc­tion is positive

JanBrauner9 Dec 2018 8:00 UTC
40 points
21 comments61 min readEA link

Ex­tinc­tion risk re­duc­tion and moral cir­cle ex­pan­sion: Spec­u­lat­ing sus­pi­cious convergence

MichaelA4 Aug 2020 11:38 UTC
12 points
4 comments6 min readEA link

Ad­dress­ing Global Poverty as a Strat­egy to Im­prove the Long-Term Future

bshumway7 Aug 2020 6:27 UTC
34 points
18 comments16 min readEA link

On Col­lapse Risk (C-Risk)

Pawntoe42 Jan 2020 5:10 UTC
34 points
10 comments8 min readEA link

My cur­rent thoughts on MIRI’s “highly re­li­able agent de­sign” work

Daniel_Dewey7 Jul 2017 1:17 UTC
50 points
64 commentsEA link

Cost-Effec­tive­ness of Foods for Global Catas­tro­phes: Even Bet­ter than Be­fore?

Denkenberger19 Nov 2018 21:57 UTC
23 points
4 commentsEA link

Should we be spend­ing no less on al­ter­nate foods than AI now?

Denkenberger29 Oct 2017 23:28 UTC
36 points
9 commentsEA link

[Paper] In­ter­ven­tions that May Prevent or Mol­lify Su­per­vol­canic Eruptions

Denkenberger15 Jan 2018 21:46 UTC
20 points
5 commentsEA link

APPG on Fu­ture Gen­er­a­tions im­pact re­port – Rais­ing the pro­file of fu­ture gen­er­a­tion in the UK Parliament

weeatquince12 Aug 2020 14:24 UTC
90 points
2 comments17 min readEA link

Should We Pri­ori­tize Long-Term Ex­is­ten­tial Risk?

MichaelDickens20 Aug 2020 2:23 UTC
28 points
17 comments3 min readEA link

We’re (sur­pris­ingly) more pos­i­tive about tack­ling bio risks: out­comes of a survey

Sanjay25 Aug 2020 9:14 UTC
48 points
5 comments11 min readEA link

Risks from Atom­i­cally Pre­cise Manufacturing

MichaelA25 Aug 2020 9:53 UTC
26 points
2 comments2 min readEA link
(www.openphilanthropy.org)

A case for strat­egy re­search: what it is and why we need more of it

SiebeRozendal20 Jun 2019 20:18 UTC
56 points
8 comments20 min readEA link

A (Very) Short His­tory of the Col­lapse of Civ­i­liza­tions, and Why it Matters

Davidmanheim30 Aug 2020 7:49 UTC
46 points
16 comments3 min readEA link

3 sug­ges­tions about jar­gon in EA

MichaelA5 Jul 2020 3:37 UTC
105 points
16 comments5 min readEA link

AMA: To­bias Bau­mann, Cen­ter for Re­duc­ing Suffering

Tobias_Baumann6 Sep 2020 10:45 UTC
46 points
45 comments1 min readEA link

Model­ling the odds of re­cov­ery from civ­i­liza­tional collapse

MichaelA17 Sep 2020 11:58 UTC
26 points
5 comments2 min readEA link

Hiring en­g­ineers and re­searchers to help al­ign GPT-3

Paul_Christiano1 Oct 2020 18:52 UTC
106 points
19 comments3 min readEA link

Int’l agree­ments to spend % of GDP on global pub­lic goods

HaukeHillebrandt22 Nov 2020 10:33 UTC
17 points
1 comment1 min readEA link

Should marginal longter­mist dona­tions sup­port fun­da­men­tal or in­ter­ven­tion re­search?

MichaelA30 Nov 2020 1:10 UTC
40 points
4 comments15 min readEA link

[Question] What is the im­pact of the Nu­clear Ban Treaty?

DonyChristie29 Nov 2020 0:26 UTC
22 points
3 comments2 min readEA link

The per­son-af­fect­ing value of ex­is­ten­tial risk reduction

Gregory_Lewis13 Apr 2018 1:44 UTC
47 points
34 commentsEA link

Some AI re­search ar­eas and their rele­vance to ex­is­ten­tial safety

critch15 Dec 2020 12:15 UTC
10 points
0 comments56 min readEA link
(alignmentforum.org)

[Question] What are the best ar­ti­cles/​blogs on the psy­chol­ogy of ex­is­ten­tial risk?

geoffreymiller16 Dec 2020 18:05 UTC
24 points
7 comments1 min readEA link

2020 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks21 Dec 2020 15:25 UTC
134 points
14 comments68 min readEA link

In­ter­na­tional Co­op­er­a­tion Against Ex­is­ten­tial Risks: In­sights from In­ter­na­tional Re­la­tions Theory

Jenny_Xiao11 Jan 2021 7:10 UTC
37 points
7 comments6 min readEA link

Global Pri­ori­ties In­sti­tute: Re­search Agenda

Aaron Gertler20 Jan 2021 20:09 UTC
19 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

Some EA Fo­rum Posts I’d like to write

Linch23 Feb 2021 5:27 UTC
83 points
10 comments7 min readEA link

In­ter­ven­tion Pro­file: Bal­lot Initiatives

Jason Schukraft13 Jan 2020 15:41 UTC
110 points
4 comments42 min readEA link

Rus­sian x-risks newslet­ter, sum­mer 2019

avturchin7 Sep 2019 9:55 UTC
23 points
1 comment4 min readEA link

Rus­sian x-risks newslet­ter win­ter 2019-2020

avturchin1 Mar 2020 12:51 UTC
10 points
4 comments2 min readEA link

Rus­sian x-risks newslet­ter, fall 2019

avturchin3 Dec 2019 17:01 UTC
27 points
2 comments3 min readEA link

How likely is a nu­clear ex­change be­tween the US and Rus­sia?

Luisa_Rodriguez20 Jun 2019 1:49 UTC
63 points
10 comments14 min readEA link

[Notes] Steven Pinker and Yu­val Noah Harari in conversation

Louis_Dixon9 Feb 2020 12:49 UTC
29 points
2 comments7 min readEA link

Pres­i­dent Trump as a Global Catas­trophic Risk

HaydnBelfield18 Nov 2016 18:02 UTC
22 points
17 comments27 min readEA link

[Question] What ac­tions would ob­vi­ously de­crease x-risk?

reallyeli6 Oct 2019 21:00 UTC
22 points
27 comments1 min readEA link

Jaan Tal­linn: Fireside chat (2018)

EA Global8 Jun 2018 7:15 UTC
8 points
0 comments12 min readEA link
(www.youtube.com)

Seth Baum: Rec­on­cil­ing in­ter­na­tional security

EA Global8 Jun 2018 7:15 UTC
8 points
0 comments15 min readEA link
(www.youtube.com)

Amesh Adalja: Pan­demic pathogens

EA Global8 Jun 2018 7:15 UTC
8 points
0 comments20 min readEA link
(www.youtube.com)

Assess­ing global catas­trophic biolog­i­cal risks (Crys­tal Wat­son)

EA Global8 Jun 2018 7:15 UTC
8 points
0 comments9 min readEA link
(www.youtube.com)

Toby Ord: Q&A (2020)

EA Global13 Jun 2020 8:17 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Luisa Ro­driguez: The like­li­hood and sever­ity of a US-Rus­sia nu­clear exchange

EA Global18 Oct 2019 18:05 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Ex­is­ten­tial risk and the fu­ture of hu­man­ity (Toby Ord)

EA Global21 Mar 2020 18:05 UTC
9 points
0 comments14 min readEA link
(www.youtube.com)

Notes on “Bioter­ror and Biowar­fare” (2006)

MichaelA1 Mar 2021 9:42 UTC
23 points
6 comments4 min readEA link

In­ter­view Thomas Moynihan: “The dis­cov­ery of ex­tinc­tion is a philo­soph­i­cal cen­tre­piece of the mod­ern age”

felix.h6 Mar 2021 11:51 UTC
11 points
0 comments18 min readEA link

Pos­si­ble mis­con­cep­tions about (strong) longtermism

jackmalde9 Mar 2021 17:58 UTC
77 points
43 comments19 min readEA link

Jenny Xiao: Dual moral obli­ga­tions and in­ter­na­tional co­op­er­a­tion against global catas­trophic risks

EA Global21 Nov 2020 8:12 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Jaan Tal­linn: Fireside chat (2020)

EA Global21 Nov 2020 8:12 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Nick Beck­stead: Fireside chat (2020)

EA Global21 Nov 2020 8:12 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

A Biose­cu­rity and Biorisk Read­ing+ List

tessa14 Mar 2021 2:30 UTC
75 points
8 comments11 min readEA link

In­ter­na­tional Crim­i­nal Law and the Fu­ture of Hu­man­ity: A The­ory of the Crime of Omnicide

philosophytorres22 Mar 2021 12:19 UTC
7 points
1 comment1 min readEA link

An­drew Sny­der Beat­tie: Biotech­nol­ogy and ex­is­ten­tial risk

EA Global3 Nov 2017 7:43 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Marc Lip­sitch: Prevent­ing catas­trophic risks by miti­gat­ing sub­catas­trophic ones

EA Global2 Jun 2017 8:48 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Ge­orge Church, Kevin Esvelt, & Nathan Labenz: Open un­til dan­ger­ous — gene drive and the case for re­form­ing research

EA Global2 Jun 2017 8:48 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

Max Teg­mark: Effec­tive al­tru­ism, ex­is­ten­tial risk, and ex­is­ten­tial hope

EA Global2 Jun 2017 8:48 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Max Daniel: Why s-risks are the worst ex­is­ten­tial risks, and how to pre­vent them

EA Global2 Jun 2017 8:48 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

The Case for Strong Longtermism

Global Priorities Institute3 Sep 2019 1:17 UTC
12 points
1 comment6 min readEA link
(globalprioritiesinstitute.org)

In­tro­duc­ing the Si­mon In­sti­tute for Longterm Gover­nance (SI)

maxime29 Mar 2021 18:10 UTC
110 points
21 comments11 min readEA link

New Cause Area: Pro­gram­matic Mettā

Milan_Griffes1 Apr 2021 9:00 UTC
8 points
4 comments2 min readEA link

Case stud­ies of self-gov­er­nance to re­duce tech­nol­ogy risk

Jia6 Apr 2021 8:49 UTC
43 points
5 comments7 min readEA link

AGI risk: analo­gies & arguments

technicalities23 Mar 2021 13:18 UTC
23 points
3 comments7 min readEA link

[Link post] Co­or­di­na­tion challenges for pre­vent­ing AI conflict

stefan.torges9 Mar 2021 9:39 UTC
48 points
0 comments1 min readEA link
(longtermrisk.org)

What Ques­tions Should We Ask Speak­ers at the Stan­ford Ex­is­ten­tial Risks Con­fer­ence?

kuhanj10 Apr 2021 0:51 UTC
19 points
2 comments2 min readEA link

Talk­ing With a Biose­cu­rity Pro­fes­sional (Quick Notes)

AllAmericanBreakfast10 Apr 2021 4:23 UTC
35 points
0 comments2 min readEA link

[Question] Is there ev­i­dence that recom­mender sys­tems are chang­ing users’ prefer­ences?

zdgroff12 Apr 2021 19:11 UTC
60 points
15 comments1 min readEA link

Why I ex­pect suc­cess­ful (nar­row) alignment

Tobias_Baumann29 Dec 2018 15:46 UTC
18 points
10 commentsEA link
(s-risks.org)

A ty­pol­ogy of s-risks

Tobias_Baumann21 Dec 2018 18:23 UTC
25 points
1 comment1 min readEA link
(s-risks.org)

New in­fo­graphic based on “The Precipice”. any feed­back?

michael.andregg14 Jan 2021 7:29 UTC
43 points
4 comments1 min readEA link

Mo­ral plu­ral­ism and longter­mism | Sunyshore

evelynciara17 Apr 2021 0:14 UTC
26 points
0 comments5 min readEA link
(sunyshore.substack.com)

On fu­ture peo­ple, look­ing back at 21st cen­tury longtermism

Joe_Carlsmith22 Mar 2021 8:21 UTC
93 points
13 comments12 min readEA link

EAGxVir­tual 2020 light­ning talks

EA Global25 Jan 2021 15:32 UTC
12 points
1 comment33 min readEA link
(www.youtube.com)

Com­par­a­tive Bias

Joey5 Nov 2014 5:57 UTC
5 points
5 commentsEA link

Ex­is­ten­tial Risk: More to explore

EA Introductory Program1 Jan 2021 10:15 UTC
1 point
0 comments1 min readEA link

Thoughts on “The Case for Strong Longter­mism” (Greaves & MacAskill)

MichaelA2 May 2021 18:00 UTC
30 points
19 comments2 min readEA link

Thoughts on “A case against strong longter­mism” (Mas­rani)

MichaelA3 May 2021 14:22 UTC
39 points
33 comments2 min readEA link

GCRI Open Call for Ad­visees and Collaborators

McKenna_Fitzgerald20 May 2021 22:07 UTC
13 points
0 comments4 min readEA link

[Question] MSc in Risk and Disaster Science? (UCL) - Does this fit the EA path?

yazanasad25 May 2021 3:33 UTC
10 points
6 comments1 min readEA link

Long-Term Fu­ture Fund: May 2021 grant recommendations

abergal27 May 2021 6:44 UTC
110 points
15 comments57 min readEA link

Fi­nal Re­port of the Na­tional Se­cu­rity Com­mis­sion on Ar­tifi­cial In­tel­li­gence (NSCAI, 2021)

MichaelA1 Jun 2021 8:19 UTC
48 points
3 comments4 min readEA link
(www.nscai.gov)

Progress stud­ies vs. longter­mist EA: some differences

Max_Daniel31 May 2021 21:35 UTC
75 points
26 comments3 min readEA link

Astro­nom­i­cal Waste: The Op­por­tu­nity Cost of De­layed Tech­nolog­i­cal Devel­op­ment—Nick Bostrom (2003)

velutvulpes10 Jun 2021 21:21 UTC
10 points
0 comments8 min readEA link
(www.nickbostrom.com)

An­nounc­ing the Nu­clear Risk Fore­cast­ing Tournament

MichaelA16 Jun 2021 16:12 UTC
31 points
0 comments2 min readEA link

Quotes about the long reflection

MichaelA5 Mar 2020 7:48 UTC
52 points
12 comments13 min readEA link

[Question] What ques­tions could COVID-19 provide ev­i­dence on that would help guide fu­ture EA de­ci­sions?

MichaelA27 Mar 2020 5:51 UTC
7 points
7 comments1 min readEA link

Differ­en­tial progress /​ in­tel­lec­tual progress /​ tech­nolog­i­cal development

MichaelA24 Apr 2020 14:08 UTC
31 points
14 comments7 min readEA link

Space gov­er­nance is im­por­tant, tractable and neglected

Tobias_Baumann7 Jan 2020 11:24 UTC
85 points
18 comments7 min readEA link

How tractable is chang­ing the course of his­tory?

Jamie_Harris22 May 2019 15:29 UTC
41 points
2 comments7 min readEA link
(www.sentienceinstitute.org)

Economist: “What’s the worst that could hap­pen”. A pos­i­tive, sharable but vague ar­ti­cle on Ex­is­ten­tial Risk

Nathan Young8 Jul 2020 10:37 UTC
12 points
3 comments2 min readEA link

[Question] Why al­tru­ism at all?

Singleton12 Jul 2020 22:04 UTC
−2 points
1 comment1 min readEA link

[Question] A bill to mas­sively ex­pand NSF to tech do­mains. What’s the rele­vance for x-risk?

EdoArad12 Jul 2020 15:20 UTC
22 points
4 comments1 min readEA link

Cli­mate change dona­tion recommendations

Sanjay16 Jul 2020 21:17 UTC
40 points
7 comments14 min readEA link

[Question] Put­ting Peo­ple First in a Cul­ture of De­hu­man­iza­tion

jhealy22 Jul 2020 3:31 UTC
16 points
3 comments1 min readEA link

[Question] Is nan­otech­nol­ogy (such as APM) im­por­tant for EAs’ to work on?

pixel_brownie_software12 Mar 2020 15:36 UTC
6 points
9 comments1 min readEA link

[Question] What do we do if AI doesn’t take over the world, but still causes a sig­nifi­cant global prob­lem?

James_Banks2 Aug 2020 3:35 UTC
16 points
5 comments1 min readEA link

State Space of X-Risk Trajectories

David_Kristoffersson6 Feb 2020 13:37 UTC
24 points
6 comments9 min readEA link

The Precipice: a risky re­view by a non-EA

fmoreno8 Aug 2020 14:40 UTC
13 points
0 comments18 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #3 - Ex­ten­sions and Variations

Alex HT20 Dec 2020 12:39 UTC
5 points
0 comments12 min readEA link

Ur­gency vs. Pa­tience—a Toy Model

Alex HT19 Aug 2020 14:13 UTC
39 points
4 comments3 min readEA link

[Question] Is ex­is­ten­tial risk more press­ing than other ways to im­prove the long-term fu­ture?

evelynciara20 Aug 2020 3:50 UTC
23 points
1 comment1 min readEA link

On­line Con­fer­ence Op­por­tu­nity for EA Grad Students

jonathancourtney21 Aug 2020 17:31 UTC
8 points
1 comment1 min readEA link

On The Rel­a­tive Long-Term Fu­ture Im­por­tance of In­vest­ments in Eco­nomic Growth and Global Catas­trophic Risk Reduction

poliboni30 Mar 2020 20:11 UTC
33 points
1 comment1 min readEA link

[Question] Are so­cial me­dia al­gorithms an ex­is­ten­tial risk?

BarryGrimes15 Sep 2020 8:52 UTC
24 points
13 comments1 min readEA link

Is Tech­nol­ogy Ac­tu­ally Mak­ing Things Bet­ter? – Pairagraph

evelynciara1 Oct 2020 16:06 UTC
16 points
1 comment1 min readEA link
(www.pairagraph.com)

New 3-hour pod­cast with An­ders Sand­berg about Grand Futures

Gus Docker6 Oct 2020 10:47 UTC
21 points
1 comment1 min readEA link

Leopold Aschen­bren­ner re­turns to X-risk and growth

nickwhitaker20 Oct 2020 23:24 UTC
24 points
3 comments1 min readEA link

4 Years Later: Pres­i­dent Trump and Global Catas­trophic Risk

HaydnBelfield25 Oct 2020 16:28 UTC
23 points
9 comments9 min readEA link

Why those who care about catas­trophic and ex­is­ten­tial risk should care about au­tonomous weapons

aaguirre11 Nov 2020 17:27 UTC
90 points
30 comments19 min readEA link

Plan of Ac­tion to Prevent Hu­man Ex­tinc­tion Risks

turchin14 Mar 2016 14:51 UTC
11 points
3 commentsEA link

The Map of Shelters and Re­fuges from Global Risks (Plan B of X-risks Preven­tion)

turchin22 Oct 2016 10:22 UTC
11 points
9 commentsEA link

Im­prov­ing long-run civil­i­sa­tional robustness

RyanCarey10 May 2016 11:14 UTC
9 points
6 commentsEA link

[Notes] Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

Louis_Dixon14 Jan 2020 22:13 UTC
39 points
7 comments14 min readEA link

Pangea: The Worst of Times

Halstead5 Apr 2020 15:13 UTC
82 points
7 comments8 min readEA link

Cli­mate change, geo­eng­ineer­ing, and ex­is­ten­tial risk

Halstead20 Mar 2018 10:48 UTC
16 points
11 commentsEA link

The Map of Im­pact Risks and As­teroid Defense

turchin3 Nov 2016 15:34 UTC
7 points
9 commentsEA link

[Paper] Sur­viv­ing global risks through the preser­va­tion of hu­man­ity’s data on the Moon

turchin3 Mar 2018 18:39 UTC
11 points
5 commentsEA link

11 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (June 2020 up­date)

HaydnBelfield2 Jul 2020 13:09 UTC
14 points
0 comments6 min readEA link
(www.cser.ac.uk)

W-Risk and the Tech­nolog­i­cal Wavefront (Nell Wat­son)

Aaron Gertler11 Nov 2018 23:22 UTC
9 points
1 commentEA link

Com­bi­na­tion Ex­is­ten­tial Risks

ozymandias14 Jan 2019 19:29 UTC
26 points
5 commentsEA link
(thingofthings.wordpress.com)

[Question] Donat­ing against Short Term AI risks

Jan-WillemvanPutten16 Nov 2020 12:23 UTC
5 points
9 comments1 min readEA link

How Rood­man’s GWP model trans­lates to TAI timelines

kokotajlod16 Nov 2020 14:11 UTC
21 points
0 comments2 min readEA link

Ques­tions for Jaan Tal­linn’s fireside chat in EAGxAPAC this weekend

BrianTan17 Nov 2020 2:12 UTC
13 points
8 comments1 min readEA link

Ques­tions for Nick Beck­stead’s fireside chat in EAGxAPAC this weekend

BrianTan17 Nov 2020 15:05 UTC
12 points
15 comments3 min readEA link

An­nounc­ing AI Safety Support

Linda Linsefors19 Nov 2020 20:19 UTC
53 points
0 comments4 min readEA link

Long-Term Fu­ture Fund: Ask Us Any­thing!

AdamGleave3 Dec 2020 13:44 UTC
88 points
154 comments1 min readEA link

[Question] Can we con­vince peo­ple to work on AI safety with­out con­vinc­ing them about AGI hap­pen­ing this cen­tury?

BrianTan26 Nov 2020 14:46 UTC
8 points
3 comments2 min readEA link

A toy model for tech­nolog­i­cal ex­is­ten­tial risk

RobertHarling28 Nov 2020 11:55 UTC
10 points
3 comments4 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port June—Septem­ber 2020

HaydnBelfield2 Dec 2020 18:33 UTC
22 points
0 comments18 min readEA link

[Question] Look­ing for col­lab­o­ra­tors af­ter last 80k pod­cast with Tris­tan Harris

Jan-WillemvanPutten7 Dec 2020 22:23 UTC
19 points
7 comments3 min readEA link

Good v. Op­ti­mal Futures

RobertHarling11 Dec 2020 16:38 UTC
31 points
10 comments6 min readEA link

Longter­mism which doesn’t care about Ex­tinc­tion—Im­pli­ca­tions of Be­natar’s asym­me­try be­tween pain and pleasure

jushy19 Dec 2020 12:31 UTC
17 points
11 comments1 min readEA link

I made a video on en­g­ineered pandemics

Jeroen_W21 Dec 2020 21:07 UTC
33 points
6 comments1 min readEA link

Against GDP as a met­ric for timelines and take­off speeds

kokotajlod29 Dec 2020 17:50 UTC
41 points
6 comments14 min readEA link

[Cross­post] Rel­a­tivis­tic Colonization

itaibn31 Dec 2020 2:30 UTC
7 points
7 comments4 min readEA link

Le­gal Pri­ori­ties Re­search: A Re­search Agenda

jonasschuett6 Jan 2021 21:47 UTC
57 points
4 comments1 min readEA link

Noah Tay­lor: Devel­op­ing a re­search agenda for bridg­ing ex­is­ten­tial risk and peace and con­flict studies

EA Global21 Jan 2021 16:19 UTC
20 points
0 comments20 min readEA link
(www.youtube.com)

[Pod­cast] Si­mon Beard on Parfit, Cli­mate Change, and Ex­is­ten­tial Risk

finm28 Jan 2021 19:47 UTC
11 points
0 comments1 min readEA link
(hearthisidea.com)

13 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan 2021 up­date)

HaydnBelfield8 Feb 2021 12:42 UTC
7 points
2 comments10 min readEA link

Stu­art Rus­sell Hu­man Com­pat­i­ble AI Roundtable with Allan Dafoe, Rob Re­ich, & Ma­ri­etje Schaake

Mahendra Prasad11 Feb 2021 7:43 UTC
16 points
0 comments1 min readEA link

Pan­demic Net­work Radar: A New Solu­tion for the Tragedy of the Com­mons, with Po-Shen Loh — What EAs can do

ChrisLakin18 Feb 2021 1:42 UTC
7 points
0 comments1 min readEA link

In­ter­view with Tom Chivers: “AI is a plau­si­ble ex­is­ten­tial risk, but it feels as if I’m in Pas­cal’s mug­ging”

felix.h21 Feb 2021 13:41 UTC
16 points
1 comment7 min readEA link

Surveillance and free ex­pres­sion | Sunyshore

evelynciara23 Feb 2021 2:14 UTC
9 points
0 comments9 min readEA link
(sunyshore.substack.com)

How to Sur­vive the End of the Universe

avturchin28 Nov 2019 12:40 UTC
42 points
11 comments33 min readEA link

A full syl­labus on longtermism

jtm5 Mar 2021 22:57 UTC
100 points
9 comments8 min readEA link

What is the ar­gu­ment against a Thanos-ing all hu­man­ity to save the lives of other sen­tient be­ings?

somethoughts7 Mar 2021 8:02 UTC
0 points
11 comments3 min readEA link

Re­sponse to Phil Tor­res’ ‘The Case Against Longter­mism’

HaydnBelfield8 Mar 2021 18:09 UTC
84 points
67 comments5 min readEA link

2018 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks18 Dec 2018 4:48 UTC
115 points
28 comments63 min readEA link

2017 AI Safety Liter­a­ture Re­view and Char­ity Comparison

Larks20 Dec 2017 21:54 UTC
43 points
17 commentsEA link

2016 AI Risk Liter­a­ture Re­view and Char­ity Comparison

Larks13 Dec 2016 4:36 UTC
53 points
22 commentsEA link

Cri­tique of Su­per­in­tel­li­gence Part 1

Fods1213 Dec 2018 5:10 UTC
20 points
13 commentsEA link

Cri­tique of Su­per­in­tel­li­gence Part 2

Fods1213 Dec 2018 5:12 UTC
7 points
12 commentsEA link

New pop­u­lar sci­ence book on x-risks: “End Times”

HaukeHillebrandt1 Oct 2019 7:18 UTC
17 points
2 comments2 min readEA link

[Pod­cast] Thomas Moynihan on the His­tory of Ex­is­ten­tial Risk

finm22 Mar 2021 11:07 UTC
26 points
2 comments1 min readEA link
(hearthisidea.com)

Ap­ply to the Stan­ford Ex­is­ten­tial Risks Con­fer­ence! (April 17-18)

kuhanj26 Mar 2021 18:28 UTC
24 points
2 comments1 min readEA link

How to PhD

eca28 Mar 2021 19:56 UTC
84 points
28 comments11 min readEA link

Ex­is­ten­tial risk as com­mon cause

technicalities5 Dec 2018 14:01 UTC
32 points
22 commentsEA link

Risk fac­tors for s-risks

Tobias_Baumann13 Feb 2019 17:51 UTC
38 points
3 comments1 min readEA link
(s-risks.org)

[Question] What is EA opinion on The Bul­letin of the Atomic Scien­tists?

VPetukhov2 Dec 2019 5:45 UTC
35 points
9 comments1 min readEA link

[Link] New Founders Pledge re­port on ex­is­ten­tial risk

Halstead28 Mar 2019 11:46 UTC
40 points
1 comment1 min readEA link

Five GCR grants from the Global Challenges Foundation

Aaron Gertler16 Jan 2020 0:46 UTC
34 points
1 comment5 min readEA link

Op­tion Value, an In­tro­duc­tory Guide

Lev_Maresca21 Feb 2020 14:45 UTC
29 points
3 comments6 min readEA link

Cur­rent Es­ti­mates for Like­li­hood of X-Risk?

rhys_lindmark6 Aug 2018 18:05 UTC
24 points
23 comments1 min readEA link

Some con­sid­er­a­tions for differ­ent ways to re­duce x-risk

tyrael4 Feb 2016 3:21 UTC
23 points
36 commentsEA link

X-risks of SETI and METI?

geoffreymiller2 Jul 2019 22:41 UTC
18 points
11 comments1 min readEA link

[Link] Thiel on GCRs

Milan_Griffes22 Jul 2019 20:47 UTC
28 points
11 comments1 min readEA link

[Question] How wor­ried should I be about a child­less Dis­ney­land?

willbradshaw28 Oct 2019 15:32 UTC
24 points
8 comments1 min readEA link

Beyond Astro­nom­i­cal Waste

Wei_Dai27 Dec 2018 9:27 UTC
23 points
2 commentsEA link
(www.lesswrong.com)

5 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (April 2020 up­date)

HaydnBelfield29 Apr 2020 9:37 UTC
23 points
1 comment4 min readEA link

Toby Ord: Fireside chat (2018)

EA Global1 Mar 2019 15:48 UTC
19 points
0 comments28 min readEA link
(www.youtube.com)

In­ter­na­tional co­op­er­a­tion as a tool to re­duce two ex­is­ten­tial risks.

johl@umich.edu19 Apr 2021 16:51 UTC
26 points
4 comments27 min readEA link

Jaime Yas­sif: Re­duc­ing global catas­trophic biolog­i­cal risks

EA Global25 Oct 2020 5:48 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Toby Ord at EA Global: Reconnect

EA Global20 Mar 2021 7:00 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

[Question] What would “do­ing enough” to safe­guard the long-term fu­ture look like?

HStencil22 Apr 2020 21:47 UTC
20 points
0 comments1 min readEA link

[Question] Is there any­thing like “green bonds” for x-risk miti­ga­tion?

Ramiro30 Jun 2020 0:33 UTC
21 points
1 comment1 min readEA link

Alien coloniza­tion of Earth’s im­pact the the rel­a­tive im­por­tance of re­duc­ing differ­ent ex­is­ten­tial risks

Evira5 Sep 2019 0:27 UTC
7 points
8 comments1 min readEA link

Niel Bow­er­man: Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

EA Global17 Jan 2020 1:07 UTC
7 points
2 comments15 min readEA link
(www.youtube.com)

En­light­ened Con­cerns of Tomorrow

cassidynelson15 Mar 2018 5:29 UTC
15 points
8 commentsEA link

Emily Grundy: Aus­trali­ans’ per­cep­tions of global catas­trophic risks

EA Global21 Nov 2020 8:12 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Luisa Ro­driguez: How to do em­piri­cal cause pri­ori­ti­za­tion re­search

EA Global21 Nov 2020 8:12 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Lec­ture Videos from Cam­bridge Con­fer­ence on Catas­trophic Risk

HaydnBelfield23 Apr 2019 16:03 UTC
15 points
3 comments1 min readEA link

Public Opinion about Ex­is­ten­tial Risk

cscanlon_duplicate0.889559973201212525 Aug 2018 12:34 UTC
13 points
9 commentsEA link

Policy and re­search ideas to re­duce ex­is­ten­tial risk

80000_Hours27 Apr 2020 8:46 UTC
2 points
0 comments4 min readEA link
(80000hours.org)

The case for re­duc­ing ex­is­ten­tial risk

80000_Hours1 Oct 2017 8:44 UTC
9 points
0 comments4 min readEA link
(80000hours.org)

My Cause Selec­tion: Dave Denkenberger

Denkenberger16 Aug 2015 15:06 UTC
6 points
7 commentsEA link

Kris­tian Rönn: Global challenges

EA Global11 Aug 2017 8:19 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Is­lands as re­fuges for sur­viv­ing global catastrophes

turchin13 Sep 2018 13:33 UTC
3 points
10 commentsEA link

Causal Net­work Model III: Findings

Alex_Barry22 Nov 2017 15:43 UTC
7 points
4 commentsEA link

An In­for­mal Re­view of Space Exploration

kbog31 Jan 2020 13:16 UTC
43 points
4 comments35 min readEA link

The NPT: Learn­ing from a Longter­mist Suc­cess [Links!]

DannyBressler20 May 2021 0:39 UTC
66 points
6 comments2 min readEA link

[Feed­back Re­quest] Hyper­text Fic­tion Piece on Ex­is­ten­tial Hope

Miranda_Zhang30 May 2021 15:44 UTC
31 points
0 comments1 min readEA link

High Im­pact Ca­reers in For­mal Ver­ifi­ca­tion: Ar­tifi­cial Intelligence

quinn5 Jun 2021 14:45 UTC
23 points
5 comments16 min readEA link

US Policy Ca­reers Speaker Series—Sum­mer 2021

Mauricio18 Jun 2021 20:01 UTC
62 points
0 comments2 min readEA link