
Existential risk

Core Tag · Last edit: 21 Jul 2022 1:53 UTC by Lizka

An existential risk is the risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s longterm potential.[1][2] Existential risks include natural risks, such as those posed by asteroids or supervolcanoes, as well as anthropogenic risks, such as mishaps resulting from synthetic biology or artificial intelligence.

A number of authors have argued that existential risks are especially important because the long-run future of humanity matters a great deal.[1][3][4][5] Many believe that there is no intrinsic moral difference between the value of a life today and the value of a life in a hundred years. Since there may be vastly more people in the future than there are now, these authors argue that it is overwhelmingly important to preserve humanity’s long-term potential, even if the risks to it are small.

One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born.[6] Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and so poorly understood that existential risk reduction is less cost-effective than work on other promising areas.

Recommendations

In The Precipice: Existential Risk and the Future of Humanity, Toby Ord offers a range of policy and research recommendations for handling existential risks.[7]

Further reading

Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.

Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.

Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Ord, Toby (2020) Existential risks to humanity in Pedro Conceição (ed.) The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.

Related entries

civilizational collapse | criticism of longtermism and existential risk studies | dystopia | estimation of existential risks | ethics of existential risk | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risk

  1. ^

    Bostrom, Nick (2012) Frequently asked questions, Existential Risk: Threats to Humanity’s Future (updated 2013).

  2. ^

    Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

  3. ^

    Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.

  4. ^

    Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.

  5. ^

    Greaves, Hilary & William MacAskill (2019) The case for strong longtermism, GPI Working Paper No. 7-2019, Global Priorities Institute, University of Oxford.

  6. ^

    Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, July 21 (updated 1 December 2020).

  7. ^

    Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, pp. 280–281.

Venn di­a­grams of ex­is­ten­tial, global, and suffer­ing catastrophes

MichaelA15 Jul 2020 12:28 UTC
77 points
7 comments7 min readEA link

“Long-Ter­mism” vs. “Ex­is­ten­tial Risk”

Scott Alexander6 Apr 2022 21:41 UTC
479 points
83 comments3 min readEA link

The ex­pected value of ex­tinc­tion risk re­duc­tion is positive

JanBrauner9 Dec 2018 8:00 UTC
55 points
22 comments39 min readEA link

Ex­is­ten­tial risks are not just about humanity

MichaelA28 Apr 2020 0:09 UTC
33 points
0 comments5 min readEA link

What is ex­is­ten­tial se­cu­rity?

MichaelA1 Sep 2020 9:40 UTC
31 points
1 comment6 min readEA link

Ex­is­ten­tial risk as com­mon cause

Gavin5 Dec 2018 14:01 UTC
45 points
22 comments5 min readEA link

A longter­mist cri­tique of “The ex­pected value of ex­tinc­tion risk re­duc­tion is pos­i­tive”

antimonyanthony1 Jul 2021 21:01 UTC
105 points
10 comments32 min readEA link

Nick Bostrom – Ex­is­ten­tial Risk Preven­tion as Global Priority

Zach Stein-Perlman1 Feb 2013 17:00 UTC
15 points
1 comment1 min readEA link
(www.existential-risk.org)

The Fu­ture Might Not Be So Great

Jacy30 Jun 2022 13:01 UTC
129 points
118 comments32 min readEA link
(www.sentienceinstitute.org)

Database of ex­is­ten­tial risk estimates

MichaelA15 Apr 2020 12:43 UTC
120 points
36 comments5 min readEA link

On the as­sess­ment of vol­canic erup­tions as global catas­trophic or ex­is­ten­tial risks

Mike Cassidy13 Oct 2021 14:32 UTC
107 points
17 comments19 min readEA link

Ex­is­ten­tial Risk Ob­ser­va­tory: re­sults and 2022 targets

Otto14 Jan 2022 13:52 UTC
22 points
6 comments4 min readEA link

X-risks to all life v. to humans

RobertHarling3 Jun 2020 15:40 UTC
57 points
33 comments4 min readEA link

The Im­por­tance of Un­known Ex­is­ten­tial Risks

MichaelDickens23 Jul 2020 19:09 UTC
72 points
11 comments9 min readEA link

Quan­tify­ing the prob­a­bil­ity of ex­is­ten­tial catas­tro­phe: A re­ply to Beard et al.

MichaelA10 Aug 2020 5:56 UTC
21 points
3 comments3 min readEA link
(gcrinstitute.org)

Ob­jec­tives of longter­mist policy making

Henrik Øberg Myhre10 Feb 2021 18:26 UTC
54 points
7 comments22 min readEA link

Some con­sid­er­a­tions for differ­ent ways to re­duce x-risk

Jacy4 Feb 2016 3:21 UTC
28 points
36 comments5 min readEA link

Re­duc­ing long-term risks from malev­olent actors

David_Althaus29 Apr 2020 8:55 UTC
301 points
85 comments37 min readEA link

2019 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks19 Dec 2019 2:58 UTC
147 points
28 comments64 min readEA link

Causal di­a­grams of the paths to ex­is­ten­tial catastrophe

MichaelA1 Mar 2020 14:08 UTC
48 points
14 comments12 min readEA link

Clar­ify­ing ex­is­ten­tial risks and ex­is­ten­tial catastrophes

MichaelA24 Apr 2020 13:27 UTC
28 points
3 comments7 min readEA link

Some thoughts on Toby Ord’s ex­is­ten­tial risk estimates

MichaelA7 Apr 2020 2:19 UTC
66 points
33 comments9 min readEA link

[Question] How Much Does New Re­search In­form Us About Ex­is­ten­tial Cli­mate Risk?

zdgroff22 Jul 2020 23:47 UTC
63 points
5 comments1 min readEA link

Miti­gat­ing x-risk through modularity

Toby Newberry17 Dec 2020 19:54 UTC
96 points
6 comments14 min readEA link

Sen­tience In­sti­tute 2021 End of Year Summary

Ali26 Nov 2021 14:40 UTC
65 points
5 comments6 min readEA link
(www.sentienceinstitute.org)

Effec­tive strate­gies for chang­ing pub­lic opinion: A liter­a­ture review

Jamie_Harris9 Nov 2021 14:09 UTC
80 points
2 comments37 min readEA link
(www.sentienceinstitute.org)

Why I pri­ori­tize moral cir­cle ex­pan­sion over re­duc­ing ex­tinc­tion risk through ar­tifi­cial in­tel­li­gence alignment

Jacy20 Feb 2018 18:29 UTC
96 points
72 comments36 min readEA link
(www.sentienceinstitute.org)

ALTER Is­rael—Mid-year 2022 Update

Davidmanheim12 Jun 2022 9:22 UTC
63 points
0 comments2 min readEA link

Ex­is­ten­tial risk pes­simism and the time of perils

David Thorstad12 Aug 2022 14:42 UTC
156 points
51 comments21 min readEA link

Beyond Sim­ple Ex­is­ten­tial Risk: Sur­vival in a Com­plex In­ter­con­nected World

Gideon Futerman21 Nov 2022 14:35 UTC
51 points
51 comments21 min readEA link

Diver­sity In Ex­is­ten­tial Risk Stud­ies Sur­vey: SJ Beard

Gideon Futerman25 Nov 2022 16:29 UTC
6 points
0 comments1 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #1 - Sum­mary of the Paper

Alex HT21 Jun 2020 9:22 UTC
64 points
7 comments10 min readEA link

In­for­ma­tion se­cu­rity ca­reers for GCR reduction

ClaireZabel20 Jun 2019 23:56 UTC
186 points
34 comments8 min readEA link

Eight high-level un­cer­tain­ties about global catas­trophic and ex­is­ten­tial risk

SiebeRozendal28 Nov 2019 14:47 UTC
83 points
9 comments6 min readEA link

Ex­is­ten­tial Risk and Eco­nomic Growth

leopold3 Sep 2019 13:23 UTC
116 points
31 comments1 min readEA link

Book Re­view: The Precipice

Aaron Gertler9 Apr 2020 21:21 UTC
39 points
0 comments17 min readEA link
(slatestarcodex.com)

Im­prov­ing dis­aster shelters to in­crease the chances of re­cov­ery from a global catastrophe

Nick_Beckstead19 Feb 2014 22:17 UTC
24 points
5 comments26 min readEA link

The timing of labour aimed at re­duc­ing ex­is­ten­tial risk

Toby_Ord24 Jul 2014 4:08 UTC
19 points
6 comments7 min readEA link

Cru­cial ques­tions for longtermists

MichaelA29 Jul 2020 9:39 UTC
86 points
17 comments14 min readEA link

Giv­ing Now vs. Later for Ex­is­ten­tial Risk: An Ini­tial Approach

MichaelDickens29 Aug 2020 1:04 UTC
12 points
2 comments28 min readEA link

“Dis­ap­point­ing Fu­tures” Might Be As Im­por­tant As Ex­is­ten­tial Risks

MichaelDickens3 Sep 2020 1:15 UTC
94 points
18 comments25 min readEA link

Kevin Esvelt: Miti­gat­ing catas­trophic biorisks

EA Global3 Sep 2020 18:11 UTC
31 points
0 comments22 min readEA link
(www.youtube.com)

AI Gover­nance: Op­por­tu­nity and The­ory of Impact

Allan Dafoe17 Sep 2020 6:30 UTC
219 points
17 comments12 min readEA link

Some global catas­trophic risk estimates

Tamay10 Feb 2021 19:32 UTC
105 points
14 comments1 min readEA link

My per­sonal cruxes for fo­cus­ing on ex­is­ten­tial risks /​ longter­mism /​ any­thing other than just video games

MichaelA13 Apr 2021 5:50 UTC
53 points
28 comments2 min readEA link

Draft re­port on ex­is­ten­tial risk from power-seek­ing AI

Joe_Carlsmith28 Apr 2021 21:41 UTC
81 points
33 comments1 min readEA link

Progress stud­ies vs. longter­mist EA: some differences

Max_Daniel31 May 2021 21:35 UTC
83 points
27 comments3 min readEA link

The Gover­nance Prob­lem and the “Pretty Good” X-Risk

Zach Stein-Perlman28 Aug 2021 20:00 UTC
23 points
4 comments11 min readEA link

2021 ALLFED Highlights

Ross_Tieman17 Nov 2021 15:24 UTC
45 points
1 comment16 min readEA link

Early-warn­ing Fore­cast­ing Cen­ter: What it is, and why it’d be cool

Linch14 Mar 2022 19:20 UTC
57 points
8 comments11 min readEA link

Ex­per­i­men­tal longter­mism: the­ory needs data

Jan_Kulveit15 Mar 2022 10:05 UTC
182 points
10 comments4 min readEA link

A Land­scape Anal­y­sis of In­sti­tu­tional Im­prove­ment Opportunities

IanDavidMoss21 Mar 2022 0:15 UTC
96 points
24 comments29 min readEA link

Nu­clear risk re­search ideas: Sum­mary & introduction

MichaelA8 Apr 2022 11:17 UTC
93 points
4 comments7 min readEA link

The uni­ver­sal An­thro­pocene or things we can learn from exo-civil­i­sa­tions, even if we never meet any

FJehn26 Apr 2022 12:06 UTC
11 points
0 comments8 min readEA link

Ap­ply to join SHELTER Week­end this August

Joel Becker15 Jun 2022 14:21 UTC
108 points
19 comments2 min readEA link

In­ter­ac­tively Vi­su­al­iz­ing X-Risk

Ideopunk29 Jul 2022 16:43 UTC
50 points
27 comments2 min readEA link

X-risk Miti­ga­tion Does Ac­tu­ally Re­quire Longter­mism

𝕮𝖎𝖓𝖊𝖗𝖆13 Nov 2022 19:40 UTC
32 points
6 comments1 min readEA link

Long-Term Fu­ture Fund: April 2019 grant recommendations

Habryka23 Apr 2019 7:00 UTC
142 points
242 comments47 min readEA link

Which World Gets Saved

trammell9 Nov 2018 18:08 UTC
130 points
27 comments3 min readEA link

Will the Treaty on the Pro­hi­bi­tion of Nu­clear Weapons af­fect nu­clear de­pro­lifer­a­tion through le­gal chan­nels?

Luisa_Rodriguez6 Dec 2019 10:38 UTC
100 points
5 comments30 min readEA link

Which nu­clear wars should worry us most?

Luisa_Rodriguez16 Jun 2019 23:31 UTC
96 points
12 comments5 min readEA link

How bad would nu­clear win­ter caused by a US-Rus­sia nu­clear ex­change be?

Luisa_Rodriguez20 Jun 2019 1:48 UTC
126 points
16 comments40 min readEA link

How many peo­ple would be kil­led as a di­rect re­sult of a US-Rus­sia nu­clear ex­change?

Luisa_Rodriguez30 Jun 2019 3:00 UTC
94 points
17 comments43 min readEA link

Long-Term Fu­ture Fund: Au­gust 2019 grant recommendations

Habryka3 Oct 2019 18:46 UTC
79 points
70 comments64 min readEA link

Would US and Rus­sian nu­clear forces sur­vive a first strike?

Luisa_Rodriguez18 Jun 2019 0:28 UTC
84 points
4 comments19 min readEA link

Bioinfohazards

Fin17 Sep 2019 2:41 UTC
85 points
10 comments18 min readEA link

Key points from The Dead Hand, David E. Hoffman

Kit9 Aug 2019 13:59 UTC
71 points
8 comments8 min readEA link

Tech­ni­cal AGI safety re­search out­side AI

richard_ngo18 Oct 2019 15:02 UTC
86 points
5 comments4 min readEA link

Long-Term Fu­ture Fund AMA

Helen19 Dec 2018 4:10 UTC
39 points
30 comments1 min readEA link

AMA: Toby Ord, au­thor of “The Precipice” and co-founder of the EA movement

Toby_Ord17 Mar 2020 2:39 UTC
68 points
82 comments1 min readEA link

Crit­i­cal Re­view of ‘The Precipice’: A Re­assess­ment of the Risks of AI and Pandemics

Fods1211 May 2020 11:11 UTC
91 points
32 comments26 min readEA link

[Question] Pro­jects tack­ling nu­clear risk?

Sanjay29 May 2020 22:41 UTC
29 points
4 comments1 min readEA link

21 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Sep 2019 up­date)

HaydnBelfield5 Nov 2019 14:26 UTC
31 points
4 comments13 min readEA link

Bot­tle­necks and Solu­tions for the X-Risk Ecosystem

FlorentBerthet8 Oct 2018 12:47 UTC
53 points
14 comments8 min readEA link

[Question] Is some kind of min­i­mally-in­va­sive mass surveillance re­quired for catas­trophic risk pre­ven­tion?

Chris Leong1 Jul 2020 23:32 UTC
26 points
7 comments1 min readEA link

‘The Precipice’ Book Review

Matt Goodman27 Jul 2020 22:10 UTC
20 points
1 comment4 min readEA link

A New X-Risk Fac­tor: Brain-Com­puter Interfaces

Jack10 Aug 2020 10:24 UTC
68 points
12 comments42 min readEA link

An­i­mal Rights, The Sin­gu­lar­ity, and Astro­nom­i­cal Suffering

sapphire20 Aug 2020 20:23 UTC
49 points
0 comments3 min readEA link

Fore­cast­ing Thread: Ex­is­ten­tial Risk

amandango22 Sep 2020 20:51 UTC
24 points
4 comments2 min readEA link
(www.lesswrong.com)

The end of the Bronze Age as an ex­am­ple of a sud­den col­lapse of civilization

FJehn28 Oct 2020 12:55 UTC
46 points
7 comments8 min readEA link

Nu­clear war is un­likely to cause hu­man extinction

Jeffrey Ladish7 Nov 2020 5:39 UTC
45 points
24 comments11 min readEA link

ALLFED 2020 Highlights

AronM19 Nov 2020 22:06 UTC
50 points
5 comments27 min readEA link

Del­e­gated agents in prac­tice: How com­pa­nies might end up sel­l­ing AI ser­vices that act on be­half of con­sumers and coal­i­tions, and what this im­plies for safety research

Remmelt26 Nov 2020 16:39 UTC
11 points
0 comments4 min readEA link

An­nounc­ing AXRP, the AI X-risk Re­search Podcast

DanielFilan23 Dec 2020 20:10 UTC
32 points
1 comment1 min readEA link

What is the like­li­hood that civ­i­liza­tional col­lapse would di­rectly lead to hu­man ex­tinc­tion (within decades)?

Luisa_Rodriguez24 Dec 2020 22:10 UTC
279 points
37 comments50 min readEA link

Assess­ing Cli­mate Change’s Con­tri­bu­tion to Global Catas­trophic Risk

HaydnBelfield19 Feb 2021 16:26 UTC
26 points
8 comments38 min readEA link

A Biose­cu­rity and Biorisk Read­ing+ List

Tessa14 Mar 2021 2:30 UTC
119 points
13 comments12 min readEA link

[Question] What do you make of the dooms­day ar­gu­ment?

niklas19 Mar 2021 6:30 UTC
12 points
8 comments1 min readEA link

In­tro­duc­ing The Non­lin­ear Fund: AI Safety re­search, in­cu­ba­tion, and funding

Kat Woods18 Mar 2021 14:07 UTC
71 points
32 comments5 min readEA link

The Epistemic Challenge to Longter­mism (Tarsney, 2020)

MichaelA4 Apr 2021 3:09 UTC
75 points
28 comments1 min readEA link
(globalprioritiesinstitute.org)

‘Are We Doomed?’ Memos

Miranda_Zhang19 May 2021 13:51 UTC
27 points
0 comments16 min readEA link

Help me find the crux be­tween EA/​XR and Progress Studies

jasoncrawford2 Jun 2021 18:47 UTC
112 points
37 comments3 min readEA link

[Question] What would you ask a poli­cy­maker about ex­is­ten­tial risks?

James Nicholas Bryant6 Jul 2021 23:53 UTC
24 points
2 comments1 min readEA link

Tom Moynihan on why prior gen­er­a­tions missed some of the biggest pri­ori­ties of all

80000_Hours29 Jul 2021 16:38 UTC
20 points
0 comments158 min readEA link

Nick Bostrom: An In­tro­duc­tion [early draft]

peterhartree31 Jul 2021 17:04 UTC
38 points
0 comments19 min readEA link

Col­lec­tive in­tel­li­gence as in­fras­truc­ture for re­duc­ing broad ex­is­ten­tial risks

vickyCYang2 Aug 2021 6:00 UTC
28 points
6 comments11 min readEA link

Op­ti­mal Allo­ca­tion of Spend­ing on Ex­is­ten­tial Risk Re­duc­tion over an In­finite Time Hori­zon (in a too sim­plis­tic model)

Yassin Alaya12 Aug 2021 20:14 UTC
13 points
4 comments1 min readEA link

Am­bi­guity aver­sion and re­duc­tion of X-risks: A mod­el­ling situation

Benedikt Schmidt13 Sep 2021 7:16 UTC
29 points
6 comments6 min readEA link

Great Power Conflict

Zach Stein-Perlman15 Sep 2021 15:00 UTC
11 points
7 comments4 min readEA link

Ma­jor UN re­port dis­cusses ex­is­ten­tial risk and fu­ture gen­er­a­tions (sum­mary)

finm17 Sep 2021 15:51 UTC
311 points
5 comments12 min readEA link

Guard­ing Against Pandemics

Guarding Against Pandemics18 Sep 2021 11:15 UTC
72 points
17 comments4 min readEA link

[Link post] How plau­si­ble are AI Takeover sce­nar­ios?

SammyDMartin27 Sep 2021 13:03 UTC
26 points
0 comments1 min readEA link

Good news on cli­mate change

John G. Halstead28 Oct 2021 14:04 UTC
224 points
35 comments12 min readEA link

Bounty to dis­close new x-risks

acylhalide5 Nov 2021 12:53 UTC
1 point
6 comments4 min readEA link

AI Safety Needs Great Engineers

Andy Jones23 Nov 2021 21:03 UTC
92 points
13 comments4 min readEA link

Not all x-risk is the same: im­pli­ca­tions of non-hu­man-descendants

Nikola18 Dec 2021 21:22 UTC
34 points
3 comments5 min readEA link

Democratis­ing Risk—or how EA deals with critics

CarlaZoeC28 Dec 2021 15:05 UTC
245 points
317 comments4 min readEA link

Sim­plify EA Pitches to “Holy Shit, X-Risk”

Neel Nanda11 Feb 2022 1:57 UTC
177 points
80 comments10 min readEA link
(www.neelnanda.io)

AI Risk is like Ter­mi­na­tor; Stop Say­ing it’s Not

skluug8 Mar 2022 19:17 UTC
174 points
43 comments10 min readEA link
(skluug.substack.com)

How the Ukraine con­flict may in­fluence spend­ing on longter­mist pro­jects

Frank_R16 Mar 2022 8:15 UTC
23 points
3 comments2 min readEA link

Video and Tran­script of Pre­sen­ta­tion on Ex­is­ten­tial Risk from Power-Seek­ing AI

Joe_Carlsmith8 May 2022 3:52 UTC
83 points
7 comments30 min readEA link

Are you re­ally in a race? The Cau­tion­ary Tales of Szilárd and Ellsberg

HaydnBelfield19 May 2022 8:42 UTC
405 points
37 comments18 min readEA link

The value of x-risk re­duc­tion

Nathan_Barnard21 May 2022 19:40 UTC
19 points
10 comments4 min readEA link

We should ex­pect to worry more about spec­u­la­tive risks

Ben Garfinkel29 May 2022 21:08 UTC
119 points
15 comments3 min readEA link

New US Se­nate Bill on X-Risk Miti­ga­tion [Linkpost]

Evan R. Murphy4 Jul 2022 1:28 UTC
29 points
12 comments1 min readEA link
(www.hsgac.senate.gov)

Ques­tion­ing the Value of Ex­tinc­tion Risk Reduction

Red Team 87 Jul 2022 4:44 UTC
55 points
9 comments27 min readEA link

En­light­en­ment Values in a Vuln­er­a­ble World

Maxwell Tabarrok18 Jul 2022 11:54 UTC
56 points
17 comments31 min readEA link

Why poli­cy­mak­ers should be­ware claims of new “arms races” (Bul­letin of the Atomic Scien­tists)

christian.r14 Jul 2022 13:38 UTC
55 points
1 comment1 min readEA link
(thebulletin.org)

Most* small prob­a­bil­ities aren’t pas­calian

Gregory Lewis7 Aug 2022 16:17 UTC
200 points
20 comments6 min readEA link

Risks from atom­i­cally pre­cise man­u­fac­tur­ing—Prob­lem profile

Benjamin Hilton9 Aug 2022 13:41 UTC
47 points
4 comments5 min readEA link
(80000hours.org)

A pseudo math­e­mat­i­cal for­mu­la­tion of di­rect work choice be­tween two x-risks

Joseph Bloom11 Aug 2022 0:28 UTC
7 points
0 comments4 min readEA link

Nu­clear Fine-Tun­ing: How Many Wor­lds Have Been De­stroyed?

Ember17 Aug 2022 13:13 UTC
16 points
28 comments23 min readEA link

In­tro­duc­ing the Ex­is­ten­tial Risks In­tro­duc­tory Course (ERIC)

Nandini Shiralkar19 Aug 2022 15:57 UTC
57 points
14 comments7 min readEA link

Cli­mate Change & Longter­mism: new book-length report

John G. Halstead26 Aug 2022 9:13 UTC
292 points
159 comments13 min readEA link

EA is too fo­cused on the Man­hat­tan Project

trevor15 Sep 2022 2:00 UTC
14 points
0 comments1 min readEA link

9/​26 is Petrov Day

Lizka25 Sep 2022 23:14 UTC
62 points
10 comments2 min readEA link
(www.lesswrong.com)

Re­view: What We Owe The Future

Kelsey Piper21 Nov 2022 21:41 UTC
163 points
3 comments1 min readEA link
(asteriskmag.com)

Does cli­mate change de­serve more at­ten­tion within EA?

Ben17 Apr 2019 6:50 UTC
135 points
67 comments15 min readEA link

Con­cern­ing the Re­cent 2019-Novel Coron­avirus Outbreak

Matthew_Barnett27 Jan 2020 5:47 UTC
132 points
142 comments3 min readEA link

Age-Weighted Voting

William_MacAskill12 Jul 2019 15:21 UTC
65 points
39 comments6 min readEA link

Launch­ing the EAF Fund

stefan.torges28 Nov 2018 17:13 UTC
60 points
14 comments4 min readEA link

Cor­po­rate Global Catas­trophic Risks (C-GCRs)

Hauke Hillebrandt30 Jun 2019 16:53 UTC
64 points
17 comments12 min readEA link

How x-risk pro­jects are differ­ent from startups

Jan_Kulveit5 Apr 2019 7:35 UTC
67 points
9 comments1 min readEA link

Why mak­ing as­ter­oid deflec­tion tech might be bad

MichaelDello20 May 2020 23:01 UTC
27 points
10 comments6 min readEA link

Sur­viv­ing Global Catas­tro­phe in Nu­clear Sub­marines as Refuges

turchin5 Apr 2017 8:06 UTC
14 points
5 comments1 min readEA link

Defin­ing Meta Ex­is­ten­tial Risk

rhys_lindmark9 Jul 2019 18:16 UTC
12 points
3 comments4 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port Oc­to­ber 2019 - Jan­uary 2020

HaydnBelfield8 Apr 2020 13:28 UTC
8 points
0 comments17 min readEA link

19 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan, Feb & Mar 2020 up­date)

HaydnBelfield8 Apr 2020 13:19 UTC
13 points
0 comments13 min readEA link

16 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Nov & Dec 2019 up­date)

HaydnBelfield15 Jan 2020 12:07 UTC
21 points
0 comments9 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port April—Septem­ber 2019

HaydnBelfield30 Sep 2019 19:20 UTC
14 points
1 comment15 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port: Novem­ber 2018 - April 2019

HaydnBelfield1 May 2019 15:34 UTC
10 points
16 comments15 min readEA link

CSER Spe­cial Is­sue: ‘Fu­tures of Re­search in Catas­trophic and Ex­is­ten­tial Risk’

HaydnBelfield2 Oct 2018 17:18 UTC
9 points
1 comment1 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk: Six Month Re­port May-Oc­to­ber 2018

HaydnBelfield30 Nov 2018 20:32 UTC
26 points
2 comments17 min readEA link

Cause Pri­ori­ti­za­tion in Light of In­spira­tional Disasters

stecas7 Jun 2020 19:52 UTC
2 points
15 comments3 min readEA link

ALLFED 2019 An­nual Re­port and Fundrais­ing Appeal

AronM23 Nov 2019 2:05 UTC
38 points
12 comments22 min readEA link

Differ­en­tial tech­nolog­i­cal de­vel­op­ment

james25 Jun 2020 10:54 UTC
31 points
8 comments5 min readEA link

Civ­i­liza­tion Re-Emerg­ing After a Catas­trophic Collapse

MichaelA27 Jun 2020 3:22 UTC
32 points
18 comments2 min readEA link
(www.youtube.com)

Prevent­ing hu­man extinction

Peter Singer19 Aug 2013 21:07 UTC
18 points
8 comments5 min readEA link

FLI AI Align­ment pod­cast: Evan Hub­inger on In­ner Align­ment, Outer Align­ment, and Pro­pos­als for Build­ing Safe Ad­vanced AI

evhub1 Jul 2020 20:59 UTC
13 points
2 comments1 min readEA link
(futureoflife.org)

[Question] Are there su­perfore­casts for ex­is­ten­tial risk?

Alex HT7 Jul 2020 7:39 UTC
24 points
13 comments1 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #2 - A Crit­i­cal Look at Model Conclusions

Ben Snodin18 Aug 2020 10:25 UTC
58 points
9 comments17 min readEA link

Carl Ro­bichaud: Fac­ing the risk of nu­clear war in the 21st century

EA Global15 Jul 2020 17:17 UTC
13 points
0 comments11 min readEA link
(www.youtube.com)

A list of good heuris­tics that the case for AI X-risk fails

Aaron Gertler16 Jul 2020 9:56 UTC
23 points
9 comments2 min readEA link
(www.alignmentforum.org)

Mike Hue­mer on The Case for Tyranny

Chris Leong16 Jul 2020 9:57 UTC
24 points
5 comments1 min readEA link
(fakenous.net)

Im­prov­ing the fu­ture by in­fluenc­ing ac­tors’ benev­olence, in­tel­li­gence, and power

MichaelA20 Jul 2020 10:00 UTC
73 points
15 comments17 min readEA link

Up­date on civ­i­liza­tional col­lapse research

Jeffrey Ladish10 Feb 2020 23:40 UTC
54 points
7 comments3 min readEA link

Toby Ord: Fireside Chat and Q&A

EA Global21 Jul 2020 16:23 UTC
13 points
0 comments25 min readEA link
(www.youtube.com)

Bon­nie Jenk­ins: Fireside chat

EA Global22 Jul 2020 15:59 UTC
17 points
0 comments24 min readEA link
(www.youtube.com)

In­tel­lec­tual Diver­sity in AI Safety

KR22 Jul 2020 19:07 UTC
21 points
8 comments3 min readEA link

Scru­ti­niz­ing AI Risk (80K, #81) - v. quick summary

Ben23 Jul 2020 19:02 UTC
10 points
1 comment3 min readEA link

Com­mon ground for longtermists

Tobias_Baumann29 Jul 2020 10:26 UTC
75 points
8 comments4 min readEA link

A pro­posed ad­just­ment to the as­tro­nom­i­cal waste argument

Nick_Beckstead27 May 2013 4:00 UTC
43 points
1 comment12 min readEA link

Con­ver­sa­tion with Holden Karnofsky, Nick Beck­stead, and Eliezer Yud­kowsky on the “long-run” per­spec­tive on effec­tive altruism

Nick_Beckstead18 Aug 2014 4:30 UTC
4 points
7 comments6 min readEA link

EA read­ing list: longter­mism and ex­is­ten­tial risks

richard_ngo3 Aug 2020 9:52 UTC
35 points
3 comments1 min readEA link

Ex­tinc­tion risk re­duc­tion and moral cir­cle ex­pan­sion: Spec­u­lat­ing sus­pi­cious convergence

MichaelA4 Aug 2020 11:38 UTC
12 points
4 comments6 min readEA link

Ad­dress­ing Global Poverty as a Strat­egy to Im­prove the Long-Term Future

bshumway7 Aug 2020 6:27 UTC
40 points
18 comments16 min readEA link

On Col­lapse Risk (C-Risk)

Pawntoe42 Jan 2020 5:10 UTC
36 points
10 comments8 min readEA link

My cur­rent thoughts on MIRI’s “highly re­li­able agent de­sign” work

Daniel_Dewey7 Jul 2017 1:17 UTC
51 points
65 comments19 min readEA link

Cost-Effec­tive­ness of Foods for Global Catas­tro­phes: Even Bet­ter than Be­fore?

Denkenberger19 Nov 2018 21:57 UTC
25 points
4 comments10 min readEA link

Should we be spend­ing no less on al­ter­nate foods than AI now?

Denkenberger29 Oct 2017 23:28 UTC
38 points
9 comments16 min readEA link

[Paper] In­ter­ven­tions that May Prevent or Mol­lify Su­per­vol­canic Eruptions

Denkenberger15 Jan 2018 21:46 UTC
23 points
8 comments1 min readEA link

APPG on Fu­ture Gen­er­a­tions im­pact re­port – Rais­ing the pro­file of fu­ture gen­er­a­tion in the UK Parliament

weeatquince12 Aug 2020 14:24 UTC
87 points
2 comments17 min readEA link

Should We Pri­ori­tize Long-Term Ex­is­ten­tial Risk?

MichaelDickens20 Aug 2020 2:23 UTC
28 points
17 comments3 min readEA link

We’re (sur­pris­ingly) more pos­i­tive about tack­ling bio risks: out­comes of a survey

Sanjay25 Aug 2020 9:14 UTC
58 points
5 comments11 min readEA link

Risks from Atom­i­cally Pre­cise Manufacturing

MichaelA25 Aug 2020 9:53 UTC
29 points
4 comments2 min readEA link
(www.openphilanthropy.org)

A case for strat­egy re­search: what it is and why we need more of it

SiebeRozendal20 Jun 2019 20:18 UTC
64 points
8 comments20 min readEA link

A (Very) Short His­tory of the Col­lapse of Civ­i­liza­tions, and Why it Matters

Davidmanheim30 Aug 2020 7:49 UTC
51 points
16 comments3 min readEA link

3 sug­ges­tions about jar­gon in EA

MichaelA5 Jul 2020 3:37 UTC
130 points
19 comments5 min readEA link

AMA: To­bias Bau­mann, Cen­ter for Re­duc­ing Suffering

Tobias_Baumann6 Sep 2020 10:45 UTC
48 points
45 comments1 min readEA link

Model­ling the odds of re­cov­ery from civ­i­liza­tional collapse

MichaelA17 Sep 2020 11:58 UTC
39 points
8 comments2 min readEA link

Hiring en­g­ineers and re­searchers to help al­ign GPT-3

Paul_Christiano1 Oct 2020 18:52 UTC
107 points
19 comments3 min readEA link

Int’l agree­ments to spend % of GDP on global pub­lic goods

Hauke Hillebrandt22 Nov 2020 10:33 UTC
18 points
1 comment1 min readEA link

Should marginal longter­mist dona­tions sup­port fun­da­men­tal or in­ter­ven­tion re­search?

MichaelA30 Nov 2020 1:10 UTC
43 points
4 comments15 min readEA link

[Question] What is the im­pact of the Nu­clear Ban Treaty?

DonyChristie29 Nov 2020 0:26 UTC
22 points
3 comments2 min readEA link

The per­son-af­fect­ing value of ex­is­ten­tial risk reduction

Gregory Lewis13 Apr 2018 1:44 UTC
59 points
35 comments4 min readEA link

Some AI re­search ar­eas and their rele­vance to ex­is­ten­tial safety

Andrew Critch15 Dec 2020 12:15 UTC
11 points
0 comments56 min readEA link
(alignmentforum.org)

[Question] What are the best ar­ti­cles/​blogs on the psy­chol­ogy of ex­is­ten­tial risk?

Geoffrey Miller16 Dec 2020 18:05 UTC
24 points
7 comments1 min readEA link

2020 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks21 Dec 2020 15:25 UTC
150 points
16 comments70 min readEA link

In­ter­na­tional Co­op­er­a­tion Against Ex­is­ten­tial Risks: In­sights from In­ter­na­tional Re­la­tions Theory

Jenny_Xiao11 Jan 2021 7:10 UTC
40 points
7 comments6 min readEA link

Global Pri­ori­ties In­sti­tute: Re­search Agenda

Aaron Gertler20 Jan 2021 20:09 UTC
21 points
0 comments1 min readEA link
(globalprioritiesinstitute.org)

Some EA Fo­rum Posts I’d like to write

Linch23 Feb 2021 5:27 UTC
98 points
10 comments5 min readEA link

In­ter­ven­tion Pro­file: Bal­lot Initiatives

Jason Schukraft13 Jan 2020 15:41 UTC
116 points
5 comments36 min readEA link

Rus­sian x-risks newslet­ter, sum­mer 2019

avturchin7 Sep 2019 9:55 UTC
23 points
1 comment4 min readEA link

Rus­sian x-risks newslet­ter win­ter 2019-2020

avturchin1 Mar 2020 12:51 UTC
10 points
4 comments2 min readEA link

Rus­sian x-risks newslet­ter, fall 2019

avturchin3 Dec 2019 17:01 UTC
27 points
2 comments3 min readEA link

How likely is a nu­clear ex­change be­tween the US and Rus­sia?

Luisa_Rodriguez20 Jun 2019 1:49 UTC
69 points
12 comments13 min readEA link

[Notes] Steven Pinker and Yu­val Noah Harari in conversation

Ben9 Feb 2020 12:49 UTC
29 points
2 comments7 min readEA link

Pres­i­dent Trump as a Global Catas­trophic Risk

HaydnBelfield18 Nov 2016 18:02 UTC
22 points
17 comments27 min readEA link

[Question] What ac­tions would ob­vi­ously de­crease x-risk?

Eli Rose6 Oct 2019 21:00 UTC
22 points
28 comments1 min readEA link

Jaan Tallinn: Fireside chat (2018)

EA Global · 8 Jun 2018 7:15 UTC
8 points
0 comments · 13 min read · EA link
(www.youtube.com)

Seth Baum: Reconciling international security

EA Global · 8 Jun 2018 7:15 UTC
8 points
0 comments · 16 min read · EA link
(www.youtube.com)

Amesh Adalja: Pandemic pathogens

EA Global · 8 Jun 2018 7:15 UTC
9 points
1 comment · 21 min read · EA link
(www.youtube.com)

Assessing global catastrophic biological risks (Crystal Watson)

EA Global · 8 Jun 2018 7:15 UTC
8 points
0 comments · 10 min read · EA link
(www.youtube.com)

Toby Ord: Q&A (2020)

EA Global · 13 Jun 2020 8:17 UTC
8 points
0 comments · 1 min read · EA link
(www.youtube.com)

Luisa Rodriguez: The likelihood and severity of a US-Russia nuclear exchange

EA Global · 18 Oct 2019 18:05 UTC
10 points
0 comments · 1 min read · EA link
(www.youtube.com)

Existential risk and the future of humanity (Toby Ord)

EA Global · 21 Mar 2020 18:05 UTC
9 points
1 comment · 14 min read · EA link
(www.youtube.com)

Notes on “Bioterror and Biowarfare” (2006)

MichaelA · 1 Mar 2021 9:42 UTC
24 points
6 comments · 4 min read · EA link

Interview Thomas Moynihan: “The discovery of extinction is a philosophical centrepiece of the modern age”

felix.h · 6 Mar 2021 11:51 UTC
14 points
0 comments · 18 min read · EA link

Possible misconceptions about (strong) longtermism

Jack Malde · 9 Mar 2021 17:58 UTC
90 points
45 comments · 19 min read · EA link

Jenny Xiao: Dual moral obligations and international cooperation against global catastrophic risks

EA Global · 21 Nov 2020 8:12 UTC
9 points
0 comments · 1 min read · EA link
(www.youtube.com)

Jaan Tallinn: Fireside chat (2020)

EA Global · 21 Nov 2020 8:12 UTC
6 points
0 comments · 1 min read · EA link
(www.youtube.com)

Nick Beckstead: Fireside chat (2020)

EA Global · 21 Nov 2020 8:12 UTC
6 points
0 comments · 1 min read · EA link
(www.youtube.com)

International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide

philosophytorres · 22 Mar 2021 12:19 UTC
−3 points
1 comment · 1 min read · EA link

Andrew Snyder Beattie: Biotechnology and existential risk

EA Global · 3 Nov 2017 7:43 UTC
10 points
0 comments · 1 min read · EA link
(www.youtube.com)

Marc Lipsitch: Preventing catastrophic risks by mitigating subcatastrophic ones

EA Global · 2 Jun 2017 8:48 UTC
8 points
0 comments · 1 min read · EA link
(www.youtube.com)

George Church, Kevin Esvelt, & Nathan Labenz: Open until dangerous — gene drive and the case for reforming research

EA Global · 2 Jun 2017 8:48 UTC
8 points
0 comments · 1 min read · EA link
(www.youtube.com)

Max Tegmark: Effective altruism, existential risk, and existential hope

EA Global · 2 Jun 2017 8:48 UTC
10 points
0 comments · 1 min read · EA link
(www.youtube.com)

Why s-risks are the worst existential risks, and how to prevent them

Max_Daniel · 2 Jun 2017 8:48 UTC
8 points
1 comment · 1 min read · EA link
(www.youtube.com)

The Case for Strong Longtermism

Global Priorities Institute · 3 Sep 2019 1:17 UTC
14 points
1 comment · 3 min read · EA link
(globalprioritiesinstitute.org)

Introducing the Simon Institute for Longterm Governance (SI)

maxime · 29 Mar 2021 18:10 UTC
116 points
23 comments · 11 min read · EA link

New Cause Area: Programmatic Mettā

Milan_Griffes · 1 Apr 2021 12:54 UTC
6 points
4 comments · 2 min read · EA link

Case studies of self-governance to reduce technology risk

jia · 6 Apr 2021 8:49 UTC
50 points
6 comments · 7 min read · EA link

AGI risk: analogies & arguments

Gavin · 23 Mar 2021 13:18 UTC
31 points
3 comments · 8 min read · EA link
(www.gleech.org)

[Link post] Coordination challenges for preventing AI conflict

stefan.torges · 9 Mar 2021 9:39 UTC
48 points
0 comments · 1 min read · EA link
(longtermrisk.org)

What Questions Should We Ask Speakers at the Stanford Existential Risks Conference?

kuhanj · 10 Apr 2021 0:51 UTC
21 points
2 comments · 1 min read · EA link

Talking With a Biosecurity Professional (Quick Notes)

AllAmericanBreakfast · 10 Apr 2021 4:23 UTC
39 points
0 comments · 2 min read · EA link

[Question] Is there evidence that recommender systems are changing users’ preferences?

zdgroff · 12 Apr 2021 19:11 UTC
60 points
15 comments · 1 min read · EA link

Why I expect successful (narrow) alignment

Tobias_Baumann · 29 Dec 2018 15:46 UTC
18 points
10 comments · 1 min read · EA link
(s-risks.org)

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
26 points
1 comment · 1 min read · EA link
(s-risks.org)

New infographic based on “The Precipice”. any feedback?

michael.andregg · 14 Jan 2021 7:29 UTC
50 points
4 comments · 1 min read · EA link

Moral pluralism and longtermism | Sunyshore

BrownHairedEevee · 17 Apr 2021 0:14 UTC
26 points
0 comments · 6 min read · EA link
(sunyshore.substack.com)

On future people, looking back at 21st century longtermism

Joe_Carlsmith · 22 Mar 2021 8:21 UTC
101 points
13 comments · 12 min read · EA link

EAGxVirtual 2020 lightning talks

EA Global · 25 Jan 2021 15:32 UTC
13 points
1 comment · 33 min read · EA link
(www.youtube.com)

Comparative Bias

Joey · 5 Nov 2014 5:57 UTC
7 points
5 comments · 1 min read · EA link

Existential Risk: More to explore

EA Handbook · 1 Jan 2021 10:15 UTC
2 points
0 comments · 1 min read · EA link

Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill)

MichaelA · 2 May 2021 18:00 UTC
30 points
21 comments · 2 min read · EA link

Thoughts on “A case against strong longtermism” (Masrani)

MichaelA · 3 May 2021 14:22 UTC
39 points
33 comments · 2 min read · EA link

GCRI Open Call for Advisees and Collaborators

McKenna_Fitzgerald · 20 May 2021 22:07 UTC
13 points
0 comments · 4 min read · EA link

[Question] MSc in Risk and Disaster Science? (UCL) - Does this fit the EA path?

yazanasad · 25 May 2021 3:33 UTC
10 points
6 comments · 1 min read · EA link

Long-Term Future Fund: May 2021 grant recommendations

abergal · 27 May 2021 6:44 UTC
110 points
17 comments · 58 min read · EA link

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

MichaelA · 1 Jun 2021 8:19 UTC
51 points
3 comments · 4 min read · EA link
(www.nscai.gov)

Astronomical Waste: The Opportunity Cost of Delayed Technological Development—Nick Bostrom (2003)

james · 10 Jun 2021 21:21 UTC
10 points
0 comments · 8 min read · EA link
(www.nickbostrom.com)

Announcing the Nuclear Risk Forecasting Tournament

MichaelA · 16 Jun 2021 16:12 UTC
38 points
0 comments · 2 min read · EA link

[Podcast] Tom Moynihan on why prior generations missed some of the biggest priorities of all

BrownHairedEevee · 25 Jun 2021 15:39 UTC
12 points
0 comments · 1 min read · EA link
(80000hours.org)

Hauke Hillebrandt: International agreements to spend percentage of GDP on global public goods

EA Global · 21 Nov 2020 8:12 UTC
8 points
0 comments · 1 min read · EA link
(www.youtube.com)

Shelly Kagan—readings for Ethics and the Future seminar (spring 2021)

james · 29 Jun 2021 9:59 UTC
91 points
7 comments · 5 min read · EA link
(docs.google.com)

[Question] Is an increase in attention to the idea that ‘suffering is bad’ likely to increase existential risk?

dotsam · 30 Jun 2021 19:41 UTC
2 points
6 comments · 1 min read · EA link

[Future Perfect] How to be a good ancestor

Pablo · 2 Jul 2021 13:17 UTC
41 points
3 comments · 2 min read · EA link
(www.vox.com)

A Simple Model of AGI Deployment Risk

djbinder · 9 Jul 2021 9:44 UTC
16 points
0 comments · 5 min read · EA link

World federalism and EA

BrownHairedEevee · 14 Jul 2021 5:53 UTC
45 points
4 comments · 1 min read · EA link

Seeking EA experts interested in the evolutionary psychology of existential risks

Geoffrey Miller · 23 Oct 2019 18:19 UTC
22 points
1 comment · 1 min read · EA link

AMA: The new Open Philanthropy Technology Policy Fellowship

lukeprog · 26 Jul 2021 15:11 UTC
38 points
16 comments · 1 min read · EA link

Apply to the new Open Philanthropy Technology Policy Fellowship!

lukeprog · 20 Jul 2021 18:41 UTC
78 points
7 comments · 4 min read · EA link

Towards a longtermist framework for evaluating democracy-related interventions

Tom Barnes · 28 Jul 2021 13:23 UTC
94 points
5 comments · 33 min read · EA link

Matt Levine on the Archegos failure

Kelsey Piper · 29 Jul 2021 19:36 UTC
135 points
5 comments · 4 min read · EA link

Introducing the Existential Risk Observatory

Otto · 12 Aug 2021 15:51 UTC
35 points
0 comments · 5 min read · EA link

Catastrophic rectangles—visualising catastrophic risks

Rémi T · 22 Aug 2021 21:27 UTC
33 points
3 comments · 4 min read · EA link

Teruji Thomas, ‘The Asymmetry, Uncertainty, and the Long Term’

Pablo · 5 Nov 2019 20:24 UTC
43 points
6 comments · 1 min read · EA link
(globalprioritiesinstitute.org)

Why I am probably not a longtermist

Denise_Melchin · 23 Sep 2021 17:24 UTC
185 points
48 comments · 8 min read · EA link

The catastrophic primacy of reactivity over proactivity in governmental risk assessment: brief UK case study

JuanGarcia · 27 Sep 2021 15:53 UTC
54 points
0 comments · 5 min read · EA link

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · 3 Oct 2021 16:55 UTC
39 points
18 comments · 1 min read · EA link

[Question] Help me understand this expected value calculation

AndreaSR · 14 Oct 2021 6:23 UTC
15 points
8 comments · 1 min read · EA link

“Nuclear risk research, forecasting, & impact” [presentation]

MichaelA · 21 Oct 2021 10:54 UTC
13 points
0 comments · 1 min read · EA link
(www.youtube.com)

[Question] What’s the GiveDirectly of longtermism & existential risk?

Nathan Young · 15 Nov 2021 23:55 UTC
28 points
25 comments · 1 min read · EA link

Competition for “Fortified Essays” on nuclear risk

MichaelA · 17 Nov 2021 20:55 UTC
33 points
0 comments · 3 min read · EA link
(www.metaculus.com)

[Linkpost] Don’t Look Up—a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th)

Linch · 18 Nov 2021 21:40 UTC
63 points
16 comments · 1 min read · EA link
(www.youtube.com)

Database of orgs relevant to longtermist/x-risk work

MichaelA · 19 Nov 2021 8:50 UTC
94 points
44 comments · 4 min read · EA link

Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

SethBaum · 16 Nov 2021 20:51 UTC
58 points
5 comments · 15 min read · EA link

Vitalik: Cryptoeconomics and X-Risk Researchers Should Listen to Each Other More

Emerson Spartz · 21 Nov 2021 18:50 UTC
55 points
3 comments · 5 min read · EA link

[Question] How would you define “existential risk?”

Linch · 29 Nov 2021 5:17 UTC
12 points
5 comments · 1 min read · EA link

Mortality, existential risk, and universal basic income

Max Ghenis · 30 Nov 2021 8:28 UTC
12 points
5 comments · 22 min read · EA link

Response to Recent Criticisms of Longtermism

ab · 13 Dec 2021 13:36 UTC
243 points
32 comments · 28 min read · EA link

Countermeasures & substitution effects in biosecurity

ASB · 16 Dec 2021 21:40 UTC
80 points
6 comments · 3 min read · EA link

Is Bitcoin Dangerous?

postlibertarian · 19 Dec 2021 19:35 UTC
14 points
7 comments · 9 min read · EA link

Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism

Evan_Gaensbauer · 29 Dec 2021 20:20 UTC
45 points
1 comment · 2 min read · EA link

The Precipice: Introduction and Chapter One

Toby_Ord · 2 Jan 2021 7:13 UTC
20 points
0 comments · 1 min read · EA link

“Don’t Look Up” and the cinema of existential risk | Slow Boring

BrownHairedEevee · 5 Jan 2022 4:28 UTC
23 points
0 comments · 1 min read · EA link
(www.slowboring.com)

[linkpost] Peter Singer: The Hinge of History

mic · 16 Jan 2022 1:25 UTC
38 points
9 comments · 3 min read · EA link

[Question] What would you say gives you a feeling of existential hope, and what can we do to inspire more of it?

elteerkers · 26 Jan 2022 13:46 UTC
18 points
4 comments · 1 min read · EA link

Notes on “The Politics of Crisis Management” (Boin et al., 2016)

Darius_M · 30 Jan 2022 22:51 UTC
29 points
1 comment · 18 min read · EA link

Modelling Great Power conflict as an existential risk factor

Stephen Clare · 3 Feb 2022 11:41 UTC
120 points
26 comments · 19 min read · EA link

Splitting the timeline as an extinction risk intervention

NunoSempere · 6 Feb 2022 19:59 UTC
14 points
27 comments · 4 min read · EA link

Stanford Existential Risk Conference Feb. 26/27

kuhanj · 11 Feb 2022 0:56 UTC
28 points
0 comments · 1 min read · EA link

Risks from Asteroids

finm · 11 Feb 2022 21:01 UTC
44 points
9 comments · 5 min read · EA link
(www.finmoorhouse.com)

Important, actionable research questions for the most important century

Holden Karnofsky · 24 Feb 2022 16:34 UTC
280 points
15 comments · 19 min read · EA link

How I Formed My Own Views About AI Safety

Neel Nanda · 27 Feb 2022 18:52 UTC
129 points
12 comments · 13 min read · EA link
(www.neelnanda.io)

AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.

Greg_Colbourn · 1 Mar 2022 12:02 UTC
67 points
22 comments · 1 min read · EA link

.01% Fund—Ideation and Proposal

Linch · 1 Mar 2022 18:25 UTC
65 points
23 comments · 5 min read · EA link

[Question] Is transformative AI the biggest existential risk? Why or why not?

BrownHairedEevee · 5 Mar 2022 3:54 UTC
9 points
11 comments · 1 min read · EA link

[Question] What are the standard terms used to describe risks in risk management?

BrownHairedEevee · 5 Mar 2022 4:07 UTC
11 points
2 comments · 1 min read · EA link

On presenting the case for AI risk

Aryeh Englander · 8 Mar 2022 21:37 UTC
114 points
12 comments · 4 min read · EA link

How likely is World War III?

Stephen Clare · 15 Feb 2022 15:09 UTC
113 points
20 comments · 16 min read · EA link

[Cross-post] A nuclear war forecast is not a coin flip

David Johnston · 15 Mar 2022 4:01 UTC
28 points
12 comments · 3 min read · EA link

Mediocre AI safety as existential risk

Gavin · 16 Mar 2022 11:50 UTC
52 points
12 comments · 3 min read · EA link

Climate Change Overview: CERI Summer Research Fellowship

hb574 · 17 Mar 2022 11:04 UTC
33 points
0 comments · 4 min read · EA link

Hinges and crises

Jan_Kulveit · 17 Mar 2022 13:43 UTC
72 points
5 comments · 3 min read · EA link

Free to attend: Cambridge Conference on Catastrophic Risk (19-21 April)

HaydnBelfield · 21 Mar 2022 13:23 UTC
19 points
2 comments · 1 min read · EA link

What we tried

Jan_Kulveit · 21 Mar 2022 15:26 UTC
71 points
7 comments · 9 min read · EA link

8 possible high-level goals for work on nuclear risk

MichaelA · 29 Mar 2022 6:30 UTC
41 points
4 comments · 13 min read · EA link

13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

HaydnBelfield · 12 Apr 2022 11:47 UTC
79 points
14 comments · 4 min read · EA link

Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics)

Jhrosenberg · 25 Apr 2022 22:13 UTC
52 points
4 comments · 2 min read · EA link

My thoughts on nanotechnology strategy research as an EA cause area

Ben Snodin · 2 May 2022 9:41 UTC
131 points
17 comments · 33 min read · EA link

US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk

Jeffrey Ladish · 5 May 2022 23:04 UTC
51 points
20 comments · 5 min read · EA link

Geoengineering to reduce global catastrophic risk?

Niklas Lehmann · 29 May 2022 15:50 UTC
7 points
3 comments · 5 min read · EA link

“Tech company singularities”, and steering them to reduce x-risk

Andrew Critch · 13 May 2022 17:26 UTC
51 points
5 comments · 4 min read · EA link

[Question] Is it possible to have a high level of human heterogeneity and low chance of existential risks?

ekka · 24 May 2022 21:55 UTC
4 points
0 comments · 1 min read · EA link

Revisiting “Why Global Poverty”

Jeff Kaufman · 1 Jun 2022 20:20 UTC
66 points
0 comments · 3 min read · EA link
(www.jefftk.com)

Case study: Reducing catastrophic risk from inside the US bureaucracy

Tom_Green · 2 Jun 2022 4:07 UTC
41 points
2 comments · 11 min read · EA link

EA Research Around Mineral Resource Exhaustion

haywyer · 3 Jun 2022 0:59 UTC
1 point
0 comments · 1 min read · EA link

A disentanglement project for the nuclear security cause area

Sarah Weiler · 3 Jun 2022 5:29 UTC
14 points
0 comments · 6 min read · EA link

Human survival is a policy choice

Peter Wildeford · 3 Jun 2022 18:53 UTC
25 points
2 comments · 6 min read · EA link
(www.pasteurscube.com)

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
39 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

[Link] GCRI’s Seth Baum reviews The Precipice

Aryeh Englander · 6 Jun 2022 19:33 UTC
21 points
0 comments · 1 min read · EA link

Reading the ethicists 2: Hunting for AI alignment papers

Charlie Steiner · 6 Jun 2022 15:53 UTC
9 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

[Question] Modeling humanity’s robustness to GCRs?

rodeo_flagellum · 9 Jun 2022 17:20 UTC
7 points
1 comment · 2 min read · EA link

AI Could Defeat All Of Us Combined

Holden Karnofsky · 10 Jun 2022 23:25 UTC
141 points
11 comments · 14 min read · EA link

Launch of FERSTS Retreat

Theo K · 17 Jun 2022 11:53 UTC
26 points
0 comments · 2 min read · EA link

[Question] What are the best resources on comparing x-risk prevention to improving the value of the future in other ways?

LHA · 26 Jun 2022 3:22 UTC
8 points
3 comments · 1 min read · EA link

A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]

Sarah Weiler · 26 Jun 2022 10:59 UTC
56 points
2 comments · 16 min read · EA link

What success looks like

mariushobbhahn · 28 Jun 2022 14:30 UTC
105 points
20 comments · 19 min read · EA link

Kurzgesagt—The Last Human (Longtermist video)

Lizka · 28 Jun 2022 20:16 UTC
148 points
17 comments · 1 min read · EA link
(www.youtube.com)

Humanity’s vast future and its implications for cause prioritization

BrownHairedEevee · 26 Jul 2022 5:04 UTC
34 points
3 comments · 4 min read · EA link
(sunyshore.substack.com)

The most important climate change uncertainty

cwa · 26 Jul 2022 15:15 UTC
138 points
27 comments · 11 min read · EA link

Saving lives near the precipice: we’re doing it wrong?

Samin · 29 Jul 2022 15:08 UTC
17 points
10 comments · 3 min read · EA link

Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination

alexlintz · 3 Aug 2022 21:24 UTC
68 points
3 comments · 11 min read · EA link

Longtermists Should Work on AI—There is No “AI Neutral” Scenario

simeon_c · 7 Aug 2022 16:43 UTC
43 points
62 comments · 6 min read · EA link

Future Matters #4: AI timelines, AGI risk, and existential risk from climate change

Pablo · 8 Aug 2022 11:00 UTC
59 points
0 comments · 17 min read · EA link

War Between the US and China: A case study for epistemic challenges around China-related catastrophic risk

Jordan_Schneider · 12 Aug 2022 2:19 UTC
73 points
17 comments · 43 min read · EA link

Common-sense cases where “hypothetical future people” matter

levin · 12 Aug 2022 14:05 UTC
106 points
21 comments · 4 min read · EA link

Global Development → reduced ex-risk/long-termism. (Initial draft/question)

Arno · 13 Aug 2022 16:29 UTC
3 points
3 comments · 1 min read · EA link

Prioritizing x-risks may require caring about future people

elifland · 14 Aug 2022 0:55 UTC
174 points
37 comments · 6 min read · EA link
(www.foxy-scout.com)

“Holy Shit, X-risk” talk

michel · 15 Aug 2022 5:04 UTC
13 points
2 comments · 9 min read · EA link

Nature: Nuclear war between two nations could spark global famine

Tyner · 15 Aug 2022 20:55 UTC
15 points
1 comment · 1 min read · EA link
(www.nature.com)

The Parable of the Boy Who Cried 5% Chance of Wolf

Kat Woods · 15 Aug 2022 14:22 UTC
74 points
8 comments · 2 min read · EA link

[Question] How to find *reliable* ways to improve the future?

Sjlver · 18 Aug 2022 12:47 UTC
53 points
35 comments · 2 min read · EA link

“Existential Risk” is badly named and leads to narrow focus on astronomical waste

freedomandutility · 22 Aug 2022 20:25 UTC
38 points
2 comments · 2 min read · EA link

New Podcast: X-Risk Upskill

Anthony Fleming · 27 Aug 2022 21:19 UTC
12 points
4 comments · 1 min read · EA link

Rethinking longtermism and global development

BrownHairedEevee · 2 Sep 2022 5:28 UTC
10 points
2 comments · 7 min read · EA link
(sunyshore.substack.com)

Longterm cost-effectiveness of Founders Pledge’s Climate Change Fund

Vasco Grilo · 14 Sep 2022 15:11 UTC
35 points
7 comments · 6 min read · EA link

[Question] How can we secure more research positions at our universities for x-risk researchers?

Neil Crawford · 6 Sep 2022 14:41 UTC
3 points
2 comments · 1 min read · EA link

A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming

turchin · 11 Sep 2022 10:22 UTC
27 points
25 comments · 53 min read · EA link

Differential technology development: preprint on the concept

Hamish_Hobbs · 12 Sep 2022 13:52 UTC
61 points
0 comments · 2 min read · EA link

The Pugwash Conferences and the Anti-Ballistic Missile Treaty as a case study of Track II diplomacy

rani_martin · 16 Sep 2022 10:42 UTC
80 points
5 comments · 26 min read · EA link

Linkpost for various recent essays on suffering-focused ethics, priorities, and more

Magnus Vinding · 28 Sep 2022 8:58 UTC
82 points
0 comments · 5 min read · EA link
(centerforreducingsuffering.org)

Warning Shots Probably Wouldn’t Change The Picture Much

So8res · 6 Oct 2022 5:15 UTC
87 points
20 comments · 2 min read · EA link

The Precipice—Summary/Review

Nikola · 11 Oct 2022 0:06 UTC
8 points
0 comments · 5 min read · EA link

Notes on Apollo report on biodefense

Linch · 23 Jul 2022 23:49 UTC
66 points
1 comment · 12 min read · EA link
(biodefensecommission.org)

Lord Martin Rees: an appreciation

HaydnBelfield · 24 Oct 2022 16:11 UTC
172 points
18 comments · 5 min read · EA link

Intent alignment should not be the goal for AGI x-risk reduction

johnjnay · 26 Oct 2022 1:24 UTC
5 points
1 comment · 1 min read · EA link

New book on s-risks

Tobias_Baumann · 26 Oct 2022 12:04 UTC
280 points
25 comments · 1 min read · EA link

Announcing The Most Important Century Writing Prize

michel · 31 Oct 2022 21:37 UTC
45 points
0 comments · 2 min read · EA link

Fund biosecurity officers at universities

freedomandutility · 31 Oct 2022 11:49 UTC
13 points
3 comments · 1 min read · EA link

Longtermist terminology has biasing assumptions

Arepo · 30 Oct 2022 16:26 UTC
58 points
13 comments · 7 min read · EA link

A proposed hierarchy of longtermist concepts

Arepo · 30 Oct 2022 16:26 UTC
33 points
13 comments · 4 min read · EA link

AI X-Risk: Integrating on the Shoulders of Giants

TD_Pilditch · 1 Nov 2022 16:07 UTC
28 points
0 comments · 47 min read · EA link

How bad could a war get?

Stephen Clare · 4 Nov 2022 9:25 UTC
121 points
10 comments · 9 min read · EA link

Future people might not exist

Indra Gesink · 30 Nov 2022 19:17 UTC
16 points
0 comments · 4 min read · EA link

EA needs more humor

SWK · 1 Dec 2022 5:30 UTC
35 points
14 comments · 5 min read · EA link

Quotes about the long reflection

MichaelA · 5 Mar 2020 7:48 UTC
53 points
13 comments · 13 min read · EA link

[Question] What questions could COVID-19 provide evidence on that would help guide future EA decisions?

MichaelA · 27 Mar 2020 5:51 UTC
7 points
7 comments · 1 min read · EA link

Differential progress / intellectual progress / technological development

MichaelA · 24 Apr 2020 14:08 UTC
35 points
16 comments · 7 min read · EA link

Space governance is important, tractable and neglected

Tobias_Baumann · 7 Jan 2020 11:24 UTC
103 points
18 comments · 7 min read · EA link

How tractable is changing the course of history?

Jamie_Harris · 22 May 2019 15:29 UTC
41 points
2 comments · 7 min read · EA link
(www.sentienceinstitute.org)

Economist: “What’s the worst that could happen”. A positive, sharable but vague article on Existential Risk

Nathan Young · 8 Jul 2020 10:37 UTC
12 points
3 comments · 3 min read · EA link

[Question] Why altruism at all?

Singleton · 12 Jul 2020 22:04 UTC
−2 points
1 comment · 1 min read · EA link

[Question] A bill to massively expand NSF to tech domains. What’s the relevance for x-risk?

EdoArad · 12 Jul 2020 15:20 UTC
22 points
4 comments · 1 min read · EA link

Climate change donation recommendations

Sanjay · 16 Jul 2020 21:17 UTC
46 points
7 comments · 14 min read · EA link

[Question] Putting People First in a Culture of Dehumanization

jhealy · 22 Jul 2020 3:31 UTC
16 points
3 comments · 1 min read · EA link

[Question] Is nanotechnology (such as APM) important for EAs’ to work on?

pixel_brownie_software · 12 Mar 2020 15:36 UTC
6 points
9 comments · 1 min read · EA link

[Question] What do we do if AI doesn’t take over the world, but still causes a significant global problem?

James_Banks · 2 Aug 2020 3:35 UTC
16 points
5 comments · 1 min read · EA link

State Space of X-Risk Trajectories

David_Kristoffersson · 6 Feb 2020 13:37 UTC
24 points
6 comments · 3 min read · EA link

The Precipice: a risky review by a non-EA

fmoreno · 8 Aug 2020 14:40 UTC
13 points
0 comments · 18 min read · EA link

‘Existential Risk and Growth’ Deep Dive #3 - Extensions and Variations

Alex HT · 20 Dec 2020 12:39 UTC
5 points
0 comments · 12 min read · EA link

Urgency vs. Patience—a Toy Model

Alex HT · 19 Aug 2020 14:13 UTC
39 points
4 comments · 4 min read · EA link

[Question] Is existential risk more pressing than other ways to improve the long-term future?

BrownHairedEevee · 20 Aug 2020 3:50 UTC
23 points
1 comment · 1 min read · EA link

Online Conference Opportunity for EA Grad Students

jonathancourtney · 21 Aug 2020 17:31 UTC
8 points
1 comment · 1 min read · EA link

On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction

poliboni · 30 Mar 2020 20:11 UTC
33 points
1 comment · 1 min read · EA link

[Question] Are social media algorithms an existential risk?

BarryGrimes · 15 Sep 2020 8:52 UTC
24 points
13 comments · 1 min read · EA link

Is Technology Actually Making Things Better? – Pairagraph

BrownHairedEevee · 1 Oct 2020 16:06 UTC
16 points
1 comment · 1 min read · EA link
(www.pairagraph.com)

New 3-hour podcast with Anders Sandberg about Grand Futures

Gus Docker · 6 Oct 2020 10:47 UTC
21 points
1 comment · 1 min read · EA link

Leopold Aschenbrenner returns to X-risk and growth

nickwhitaker · 20 Oct 2020 23:24 UTC
24 points
3 comments · 1 min read · EA link

4 Years Later: President Trump and Global Catastrophic Risk

HaydnBelfield · 25 Oct 2020 16:28 UTC
23 points
9 comments · 10 min read · EA link

Why those who care about catastrophic and existential risk should care about autonomous weapons

aaguirre · 11 Nov 2020 17:27 UTC
101 points
31 comments · 15 min read · EA link

Plan of Action to Prevent Human Extinction Risks

turchin · 14 Mar 2016 14:51 UTC
11 points
3 comments · 7 min read · EA link

The Map of Shelters and Refuges from Global Risks (Plan B of X-risks Prevention)

turchin · 22 Oct 2016 10:22 UTC
16 points
9 comments · 7 min read · EA link

Improving long-run civilisational robustness

RyanCarey · 10 May 2016 11:14 UTC
9 points
6 comments · 3 min read · EA link

[Notes] Could climate change make Earth uninhabitable for humans?

Ben · 14 Jan 2020 22:13 UTC
39 points
7 comments · 14 min read · EA link

Pangea: The Worst of Times

John G. Halstead · 5 Apr 2020 15:13 UTC
88 points
7 comments · 8 min read · EA link

Climate change, geoengineering, and existential risk

John G. Halstead · 20 Mar 2018 10:48 UTC
20 points
11 comments · 1 min read · EA link

The Map of Impact Risks and Asteroid Defense

turchin · 3 Nov 2016 15:34 UTC
7 points
9 comments · 4 min read · EA link

[Paper] Surviving global risks through the preservation of humanity’s data on the Moon

turchin · 3 Mar 2018 18:39 UTC
11 points
6 comments · 1 min read · EA link

11 Recent Publications on Existential Risk (June 2020 update)

HaydnBelfield · 2 Jul 2020 13:09 UTC
14 points
0 comments · 7 min read · EA link
(www.cser.ac.uk)

W-Risk and the Technological Wavefront (Nell Watson)

Aaron Gertler · 11 Nov 2018 23:22 UTC
9 points
1 comment · 1 min read · EA link

Combination Existential Risks

ozymandias · 14 Jan 2019 19:29 UTC
26 points
5 comments · 2 min read · EA link
(thingofthings.wordpress.com)

[Question] Donating against Short Term AI risks

Jan-WillemvanPutten · 16 Nov 2020 12:23 UTC
6 points
10 comments · 1 min read · EA link

How Roodman’s GWP model translates to TAI timelines

kokotajlod · 16 Nov 2020 14:11 UTC
22 points
0 comments · 3 min read · EA link

Questions for Jaan Tallinn’s fireside chat in EAGxAPAC this weekend

BrianTan · 17 Nov 2020 2:12 UTC
13 points
8 comments · 1 min read · EA link

Questions for Nick Beckstead’s fireside chat in EAGxAPAC this weekend

BrianTan · 17 Nov 2020 15:05 UTC
12 points
15 comments · 3 min read · EA link

Announcing AI Safety Support

Linda Linsefors · 19 Nov 2020 20:19 UTC
54 points
0 comments · 4 min read · EA link

Long-Term Future Fund: Ask Us Anything!

AdamGleave · 3 Dec 2020 13:44 UTC
89 points
154 comments · 1 min read · EA link

[Question] Can we convince people to work on AI safety without convincing them about AGI happening this century?

BrianTan · 26 Nov 2020 14:46 UTC
8 points
3 comments · 2 min read · EA link

A toy model for technological existential risk

RobertHarling · 28 Nov 2020 11:55 UTC
10 points
3 comments · 4 min read · EA link

Centre for the Study of Existential Risk Four Month Report June—September 2020

HaydnBelfield · 2 Dec 2020 18:33 UTC
24 points
0 comments · 17 min read · EA link

[Question] Looking for collaborators after last 80k podcast with Tristan Harris

Jan-WillemvanPutten · 7 Dec 2020 22:23 UTC
19 points
7 comments · 2 min read · EA link

Good v. Optimal Futures

RobertHarling · 11 Dec 2020 16:38 UTC
32 points
10 comments · 6 min read · EA link

The Next Pandemic Could Be Worse, What Can We Do? (A Happier World video)

Jeroen_W · 21 Dec 2020 21:07 UTC
34 points
6 comments · 1 min read · EA link

Against GDP as a metric for timelines and takeoff speeds

kokotajlod · 29 Dec 2020 17:50 UTC
41 points
6 comments · 14 min read · EA link

[Crosspost] Relativistic Colonization

itaibn · 31 Dec 2020 2:30 UTC
7 points
7 comments · 4 min read · EA link

Legal Priorities Research: A Research Agenda

jonasschuett · 6 Jan 2021 21:47 UTC
58 points
4 comments · 1 min read · EA link

Noah Taylor: Developing a research agenda for bridging existential risk and peace and conflict studies

EA Global · 21 Jan 2021 16:19 UTC
20 points
0 comments · 20 min read · EA link
(www.youtube.com)

[Podcast] Simon Beard on Parfit, Climate Change, and Existential Risk

finm · 28 Jan 2021 19:47 UTC
11 points
0 comments · 1 min read · EA link
(hearthisidea.com)

13 Recent Publications on Existential Risk (Jan 2021 update)

HaydnBelfield · 8 Feb 2021 12:42 UTC
7 points
2 comments · 10 min read · EA link

Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake

Mahendra Prasad · 11 Feb 2021 7:43 UTC
16 points
0 comments · 1 min read · EA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · 21 Feb 2021 13:41 UTC
16 points
1 comment · 7 min read · EA link

Surveillance and free expression | Sunyshore

BrownHairedEevee · 23 Feb 2021 2:14 UTC
10 points
0 comments · 9 min read · EA link
(sunyshore.substack.com)

How to Survive the End of the Universe

avturchin · 28 Nov 2019 12:40 UTC
47 points
11 comments · 33 min read · EA link

A full syllabus on longtermism

jtm · 5 Mar 2021 22:57 UTC
109 points
13 comments · 8 min read · EA link

What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings?

somethoughts · 7 Mar 2021 8:02 UTC
0 points
11 comments · 3 min read · EA link

Response to Phil Torres’ ‘The Case Against Longtermism’

HaydnBelfield · 8 Mar 2021 18:09 UTC
130 points
77 comments · 5 min read · EA link

2018 AI Alignment Literature Review and Charity Comparison

Larks · 18 Dec 2018 4:48 UTC
118 points
28 comments · 64 min read · EA link

2017 AI Safety Literature Review and Charity Comparison

Larks · 20 Dec 2017 21:54 UTC
43 points
17 comments · 23 min read · EA link

2016 AI Risk Literature Review and Charity Comparison

Larks · 13 Dec 2016 4:36 UTC
57 points
22 comments · 28 min read · EA link

Critique of Superintelligence Part 1

Fods12 · 13 Dec 2018 5:10 UTC
22 points
13 comments · 8 min read · EA link

Critique of Superintelligence Part 2

Fods12 · 13 Dec 2018 5:12 UTC
9 points
12 comments · 7 min read · EA link

New popular science book on x-risks: “End Times”

Hauke Hillebrandt · 1 Oct 2019 7:18 UTC
17 points
2 comments · 2 min read · EA link

[Podcast] Thomas Moynihan on the History of Existential Risk

finm · 22 Mar 2021 11:07 UTC
26 points
2 comments · 1 min read · EA link
(hearthisidea.com)

Apply to the Stanford Existential Risks Conference! (April 17-18)

kuhanj · 26 Mar 2021 18:28 UTC
26 points
2 comments · 1 min read · EA link

How to PhD

eca · 28 Mar 2021 19:56 UTC
104 points
28 comments · 11 min read · EA link

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
40 points
3 comments · 1 min read · EA link
(s-risks.org)

[Question] What is EA opinion on The Bulletin of the Atomic Scientists?

VPetukhov · 2 Dec 2019 5:45 UTC
36 points
9 comments · 1 min read · EA link

[Link] New Founders Pledge report on existential risk

John G. Halstead · 28 Mar 2019 11:46 UTC
40 points
1 comment · 1 min read · EA link

Five GCR grants from the Global Challenges Foundation

Aaron Gertler · 16 Jan 2020 0:46 UTC
34 points
1 comment · 5 min read · EA link

Option Value, an Introductory Guide

Caleb_Maresca · 21 Feb 2020 14:45 UTC
30 points
3 comments · 7 min read · EA link

Current Estimates for Likelihood of X-Risk?

rhys_lindmark · 6 Aug 2018 18:05 UTC
24 points
23 comments · 1 min read · EA link

X-risks of SETI and METI?

Geoffrey Miller · 2 Jul 2019 22:41 UTC
18 points
11 comments · 1 min read · EA link

[Link] Thiel on GCRs

Milan_Griffes · 22 Jul 2019 20:47 UTC
28 points
11 comments · 1 min read · EA link

[Question] How worried should I be about a childless Disneyland?

Will Bradshaw · 28 Oct 2019 15:32 UTC
24 points
8 comments · 1 min read · EA link

Beyond Astro­nom­i­cal Waste

Wei_Dai27 Dec 2018 9:27 UTC
23 points
2 comments1 min readEA link
(www.lesswrong.com)

5 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (April 2020 up­date)

HaydnBelfield29 Apr 2020 9:37 UTC
23 points
1 comment4 min readEA link

Toby Ord: Fireside chat (2018)

EA Global1 Mar 2019 15:48 UTC
19 points
0 comments29 min readEA link
(www.youtube.com)

In­ter­na­tional co­op­er­a­tion as a tool to re­duce two ex­is­ten­tial risks.

johl@umich.edu19 Apr 2021 16:51 UTC
27 points
4 comments23 min readEA link

Jaime Yas­sif: Re­duc­ing global catas­trophic biolog­i­cal risks

EA Global25 Oct 2020 5:48 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

Toby Ord at EA Global: Reconnect

EA Global20 Mar 2021 7:00 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

[Question] What would “do­ing enough” to safe­guard the long-term fu­ture look like?

HStencil22 Apr 2020 21:47 UTC
20 points
0 comments1 min readEA link

[Question] Is there any­thing like “green bonds” for x-risk miti­ga­tion?

Ramiro30 Jun 2020 0:33 UTC
21 points
1 comment1 min readEA link

Alien colonization of Earth’s impact on the relative importance of reducing different existential risks

Evira5 Sep 2019 0:27 UTC
7 points
8 comments1 min readEA link

Niel Bow­er­man: Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

EA Global17 Jan 2020 1:07 UTC
7 points
2 comments16 min readEA link
(www.youtube.com)

En­light­ened Con­cerns of Tomorrow

cassidynelson15 Mar 2018 5:29 UTC
15 points
8 comments4 min readEA link

Emily Grundy: Aus­trali­ans’ per­cep­tions of global catas­trophic risks

EA Global21 Nov 2020 8:12 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Luisa Ro­driguez: How to do em­piri­cal cause pri­ori­ti­za­tion re­search

EA Global21 Nov 2020 8:12 UTC
6 points
0 comments1 min readEA link
(www.youtube.com)

Lec­ture Videos from Cam­bridge Con­fer­ence on Catas­trophic Risk

HaydnBelfield23 Apr 2019 16:03 UTC
15 points
3 comments1 min readEA link

Public Opinion about Ex­is­ten­tial Risk

cscanlon25 Aug 2018 12:34 UTC
13 points
9 comments8 min readEA link

Policy and re­search ideas to re­duce ex­is­ten­tial risk

80000_Hours27 Apr 2020 8:46 UTC
2 points
0 comments4 min readEA link
(80000hours.org)

The case for re­duc­ing ex­is­ten­tial risk

Benjamin_Todd1 Oct 2017 8:44 UTC
9 points
1 comment1 min readEA link
(80000hours.org)

My Cause Selec­tion: Dave Denkenberger

Denkenberger16 Aug 2015 15:06 UTC
6 points
7 comments3 min readEA link

Kris­tian Rönn: Global challenges

EA Global11 Aug 2017 8:19 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

Is­lands as re­fuges for sur­viv­ing global catastrophes

turchin13 Sep 2018 13:33 UTC
3 points
10 comments2 min readEA link

Ex­am­ple syl­labus “Ex­is­ten­tial Risks”

simonfriederich3 Jul 2021 9:23 UTC
14 points
2 comments10 min readEA link

Causal Net­work Model III: Findings

Alex_Barry22 Nov 2017 15:43 UTC
7 points
4 comments9 min readEA link

An In­for­mal Re­view of Space Exploration

kbog31 Jan 2020 13:16 UTC
53 points
7 comments36 min readEA link

The NPT: Learn­ing from a Longter­mist Suc­cess [Links!]

DannyBressler20 May 2021 0:39 UTC
66 points
6 comments2 min readEA link

[Feed­back Re­quest] Hyper­text Fic­tion Piece on Ex­is­ten­tial Hope

Miranda_Zhang30 May 2021 15:44 UTC
35 points
2 comments1 min readEA link

High Im­pact Ca­reers in For­mal Ver­ifi­ca­tion: Ar­tifi­cial Intelligence

quinn5 Jun 2021 14:45 UTC
25 points
6 comments16 min readEA link

[Past Event] US Policy Ca­reers Speaker Series—Sum­mer 2021

Mauricio18 Jun 2021 20:01 UTC
95 points
0 comments2 min readEA link

Ex­plor­ing Ex­is­ten­tial Risk—us­ing Con­nected Papers to find Effec­tive Altru­ism al­igned ar­ti­cles and researchers

Maris Sala23 Jun 2021 17:03 UTC
52 points
5 comments6 min readEA link

Nu­clear Strat­egy in a Semi-Vuln­er­a­ble World

Jackson Wagner28 Jun 2021 17:35 UTC
27 points
0 comments18 min readEA link

Robert Wright on us­ing cog­ni­tive em­pa­thy to save the world

80000_Hours27 May 2021 15:38 UTC
7 points
0 comments70 min readEA link

Mauhn Re­leases AI Safety Documentation

Berg Severens2 Jul 2021 12:19 UTC
4 points
2 comments1 min readEA link

[Question] Peo­ple work­ing on x-risks: what emo­tion­ally mo­ti­vates you?

Vael Gates5 Jul 2021 3:16 UTC
16 points
8 comments1 min readEA link

Po­ten­tial Risks from Ad­vanced Ar­tifi­cial In­tel­li­gence: The Philan­thropic Opportunity

Holden Karnofsky6 May 2016 12:55 UTC
2 points
0 comments23 min readEA link
(www.openphilanthropy.org)

Tay­lor Swift’s “long story short” Is Ac­tu­ally About Effec­tive Altru­ism and Longter­mism (PARODY)

shepardspie23 Jul 2021 13:25 UTC
34 points
13 comments7 min readEA link

How can economists best con­tribute to pan­demic pre­ven­tion and pre­pared­ness?

Rémi T22 Aug 2021 20:49 UTC
59 points
3 comments23 min readEA link

Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing: Which In­sti­tu­tions? (A Frame­work)

IanDavidMoss23 Aug 2021 2:26 UTC
73 points
6 comments34 min readEA link

Eco­nomic in­equal­ity and the long-term future

Global Priorities Institute30 Apr 2021 13:26 UTC
11 points
0 comments4 min readEA link
(globalprioritiesinstitute.org)

Do not go gen­tle: why the Asym­me­try does not sup­port anti-natalism

Global Priorities Institute30 Apr 2021 13:26 UTC
4 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

Ex­is­ten­tial risk from a Thomist Chris­tian perspective

Global Priorities Institute31 Dec 2020 14:27 UTC
3 points
0 comments4 min readEA link
(globalprioritiesinstitute.org)

Tough enough? Ro­bust satis­fic­ing as a de­ci­sion norm for long-term policy analysis

Global Priorities Institute31 Oct 2020 13:28 UTC
3 points
0 comments3 min readEA link
(globalprioritiesinstitute.org)

Pan­demic pre­ven­tion in Ger­man par­ties’ fed­eral elec­tion platforms

tilboy19 Sep 2021 7:40 UTC
17 points
2 comments5 min readEA link

Fa­nat­i­cism in AI: SERI Project

Jake Arft-Guatelli24 Sep 2021 4:39 UTC
7 points
2 comments5 min readEA link

Seek­ing so­cial sci­ence stu­dents /​ col­lab­o­ra­tors in­ter­ested in AI ex­is­ten­tial risks

Vael Gates24 Sep 2021 21:56 UTC
58 points
7 comments3 min readEA link

[Link post] Will we see fast AI Take­off?

SammyDMartin30 Sep 2021 14:03 UTC
18 points
0 comments1 min readEA link

Les­sons from Run­ning Stan­ford EA and SERI

kuhanj20 Aug 2021 14:51 UTC
247 points
26 comments23 min readEA link

Nu­clear Es­pi­onage and AI Governance

GAA4 Oct 2021 18:21 UTC
32 points
3 comments24 min readEA link

Carl Shul­man on the com­mon-sense case for ex­is­ten­tial risk work and its prac­ti­cal implications

80000_Hours8 Oct 2021 13:43 UTC
41 points
2 comments150 min readEA link

AI Risk in Africa

Claude Formanek12 Oct 2021 2:28 UTC
16 points
0 comments10 min readEA link

[Creative Writ­ing Con­test] The Puppy Problem

Louis13 Oct 2021 14:01 UTC
13 points
0 comments7 min readEA link

De­com­pos­ing Biolog­i­cal Risks: Harm, Po­ten­tial, and Strategies

simeon_c14 Oct 2021 7:09 UTC
26 points
3 comments8 min readEA link

[Creative Writ­ing Con­test] [Fic­tion] The Long Way Round

Toby Newberry14 Oct 2021 11:16 UTC
4 points
0 comments5 min readEA link

New Work­ing Paper Series of the Le­gal Pri­ori­ties Project

Legal Priorities Project18 Oct 2021 10:30 UTC
60 points
0 comments9 min readEA link

X-Risk, An­throp­ics, & Peter Thiel’s In­vest­ment Thesis

Jackson Wagner26 Oct 2021 18:38 UTC
42 points
1 comment20 min readEA link

[Creative Non­fic­tion] The Toba Su­per­vol­canic Eruption

Jackson Wagner29 Oct 2021 17:02 UTC
48 points
3 comments6 min readEA link

[Creative writ­ing con­test] The sor­cerer in chains

Swimmer30 Oct 2021 1:23 UTC
16 points
0 comments32 min readEA link

[Creative Writ­ing Con­test] [Fic­tion] The Rea­son Why

b_sen30 Oct 2021 2:37 UTC
2 points
0 comments5 min readEA link
(archiveofourown.org)

Ap­ply to be a Stan­ford HAI Ju­nior Fel­low (As­sis­tant Pro­fes­sor- Re­search) by Nov. 15, 2021

Vael Gates31 Oct 2021 2:21 UTC
15 points
0 comments1 min readEA link

Be a Stoic and build bet­ter democ­ra­cies: an Aussie-as take on x-risks (re­view es­say)

Matt Boyd21 Nov 2021 4:30 UTC
29 points
3 comments11 min readEA link

Pod­cast: Mag­nus Vind­ing on re­duc­ing suffer­ing, why AI progress is likely to be grad­ual and dis­tributed and how to rea­son about poli­tics

Gus Docker21 Nov 2021 15:29 UTC
26 points
0 comments1 min readEA link
(www.utilitarianpodcast.com)

[Question] How many EA 2021 $s would you trade off against a 0.01% chance of ex­is­ten­tial catas­tro­phe?

Linch27 Nov 2021 23:46 UTC
51 points
92 comments1 min readEA link

Strate­gic Risks and Un­likely Benefits

Anthony Repetto4 Dec 2021 6:01 UTC
1 point
0 comments4 min readEA link

What role should evolu­tion­ary analo­gies play in un­der­stand­ing AI take­off speeds?

anson11 Dec 2021 1:16 UTC
12 points
0 comments42 min readEA link

Nines of safety: Ter­ence Tao’s pro­posed unit of mea­sure­ment of risk

anson12 Dec 2021 18:01 UTC
41 points
24 comments4 min readEA link

Ap­ply for Stan­ford Ex­is­ten­tial Risks Ini­ti­a­tive (SERI) Postdoc

Vael Gates14 Dec 2021 21:50 UTC
28 points
2 comments1 min readEA link

An EA case for in­ter­est in UAPs/​UFOs and an idea as to what they are

TheNotSoGreatFilter30 Dec 2021 17:13 UTC
30 points
13 comments5 min readEA link

PIBBSS Fel­low­ship: Bounty for Refer­rals & Dead­line Extension

Anna_Gajdova17 Jan 2022 16:23 UTC
17 points
7 comments1 min readEA link

My In­ter­view with The AI That Can Do It All

AndreFerretti31 Jan 2022 15:37 UTC
18 points
7 comments17 min readEA link

Neil Sin­hab­abu on metaethics and world gov­ern­ment for re­duc­ing ex­is­ten­tial risk

Gus Docker2 Feb 2022 20:23 UTC
7 points
0 comments85 min readEA link
(www.utilitarianpodcast.com)

Im­pact Op­por­tu­nity: In­fluence UK Biolog­i­cal Se­cu­rity Strategy

Jonathan Nankivell17 Feb 2022 20:36 UTC
49 points
0 comments3 min readEA link

Pod­cast: Bryan Ca­plan on open bor­ders, UBI, to­tal­i­tar­i­anism, AI, pan­demics, util­i­tar­i­anism and la­bor economics

Gus Docker22 Feb 2022 15:04 UTC
22 points
0 comments46 min readEA link
(www.utilitarianpodcast.com)

[Question] AI Eth­i­cal Committee

eaaicommittee1 Mar 2022 23:35 UTC
8 points
0 comments1 min readEA link

In­ter­view sub­jects for im­pact liti­ga­tion pro­ject (biose­cu­rity & pan­demic pre­pared­ness)

Legal Priorities Project3 Mar 2022 14:20 UTC
20 points
0 comments1 min readEA link

Best Coun­tries dur­ing Nu­clear War

AndreFerretti4 Mar 2022 11:19 UTC
7 points
15 comments1 min readEA link

Short­en­ing & en­light­en­ing dark ages as a sub-area of catas­trophic risk reduction

Jpmos5 Mar 2022 7:43 UTC
25 points
7 comments5 min readEA link

In­tro­duc­tory video on safe­guard­ing the long-term future

JulianHazell7 Mar 2022 12:52 UTC
23 points
3 comments1 min readEA link

An­nounc­ing the CERI Sum­mer Re­search Fellowship

Dewi Erwan7 Mar 2022 19:07 UTC
75 points
0 comments2 min readEA link

I want Fu­ture Perfect, but for sci­ence publications

James Lin8 Mar 2022 17:09 UTC
66 points
8 comments5 min readEA link

[Link] Sean Car­roll in­ter­views Aus­tralian poli­ti­cian An­drew Leigh on ex­is­ten­tial risks

Aryeh Englander8 Mar 2022 1:29 UTC
15 points
1 comment1 min readEA link

The most good sys­tem vi­sual and sta­bi­liza­tion steps

brb24314 Mar 2022 23:54 UTC
3 points
0 comments1 min readEA link

My cur­rent thoughts on the risks from SETI

Matthew_Barnett15 Mar 2022 17:17 UTC
47 points
9 comments10 min readEA link

Nu­clear Risk Overview: CERI Sum­mer Re­search Fellowship

Will Aldred27 Mar 2022 15:51 UTC
57 points
3 comments13 min readEA link

AI Safety Overview: CERI Sum­mer Re­search Fellowship

Jamie Bernardi24 Mar 2022 15:12 UTC
29 points
0 comments2 min readEA link

Mis­cel­la­neous & Meta X-Risk Overview: CERI Sum­mer Re­search Fellowship

Will Aldred30 Mar 2022 2:45 UTC
39 points
0 comments3 min readEA link

Cause pro­file: Cog­ni­tive En­hance­ment Re­search

George Altman27 Mar 2022 13:43 UTC
60 points
4 comments22 min readEA link

Nu­clear Ex­pert Com­ment on Samotsvety Nu­clear Risk Forecast

Jhrosenberg26 Mar 2022 9:22 UTC
127 points
13 comments16 min readEA link

[Question] Re­quest for As­sis­tance—Re­search on Sce­nario Devel­op­ment for Ad­vanced AI Risk

Kiliank30 Mar 2022 3:01 UTC
2 points
1 comment1 min readEA link

Com­mu­nity Build­ing for Grad­u­ate Stu­dents: A Tar­geted Approach

Neil Crawford29 Mar 2022 19:47 UTC
13 points
0 comments3 min readEA link

[Question] Is AI safety still ne­glected?

Coafos30 Mar 2022 9:09 UTC
12 points
14 comments1 min readEA link

An­nounc­ing the Fu­ture Fund

Nick_Beckstead28 Feb 2022 17:26 UTC
372 points
192 comments4 min readEA link
(ftxfuturefund.org)

Why should we care about ex­is­ten­tial risk?

RedStateBlueState8 Apr 2022 23:43 UTC
21 points
7 comments4 min readEA link

Reflect on Your Ca­reer Ap­ti­tudes (Ex­er­cise)

Akash10 Apr 2022 2:40 UTC
15 points
1 comment2 min readEA link

Cu­rated con­ver­sa­tions with brilli­ant effec­tive altruists

spencerg11 Apr 2022 15:32 UTC
28 points
0 comments22 min readEA link

A primer & some re­flec­tions on re­cent CSER work (EAB talk)

MMMaas12 Apr 2022 12:56 UTC
68 points
4 comments10 min readEA link

[Question] Please Share Your Per­spec­tives on the De­gree of So­cietal Im­pact from Trans­for­ma­tive AI Outcomes

Kiliank15 Apr 2022 1:23 UTC
3 points
3 comments1 min readEA link

[Question] What “pivotal” and use­ful re­search … would you like to see as­sessed? (Bounty for sug­ges­tions)

david_reinstein28 Apr 2022 15:49 UTC
37 points
21 comments7 min readEA link

Beg­ging, Plead­ing AI Orgs to Com­ment on NIST AI Risk Man­age­ment Framework

Bridges15 Apr 2022 19:35 UTC
87 points
4 comments2 min readEA link

Help with the Fo­rum; wiki edit­ing, giv­ing feed­back, mod­er­a­tion, and more

Lizka20 Apr 2022 12:58 UTC
89 points
6 comments3 min readEA link

How to or­ganise ‘the one per­cent’ to fix cli­mate change

One Percent Organiser16 Apr 2022 17:18 UTC
2 points
2 comments9 min readEA link

Which Post Idea Is Most Effec­tive?

Jordan Arel25 Apr 2022 4:47 UTC
26 points
6 comments2 min readEA link

Eco­nomic Pie Re­search as a Cause Area

mediche15 Apr 2022 10:41 UTC
4 points
3 comments1 min readEA link

How to en­gage with AI 4 So­cial Jus­tice ac­tors

TomWestgarth26 Apr 2022 8:39 UTC
13 points
6 comments1 min readEA link

On Pos­i­tivity given X-risks

YusefMosiahNathanson28 Apr 2022 9:02 UTC
1 point
0 comments4 min readEA link

Should longter­mists fo­cus more on cli­mate re­silience?

Richard Ren3 May 2022 16:51 UTC
46 points
15 comments20 min readEA link

Ver­ti­cal farm­ing to lessen our re­li­ance on the Sun

Ty5 May 2022 5:57 UTC
12 points
3 comments2 min readEA link

Com­piling re­sources com­par­ing AI mi­suse, mis­al­ign­ment, and in­com­pe­tence risk and tractability

Peter44445 May 2022 16:16 UTC
3 points
3 comments1 min readEA link

Tran­scripts of in­ter­views with AI researchers

Vael Gates9 May 2022 6:03 UTC
134 points
13 comments2 min readEA link

AI Alter­na­tive Fu­tures: Ex­plo­ra­tory Sce­nario Map­ping for Ar­tifi­cial In­tel­li­gence Risk—Re­quest for Par­ti­ci­pa­tion [Linkpost]

Kiliank9 May 2022 19:53 UTC
17 points
2 comments8 min readEA link

Rab­bits, robots and resurrection

Patrick Wilson10 May 2022 15:00 UTC
9 points
0 comments15 min readEA link

Fo­cus of the IPCC Assess­ment Re­ports Has Shifted to Lower Temperatures

FJehn12 May 2022 12:15 UTC
10 points
16 comments8 min readEA link

Fermi es­ti­ma­tion of the im­pact you might have work­ing on AI safety

frib13 May 2022 13:30 UTC
24 points
13 comments1 min readEA link

[Link post] Promis­ing Paths to Align­ment—Con­nor Leahy | Talk

frances_lorenz14 May 2022 15:58 UTC
16 points
0 comments1 min readEA link

[Question] What are ex­am­ples where ex­treme risk poli­cies have been suc­cess­fully im­ple­mented?

Joris P16 May 2022 15:37 UTC
31 points
14 comments1 min readEA link

Deep­Mind’s gen­er­al­ist AI, Gato: A non-tech­ni­cal explainer

frances_lorenz16 May 2022 21:19 UTC
127 points
13 comments6 min readEA link

U.S. EAs Should Con­sider Ap­ply­ing to Join U.S. Diplomacy

abiolvera17 May 2022 17:14 UTC
114 points
20 comments9 min readEA link

Don’t Be Com­forted by Failed Apocalypses

ColdButtonIssues17 May 2022 11:20 UTC
20 points
13 comments1 min readEA link

BERI is seek­ing new col­lab­o­ra­tors (2022)

sawyer17 May 2022 17:31 UTC
21 points
0 comments1 min readEA link

My first effec­tive al­tru­ism con­fer­ence: 10 learn­ings, my 121s and next steps

Milan.Patel21 May 2022 8:51 UTC
9 points
3 comments4 min readEA link

The case to abol­ish the biol­ogy of suffer­ing as a longter­mist action

Gaetan_Selle21 May 2022 8:51 UTC
33 points
8 comments4 min readEA link

Ar­gu­ments for Why Prevent­ing Hu­man Ex­tinc­tion is Wrong

Anthony Fleming21 May 2022 7:17 UTC
34 points
48 comments3 min readEA link

GCRI Open Call for Ad­visees and Col­lab­o­ra­tors 2022

McKenna_Fitzgerald23 May 2022 21:41 UTC
4 points
3 comments1 min readEA link

Build­ing a Bet­ter Dooms­day Clock

christian.r25 May 2022 17:02 UTC
23 points
2 comments1 min readEA link
(www.lawfareblog.com)

Fo­cus on Civ­i­liza­tional Re­silience over Cause Areas

timfarkas26 May 2022 17:37 UTC
15 points
6 comments2 min readEA link

EA, Psy­chol­ogy & AI Safety Research

Sam Ellis26 May 2022 23:46 UTC
25 points
3 comments7 min readEA link

An­nounc­ing the Le­gal Pri­ori­ties Pro­ject Writ­ing Com­pe­ti­tion: Im­prov­ing Cost-Benefit Anal­y­sis to Ac­count for Ex­is­ten­tial and Catas­trophic Risks

Mackenzie7 Jun 2022 9:37 UTC
104 points
8 comments9 min readEA link

How to dis­solve moral clue­less­ness about donat­ing mosquito nets

ben.smith8 Jun 2022 7:12 UTC
25 points
8 comments12 min readEA link

Things usu­ally end slowly

OllieBase7 Jun 2022 17:00 UTC
76 points
14 comments7 min readEA link

‘EA Ar­chi­tect’: Up­dates on Civ­i­liza­tional Shelters & Ca­reer Options

Tereza_Flidrova8 Jun 2022 13:45 UTC
67 points
6 comments7 min readEA link

Vael Gates: Risks from Ad­vanced AI (June 2022)

Vael Gates14 Jun 2022 0:49 UTC
45 points
5 comments30 min readEA link

Weekly EA Global Com­mu­nity Meet and Greet.

Brainy10 Jun 2022 11:10 UTC
1 point
0 comments1 min readEA link

Fønix: Bioweapons shelter pro­ject launch

Ulrik Horn14 Jun 2022 3:44 UTC
73 points
18 comments8 min readEA link

The im­por­tance of get­ting digi­tal con­scious­ness right

Derek Shiller13 Jun 2022 10:41 UTC
62 points
13 comments6 min readEA link

Re­sources I send to AI re­searchers about AI safety

Vael Gates14 Jun 2022 2:23 UTC
60 points
1 comment10 min readEA link

Steer­ing AI to care for an­i­mals, and soon

Andrew Critch14 Jun 2022 1:13 UTC
205 points
38 comments1 min readEA link

Ex­pected eth­i­cal value of a ca­reer in AI safety

Jordan Taylor14 Jun 2022 14:25 UTC
35 points
16 comments13 min readEA link

[Question] What are EA’s biggest leg­ible achieve­ments in x-risk?

acylhalide14 Jun 2022 18:37 UTC
33 points
17 comments1 min readEA link

FYI: I’m work­ing on a book about the threat of AGI/​ASI for a gen­eral au­di­ence. I hope it will be of value to the cause and the community

Darren McKee17 Jun 2022 11:52 UTC
26 points
1 comment2 min readEA link

Con­cerns with Differ­ence-Mak­ing Risk Aversion

Charlotte17 Jun 2022 13:59 UTC
41 points
1 comment6 min readEA link

‘Force mul­ti­pli­ers’ for EA research

Craig Drayton18 Jun 2022 13:39 UTC
18 points
7 comments4 min readEA link

My notes on: A Very Ra­tional End of the World | Thomas Moynihan

Vasco Grilo20 Jun 2022 8:50 UTC
13 points
1 comment5 min readEA link

Mili­tary Ar­tifi­cial In­tel­li­gence as Con­trib­u­tor to Global Catas­trophic Risk

MMMaas27 Jun 2022 10:35 UTC
40 points
0 comments54 min readEA link

[Long ver­sion] Case study: re­duc­ing catas­trophic risk from in­side the US bureaucracy

Tom_Green27 Jun 2022 19:20 UTC
44 points
0 comments43 min readEA link

Cos­mic’s Mug­ger : Should we re­ally de­lay cos­mic ex­pan­sion ?

Lysandre Terrisse30 Jun 2022 6:41 UTC
4 points
0 comments4 min readEA link

Com­po­nents of Strate­gic Clar­ity [Strate­gic Per­spec­tives on Long-term AI Gover­nance, #2]

MMMaas2 Jul 2022 11:22 UTC
58 points
0 comments5 min readEA link

There are no peo­ple to be effec­tively al­tru­is­tic for on a dead planet: EA fund­ing of pro­jects with­out con­duct­ing En­vi­ron­men­tal Im­pact Assess­ments (EIAs), Health and Safety Assess­ments (HSAs) and Life Cy­cle Assess­ments (LCAs) = catastrophe

Deborah W.A. Foulkes26 May 2022 23:46 UTC
6 points
20 comments8 min readEA link

Open Cli­mate Data as a pos­si­ble cause area, Open Philanthropy

Ben Yeoh3 Jul 2022 12:47 UTC
3 points
0 comments11 min readEA link

My Most Likely Rea­son to Die Young is AI X-Risk

AISafetyIsNotLongtermist4 Jul 2022 15:34 UTC
226 points
62 comments4 min readEA link
(www.lesswrong.com)

The es­tab­lished nuke risk field de­serves more engagement

Ilverin4 Jul 2022 19:39 UTC
17 points
12 comments1 min readEA link

In­tro­duc­ing the Fund for Align­ment Re­search (We’re Hiring!)

AdamGleave6 Jul 2022 2:00 UTC
74 points
3 comments4 min readEA link

An­nounc­ing Fu­ture Fo­rum—Ap­ply Now

isaakfreeman6 Jul 2022 17:35 UTC
92 points
11 comments4 min readEA link

Well-stud­ied Ex­is­ten­tial Risks with Pre­dic­tive Indicators

Noah Scales6 Jul 2022 22:13 UTC
4 points
0 comments3 min readEA link

[Question] Is there any re­search on in­ter­nal­iz­ing x-risks or global catas­trophic risks into economies?

Ramiro6 Jul 2022 17:08 UTC
19 points
3 comments1 min readEA link

[Book] On Assess­ing the Risk of Nu­clear War

Aryeh Englander7 Jul 2022 21:08 UTC
25 points
2 comments8 min readEA link

Re­silience Via Frag­mented Power

steve632014 Jul 2022 15:37 UTC
2 points
0 comments6 min readEA link

[7] Win­ter-Safe Deter­rence as a Prac­ti­cal Con­tri­bu­tion to Re­duc­ing Nu­clear Win­ter Risk: A Re­ply (Baum, 2015)

Will Aldred5 Jul 2022 17:34 UTC
28 points
0 comments2 min readEA link
(gcrinstitute.org)

Cli­mate change is Now Self-amplifying

Noah Scales11 Jul 2022 10:48 UTC
−3 points
2 comments3 min readEA link

The Threat of Cli­mate Change Is Exaggerated

Samrin Saleem11 Jul 2022 15:44 UTC
7 points
15 comments14 min readEA link

[4] Cli­matic con­se­quences of re­gional nu­clear con­flicts (Robock et al., 2007)

Will Aldred15 Jul 2022 10:04 UTC
32 points
0 comments2 min readEA link
(climate.envsci.rutgers.edu)

[Question] Ex­is­ten­tial Biorisk vs. GCBR

Will Aldred15 Jul 2022 21:16 UTC
37 points
2 comments1 min readEA link

Re­sults of a Span­ish-speak­ing es­say con­test about Global Catas­trophic Risk

Jaime Sevilla15 Jul 2022 16:53 UTC
86 points
7 comments6 min readEA link

Why EAs are skep­ti­cal about AI Safety

Lukas Trötzmüller18 Jul 2022 19:01 UTC
277 points
31 comments30 min readEA link

BERI is hiring a Deputy Director

sawyer18 Jul 2022 22:12 UTC
6 points
0 comments1 min readEA link

Why EA needs Oper­a­tions Re­search: the sci­ence of de­ci­sion making

wesg21 Jul 2022 0:47 UTC
68 points
20 comments14 min readEA link

EA is be­com­ing in­creas­ingly in­ac­cessible, at the worst pos­si­ble time

Ann Garth22 Jul 2022 15:40 UTC
69 points
13 comments14 min readEA link

The Charle­magne Effect: The Longter­mist Case For Neartermism

Reed Shafer-Ray25 Jul 2022 8:12 UTC
15 points
7 comments21 min readEA link

[Question] Odds of re­cov­er­ing val­ues af­ter col­lapse?

Will Aldred24 Jul 2022 18:20 UTC
63 points
13 comments3 min readEA link

Low-key Longtermism

Jonathan Rystrom25 Jul 2022 13:39 UTC
26 points
6 comments8 min readEA link

[Question] Slow­ing down AI progress?

Eleni_A26 Jul 2022 8:46 UTC
14 points
9 comments1 min readEA link

More to ex­plore on ‘Our Fi­nal Cen­tury’

EA Handbook15 Jul 2022 23:00 UTC
1 point
0 comments2 min readEA link

[Question] How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles?

Jordan Arel27 Jul 2022 16:58 UTC
29 points
6 comments1 min readEA link

Align­ment is hard. Com­mu­ni­cat­ing that, might be harder

Eleni_A1 Sep 2022 11:45 UTC
17 points
1 comment3 min readEA link

In­vite: UnCon­fer­ence, How best for hu­mans to thrive and sur­vive over the long-term

Ben Yeoh27 Jul 2022 22:19 UTC
10 points
2 comments2 min readEA link

(p-)Zom­bie Uni­verse: an­other X-risk

tobytrem28 Jul 2022 21:34 UTC
19 points
5 comments4 min readEA link

The first AGI will be a buggy mess

titotal30 Jul 2022 13:53 UTC
47 points
20 comments9 min readEA link

The dan­ger of nu­clear war is greater than it has ever been. Why donat­ing to and sup­port­ing Back from the Brink is an effec­tive re­sponse to this threat

astupple2 Aug 2022 2:31 UTC
14 points
8 comments5 min readEA link

Longter­mism, risk, and extinction

Richard Pettigrew4 Aug 2022 15:25 UTC
55 points
12 comments41 min readEA link

On the Risk of an Ac­ci­den­tal or Unau­tho­rized Nu­clear De­to­na­tion (Iklé, Aron­son, Madan­sky, 1958)

nathan980004 Aug 2022 13:19 UTC
4 points
0 comments1 min readEA link
(www.rand.org)

[Question] Does China have AI al­ign­ment re­sources/​in­sti­tu­tions? How can we pri­ori­tize cre­at­ing more?

jskatt4 Aug 2022 19:23 UTC
17 points
9 comments1 min readEA link

Where are the red lines for AI?

Karl von Wendt5 Aug 2022 9:41 UTC
13 points
3 comments6 min readEA link

Is­lands, nu­clear win­ter, and trade dis­rup­tion as a hu­man ex­is­ten­tial risk factor

Matt Boyd7 Aug 2022 2:18 UTC
34 points
5 comments19 min readEA link

How I Came To Longter­mism On My Own & An Out­sider Per­spec­tive On EA Longtermism

Jordan Arel7 Aug 2022 2:42 UTC
34 points
2 comments20 min readEA link

Ques­tion about ter­minol­ogy for lesser X-risks and S-risks

Laura Leighton8 Aug 2022 4:39 UTC
8 points
4 comments1 min readEA link

[Question] What’s the like­li­hood of ir­recov­er­able civ­i­liza­tional col­lapse if 90% of the pop­u­la­tion dies?

simeon_c7 Aug 2022 19:47 UTC
21 points
3 comments1 min readEA link

“Nor­mal ac­ci­dents” and AI sys­tems

Eleni_A8 Aug 2022 18:43 UTC
4 points
1 comment1 min readEA link
(www.achan.ca)

Clas­sify­ing sources of AI x-risk

Sam Clarke8 Aug 2022 18:18 UTC
36 points
6 comments3 min readEA link

Which of these ar­gu­ments for x-risk do you think we should test?

Wim9 Aug 2022 13:43 UTC
3 points
2 comments1 min readEA link

Anti-squat­ted AI x-risk do­mains index

plex12 Aug 2022 12:00 UTC
52 points
9 comments1 min readEA link

Are we already past the precipice?

Dem0sthenes10 Aug 2022 4:01 UTC
1 point
5 comments2 min readEA link

The His­tory, Episte­mol­ogy and Strat­egy of Tech­nolog­i­cal Res­traint, and les­sons for AI (short es­say)

MMMaas10 Aug 2022 11:00 UTC
61 points
3 comments9 min readEA link
(verfassungsblog.de)

What per­centage of things that could kill us all are “Other” risks?

PCO Moore10 Aug 2022 9:20 UTC
7 points
0 comments4 min readEA link

An open let­ter to my great grand kids’ great grand kids

Locke10 Aug 2022 15:07 UTC
1 point
0 comments13 min readEA link

ISYP Third Nu­clear Age Con­fer­ence New Age, New Think­ing: Challenges of a Third Nu­clear Age 31 Oc­to­ber-2 Novem­ber 2022, in Ber­lin, Ger­many

Daniel Ajudeonu11 Aug 2022 9:43 UTC
4 points
0 comments5 min readEA link

Why say ‘longter­mism’ and not just ‘ex­tinc­tion risk’?

tcelferact10 Aug 2022 23:05 UTC
5 points
4 comments1 min readEA link

Will longter­mists self-efface

Noah Scales12 Aug 2022 2:32 UTC
−1 points
23 comments6 min readEA link

Cos­mic rays could cause ma­jor elec­tronic dis­rup­tion and pose a small ex­is­ten­tial risk

M_Allcock12 Aug 2022 3:30 UTC
11 points
0 comments12 min readEA link

Re­fut­ing longter­mism with Fer­mat’s Last Theorem

astupple16 Aug 2022 12:26 UTC
3 points
32 comments3 min readEA link

Is Civ­i­liza­tion on the Brink of Col­lapse? - Kurzgesagt

Gabriel Mukobi16 Aug 2022 20:06 UTC
29 points
5 comments1 min readEA link
(www.youtube.com)

Could re­al­is­tic de­pic­tions of catas­trophic AI risks effec­tively re­duce said risks?

Matthew Barber17 Aug 2022 20:01 UTC
26 points
11 comments2 min readEA link

[Cross­post]: Huge vol­canic erup­tions: time to pre­pare (Na­ture)

Mike Cassidy19 Aug 2022 12:02 UTC
106 points
1 comment1 min readEA link
(www.nature.com)

Su­perfore­cast­ing Long-Term Risks and Cli­mate Change

LuisEUrtubey19 Aug 2022 18:05 UTC
47 points
0 comments2 min readEA link

[Question] I’m in­ter­view­ing Bear Brau­moel­ler about ‘Only The Dead: The Per­sis­tence of War in the Modern Age’. What should I ask?

Robert_Wiblin19 Aug 2022 15:18 UTC
12 points
2 comments1 min readEA link

Ok Doomer! SRM and Catas­trophic Risk Podcast

Gideon Futerman20 Aug 2022 12:22 UTC
10 points
4 comments1 min readEA link
(open.spotify.com)

BERI, Epoch, and FAR will ex­plain their work & cur­rent job open­ings on­line this Sunday

Rockwell19 Aug 2022 20:34 UTC
7 points
0 comments1 min readEA link

How much dona­tions are needed to neu­tral­ise the an­nual x-risk foot­print of the mean hu­man?

Vasco Grilo22 Sep 2022 6:41 UTC
8 points
2 comments1 min readEA link

Could a ‘per­ma­nent global to­tal­i­tar­ian state’ ever be per­ma­nent?

Geoffrey Miller23 Aug 2022 17:15 UTC
35 points
17 comments1 min readEA link

War in Taiwan and AI Timelines

Jordan_Schneider24 Aug 2022 2:24 UTC
18 points
3 comments9 min readEA link
(www.chinatalk.media)

[Question] How to dis­close a new x-risk?

harsimony24 Aug 2022 1:35 UTC
20 points
8 comments1 min readEA link

Trans­lat­ing The Precipice into Czech: My ex­pe­rience and recommendations

Anna Stadlerova24 Aug 2022 4:51 UTC
87 points
7 comments20 min readEA link

What Is The Most Effec­tive Way To Look At Ex­is­ten­tial Risk?

Phil Tanny26 Aug 2022 11:21 UTC
−2 points
2 comments2 min readEA link

What if states don’t listen? A fun­da­men­tal gap in x-risk re­duc­tion strate­gies

HTC30 Aug 2022 4:27 UTC
24 points
1 comment17 min readEA link

The Hu­man Con­di­tion: A Cru­cial Com­po­nent of Ex­is­ten­tial Risk Calcu­la­tions

Phil Tanny28 Aug 2022 14:51 UTC
−10 points
5 comments1 min readEA link

A cri­tique of strong longtermism

Pablo Rosado28 Aug 2022 19:33 UTC
7 points
11 comments14 min readEA link

What 80000 Hours gets wrong about so­lar geoengineering

Gideon Futerman29 Aug 2022 13:24 UTC
32 points
4 comments23 min readEA link

The Hap­piness Max­i­mizer: Why EA is an x-risk

Obasi Shaw30 Aug 2022 4:29 UTC
8 points
6 comments29 min readEA link

Chain­ing the evil ge­nie: why “outer” AI safety is prob­a­bly easy

titotal30 Aug 2022 13:55 UTC
22 points
10 comments10 min readEA link

The great en­ergy de­scent—Part 1: Can re­new­ables re­place fos­sil fuels?

Corentin Biteau31 Aug 2022 21:51 UTC
25 points
0 comments22 min readEA link

The great en­ergy de­scent—Part 2: Limits to growth and why we prob­a­bly won’t reach the stars

Corentin Biteau31 Aug 2022 21:51 UTC
10 points
0 comments25 min readEA link

The great en­ergy de­scent (short ver­sion) - An im­por­tant thing EA might have missed

Corentin Biteau31 Aug 2022 21:50 UTC
49 points
79 comments10 min readEA link

The great en­ergy de­scent—Post 3: What we can do, what we can’t do

Corentin Biteau31 Aug 2022 21:51 UTC
10 points
0 comments22 min readEA link

In­tel­li­gence failures and a the­ory of change for fore­cast­ing

Nathan_Barnard31 Aug 2022 2:05 UTC
12 points
1 comment10 min readEA link

A Critique of AI Takeover Scenarios

Fods12 · 31 Aug 2022 13:49 UTC
44 points
4 comments · 12 min read · EA link

Biosecurity challenges posed by Dual-Use Research of Concern (DURC)

Byron Cohen · 1 Sep 2022 7:33 UTC
12 points
0 comments · 8 min read · EA link
(raisinghealth.substack.com)

Criticism of the main framework in AI alignment

Michele Campolo · 31 Aug 2022 21:44 UTC
34 points
4 comments · 7 min read · EA link

[Question] Is there a “What We Owe The Future” fellowship study guide?

Jordan Arel · 1 Sep 2022 1:40 UTC
8 points
2 comments · 1 min read · EA link

The top X-factor EA neglects: destabilization of the United States

Yelnats T.J. · 31 Aug 2022 19:18 UTC
12 points
2 comments · 18 min read · EA link

Longtermism Sustainability Unconference Invite

Ben Yeoh · 1 Sep 2022 12:34 UTC
3 points
0 comments · 2 min read · EA link

What could a fellowship scheme aimed at tackling the biggest threats to humanity look like?

james_r · 1 Sep 2022 15:29 UTC
4 points
0 comments · 5 min read · EA link

[Question] How much does climate change & the decline of liberal democracy indirectly increase the probability of an x-risk?

Arden P. B. Wiese · 1 Sep 2022 18:33 UTC
7 points
7 comments · 1 min read · EA link

The future of humanity

Dem0sthenes · 1 Sep 2022 22:34 UTC
1 point
0 comments · 8 min read · EA link

Present-day good intentions aren’t sufficient to make the longterm future good in expectation

trurl · 2 Sep 2022 3:22 UTC
6 points
0 comments · 14 min read · EA link

Path dependence and its impact on long-term outcomes

Archanaa · 2 Sep 2022 4:27 UTC
11 points
1 comment · 13 min read · EA link

Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In

Richard Ren · 2 Sep 2022 7:53 UTC
44 points
10 comments · 16 min read · EA link

Criticism of EA and longtermism

St. Ignorant · 2 Sep 2022 7:23 UTC
2 points
0 comments · 14 min read · EA link

A Case Against Strong Longtermism

A. Wolff · 2 Sep 2022 16:40 UTC
9 points
4 comments · 39 min read · EA link

Seeking feedback/gauging interest: Crowdsourcing x crowdfunding for existential risk ventures

Ruby Tang · 4 Sep 2022 16:18 UTC
4 points
0 comments · 1 min read · EA link

Interrelatedness of x-risks and systemic fragilities

Naryan · 4 Sep 2022 21:36 UTC
17 points
6 comments · 2 min read · EA link

An entire category of risks is undervalued by EA [Summary of previous forum post]

Richard Ren · 5 Sep 2022 15:07 UTC
65 points
5 comments · 5 min read · EA link

Time/Talent/Money Contributors to Existential Risk Ventures

Ruby Tang · 6 Sep 2022 9:52 UTC
1 point
2 comments · 1 min read · EA link

[Question] Would creating and burying a series of doomsday chests to reboot civilization be a worthy use of resources?

ewu · 7 Sep 2022 2:45 UTC
5 points
1 comment · 1 min read · EA link

It’s (not) how you use it

Eleni_A · 7 Sep 2022 13:28 UTC
6 points
3 comments · 2 min read · EA link

EAs interested in US policy: Consider the Scoville Fellowship

US Policy Careers · 7 Sep 2022 17:06 UTC
34 points
0 comments · 9 min read · EA link

A model about the effect of total existential risk on career choice

Jonas Moss · 10 Sep 2022 7:18 UTC
11 points
4 comments · 2 min read · EA link

AI Risk Intro 1: Advanced AI Might Be Very Bad

LRudL · 11 Sep 2022 10:57 UTC
22 points
0 comments · 30 min read · EA link

[Question] How have nuclear winter models evolved?

Jordan Arel · 11 Sep 2022 22:40 UTC
14 points
3 comments · 1 min read · EA link

Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk

eSpencer · 16 Sep 2022 4:44 UTC
12 points
0 comments · 3 min read · EA link

Civilization Recovery Kits

Soof Golan · 21 Sep 2022 9:26 UTC
25 points
9 comments · 2 min read · EA link

AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death

titotal · 22 Sep 2022 15:00 UTC
34 points
9 comments · 15 min read · EA link

Summary: the Global Catastrophic Risk Management Act of 2022

Anthony Fleming · 23 Sep 2022 3:19 UTC
20 points
3 comments · 2 min read · EA link

Global Challenges Project—Existential Risk Workshop

Emma Abele · 23 Sep 2022 22:13 UTC
3 points
0 comments · 1 min read · EA link

Climate-contingent Finance, and A Generalized Mechanism for X-Risk Reduction Financing

johnjnay · 26 Sep 2022 13:23 UTC
6 points
1 comment · 26 min read · EA link

Lessons from Three Mile Island for AI Warning Shots

NickGabs · 26 Sep 2022 2:47 UTC
38 points
0 comments · 12 min read · EA link

NASA will re-direct an asteroid tonight as a test for planetary defence (link-post)

Ben Stewart · 26 Sep 2022 4:58 UTC
68 points
14 comments · 1 min read · EA link
(theconversation.com)

Assessing SERI/CHERI/CERI summer program impact by surveying fellows

LRudL · 26 Sep 2022 15:29 UTC
99 points
11 comments · 15 min read · EA link

AI Safety Endgame Stories

IvanVendrov · 28 Sep 2022 17:12 UTC
23 points
1 comment · 1 min read · EA link

Carnegie Council MisUnderstands Longtermism

Jeff A · 30 Sep 2022 2:57 UTC
6 points
8 comments · 1 min read · EA link
(www.carnegiecouncil.org)

The threat of synthetic bioterror demands even further action and leadership

dEAsign · 30 Sep 2022 8:58 UTC
8 points
0 comments · 2 min read · EA link

Eli’s review of “Is power-seeking AI an existential risk?”

elifland · 30 Sep 2022 12:21 UTC
56 points
3 comments · 1 min read · EA link

Longtermists should take climate change very seriously

Nir Eyal · 3 Oct 2022 18:33 UTC
29 points
10 comments · 8 min read · EA link

Samotsvety Nuclear Risk update October 2022

NunoSempere · 3 Oct 2022 18:10 UTC
262 points
52 comments · 16 min read · EA link

Overreacting to current events can be very costly

Kelsey Piper · 4 Oct 2022 21:30 UTC
280 points
71 comments · 4 min read · EA link

[Question] Tracking Compute Stocks and Flows: Case Studies?

Cullen_OKeefe · 5 Oct 2022 17:54 UTC
34 points
1 comment · 1 min read · EA link

Probability of extinction for various types of catastrophes

Vasco Grilo · 9 Oct 2022 15:30 UTC
16 points
0 comments · 10 min read · EA link

Sheltering humanity against x-risk: report from the SHELTER weekend

Janne M. Korhonen · 10 Oct 2022 15:09 UTC
68 points
3 comments · 5 min read · EA link

Radical Longtermism and a Newfound Sense of Uncertainty

Parrhesia · 13 Oct 2022 2:12 UTC
4 points
2 comments · 3 min read · EA link

Sixty years after the Cuban Missile Crisis, a new era of global catastrophic risks

christian.r · 13 Oct 2022 11:25 UTC
30 points
0 comments · 1 min read · EA link
(thebulletin.org)

The Vitalik Buterin Fellowship in AI Existential Safety is open for applications!

Cynthia Chen · 14 Oct 2022 3:23 UTC
36 points
0 comments · 2 min read · EA link

The US expands restrictions on AI exports to China. What are the x-risk effects?

Stephen Clare · 14 Oct 2022 18:17 UTC
145 points
17 comments · 4 min read · EA link

[Job]: AI Standards Development Research Assistant

Tony Barrett · 14 Oct 2022 20:18 UTC
13 points
0 comments · 2 min read · EA link

Shallow Report on Nuclear War

Joel Tan (CEARCH) · 18 Oct 2022 7:36 UTC
33 points
14 comments · 18 min read · EA link

Centre for Exploratory Altruism Research (CEARCH)

Joel Tan (CEARCH) · 18 Oct 2022 7:23 UTC
114 points
15 comments · 5 min read · EA link

‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

Froolow · 18 Oct 2022 22:54 UTC
97 points
63 comments · 39 min read · EA link

How to Take Over the Universe (in Three Easy Steps)

Writer · 18 Oct 2022 15:04 UTC
11 points
0 comments · 12 min read · EA link
(youtu.be)

[Question] Is there an organization or individuals working on how to bootstrap industrial civilization?

steve6320 · 21 Oct 2022 3:36 UTC
15 points
8 comments · 1 min read · EA link

Let us know how psychology can help increase your impact

Inga · 21 Oct 2022 10:32 UTC
29 points
0 comments · 1 min read · EA link

AGI will arrive by the end of this decade either as a unicorn or as a black swan

Yuri Barzov · 21 Oct 2022 10:50 UTC
−4 points
5 comments · 3 min read · EA link

How might we align transformative AI if it’s developed very soon?

Holden Karnofsky · 29 Aug 2022 15:48 UTC
153 points
16 comments · 44 min read · EA link

Is space colonization desirable? Review of Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity

sphor · 7 Oct 2022 12:26 UTC
13 points
3 comments · 3 min read · EA link
(bostonreview.net)

AGI and Lock-In

Lukas_Finnveden · 29 Oct 2022 1:56 UTC
121 points
23 comments · 10 min read · EA link
(docs.google.com)

How to reconsider a prediction

Noah Scales · 25 Oct 2022 21:28 UTC
2 points
2 comments · 4 min read · EA link

[Question] How binary is longterm value?

Vasco Grilo · 1 Nov 2022 15:21 UTC
13 points
15 comments · 1 min read · EA link

Announcing the Founders Pledge Global Catastrophic Risks Fund

christian.r · 26 Oct 2022 13:39 UTC
49 points
1 comment · 3 min read · EA link

A Critique of Longtermism by Popular YouTube Science Channel, Sabine Hossenfelder: “Elon Musk & The Longtermists: What Is Their Plan?”

Ram Aditya · 29 Oct 2022 17:31 UTC
56 points
21 comments · 2 min read · EA link

EA has gotten it very wrong on climate change—a Canadian case study

Stephen Beard · 29 Oct 2022 19:30 UTC
6 points
8 comments · 14 min read · EA link

Modelling civilisation after a catastrophe

Arepo · 30 Oct 2022 16:26 UTC
40 points
5 comments · 11 min read · EA link

[Cause Exploration Prizes] NOT Getting Absolutely Hosed by a Solar Flare

aurellem · 26 Aug 2022 8:23 UTC
5 points
0 comments · 2 min read · EA link

[Question] Does Factory Farming Make Natural Pandemics More Likely?

brook · 31 Oct 2022 12:50 UTC
12 points
2 comments · 1 min read · EA link

Why do we post our AI safety plans on the Internet?

Peter S. Park · 31 Oct 2022 16:27 UTC
13 points
22 comments · 11 min read · EA link

Summary of Deep Time Reckoning by Vincent Ialenti

vinegar10@gmail.com · 31 Oct 2022 20:00 UTC
4 points
0 comments · 10 min read · EA link

[Question] Tractors that need to be connected to function?

mikbp · 31 Oct 2022 20:42 UTC
4 points
2 comments · 1 min read · EA link

A new database of nanotechnology strategy resources

Ben Snodin · 5 Nov 2022 5:20 UTC
37 points
0 comments · 1 min read · EA link

a casual intro to AI doom and alignment

carado · 2 Nov 2022 9:42 UTC
10 points
1 comment · 1 min read · EA link

WFW?: Opportunity and Theory of Impact

DavidCorfield · 2 Nov 2022 0:45 UTC
2 points
5 comments · 14 min read · EA link
(www.whatfuture.world)

AI Safety Needs Great Product Builders

goodgravy · 2 Nov 2022 11:33 UTC
44 points
1 comment · 6 min read · EA link

A Theologian’s Response to Anthropogenic Existential Risk

Fr Peter Wyg · 3 Nov 2022 4:37 UTC
100 points
17 comments · 11 min read · EA link

Open Letter Against Reckless Nuclear Escalation and Use

Vasco Grilo · 3 Nov 2022 15:08 UTC
10 points
2 comments · 1 min read · EA link
(futureoflife.org)

My summary of “Pragmatic AI Safety”

Eleni_A · 5 Nov 2022 14:47 UTC
14 points
0 comments · 5 min read · EA link

[Video] How having Fast Fourier Transforms sooner could have helped with Nuclear Disarmament—Veritasium

mako yass · 3 Nov 2022 20:52 UTC
12 points
1 comment · 1 min read · EA link
(www.youtube.com)

You won’t solve alignment without agent foundations

Samin · 6 Nov 2022 8:07 UTC
12 points
0 comments · 1 min read · EA link

The future of nuclear war

turchin · 21 May 2022 8:00 UTC
34 points
2 comments · 35 min read · EA link

4 Key Assumptions in AI Safety

Prometheus · 7 Nov 2022 10:50 UTC
5 points
0 comments · 1 min read · EA link

Test Your Knowledge of the World’s Biggest Problems

AndreFerretti · 9 Nov 2022 16:04 UTC
18 points
2 comments · 1 min read · EA link

Anatomizing Chemical and Biological Non-State Adversaries

ncmoulios · 11 Nov 2022 21:23 UTC
2 points
0 comments · 1 min read · EA link

Time consistency for the EA community: Projects that bridge the gap between near-term bootstrapping and long-term targets

Arturo Macias · 12 Nov 2022 7:44 UTC
3 points
0 comments · 7 min read · EA link

AI Alignment X-Risk Analysis & Management | William James Draft

William James · 12 Nov 2022 22:22 UTC
1 point
0 comments · 23 min read · EA link

The EA communities that emerged from the Chicxulub crater

Silvia Fernández · 14 Nov 2022 19:46 UTC
10 points
1 comment · 8 min read · EA link

Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics by Kevin Esvelt

Jeremy · 15 Nov 2022 16:23 UTC
165 points
6 comments · 1 min read · EA link
(dam.gcsp.ch)

The limited upside of interpretability

Peter S. Park · 15 Nov 2022 20:22 UTC
14 points
0 comments · 10 min read · EA link

Cruxes for nuclear risk reduction efforts—A proposal

Sarah Weiler · 16 Nov 2022 6:03 UTC
34 points
0 comments · 24 min read · EA link

What are the most promising strategies for reducing the probability of nuclear war?

Sarah Weiler · 16 Nov 2022 6:09 UTC
36 points
1 comment · 27 min read · EA link

A case against focusing on tail-end nuclear war risks

Sarah Weiler · 16 Nov 2022 6:08 UTC
27 points
11 comments · 10 min read · EA link

[Doctoral seminar] Chemical and biological weapons: International investigative mechanisms

ncmoulios · 17 Nov 2022 12:26 UTC
17 points
0 comments · 1 min read · EA link
(www.asser.nl)

Can “sustainability” help us safeguard the future?

simonfriederich · 24 Nov 2022 14:02 UTC
3 points
1 comment · 2 min read · EA link

Birth rates and civilisation doom loop

deus777 · 18 Nov 2022 10:56 UTC
−40 points
0 comments · 2 min read · EA link

Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income

Michael Simm · 18 Nov 2022 8:13 UTC
14 points
3 comments · 24 min read · EA link

Multiple high-impact PhD student positions

Denkenberger · 19 Nov 2022 0:02 UTC
31 points
0 comments · 3 min read · EA link

Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration

Peter Rautenbach · 18 Nov 2022 13:01 UTC
59 points
3 comments · 50 min read · EA link

New report on how much computational power it takes to match the human brain (Open Philanthropy)

Aaron Gertler · 15 Sep 2020 1:06 UTC
40 points
1 comment · 18 min read · EA link
(www.openphilanthropy.org)

The SBF Saga Is Not A Reason to Throw Out the Moral Calculus

sampajanna · 20 Nov 2022 5:51 UTC
14 points
2 comments · 3 min read · EA link

[Question] Benefits/Risks of Scott Aaronson’s Orthodox/Reform Framing for AI Alignment

Jeremy · 21 Nov 2022 17:47 UTC
15 points
5 comments · 1 min read · EA link
(scottaaronson.blog)

Toby Ord’s new report on lessons from the development of the atomic bomb

Ishan Mukherjee · 22 Nov 2022 10:37 UTC
64 points
0 comments · 1 min read · EA link
(www.governance.ai)

Success Maximization: An Alternative to Expected Utility Theory and a Generalization of Maxipok to Moral Uncertainty

Mahendra Prasad · 26 Nov 2022 1:53 UTC
13 points
3 comments · 2 min read · EA link

Good Futures Initiative: Winter Project Internship

Aris Richardson · 27 Nov 2022 23:27 UTC
63 points
5 comments · 4 min read · EA link

Proposal for a Nuclear Off-Ramp Toolkit

Stan Pinsent · 29 Nov 2022 16:02 UTC
15 points
0 comments · 3 min read · EA link

“The Physicists”: A play about extinction and the responsibility of scientists

Lara_TH · 29 Nov 2022 16:53 UTC
25 points
0 comments · 8 min read · EA link

Simple BOTEC on X-Risk Work for Neartermists

Phosphorous · 2 Dec 2022 18:41 UTC
15 points
8 comments · 4 min read · EA link