Existential risk

An existential risk is a risk that threatens the destruction of the long-term potential of life.[1] An existential risk could threaten the extinction of humans (and other sentient beings), or it could threaten some other unrecoverable collapse or permanent failure to achieve a potential good state. Natural risks such as those posed by asteroids or supervolcanoes could be existential risks, as could anthropogenic (human-caused) risks like accidents from synthetic biology or unaligned artificial intelligence.

Estimating the probability of existential risk from different factors is difficult, but some estimates exist; in The Precipice, for example, Toby Ord puts the total existential risk over the next century at roughly one in six.[1]

Some view reducing existential risks as a key moral priority, for a variety of reasons.[2] Some people simply view the current estimates of existential risk as unacceptably high. Other authors argue that existential risks are especially important because the long-run future of humanity matters a great deal.[3] Many believe that there is no intrinsic moral difference between the importance of a life today and one in a hundred years. However, there may be many more people in the future than there are now. Given these assumptions, existential risks threaten not only the beings alive right now, but also the enormous number of lives yet to be lived. One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born.[4] Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and poorly understood that work to reduce them is less cost-effective than work on other promising areas.
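
To make the arithmetic behind this argument concrete, the sketch below works through a toy expected-value calculation in Python. Every number in it is a hypothetical placeholder chosen for illustration, not an estimate from the literature.

```python
# Toy expected-value sketch of the longtermist argument above.
# All numbers are hypothetical placeholders, not published estimates.

future_lives = 1e14      # assumed number of lives yet to be lived if humanity survives
risk_reduction = 1e-4    # assumed absolute cut in extinction probability from some intervention

# If lives count equally regardless of when they occur, the intervention's
# expected value (in lives) is the risk reduction times the lives at stake.
expected_lives_saved = risk_reduction * future_lives
print(f"Expected future lives preserved: {expected_lives_saved:,.0f}")
# -> Expected future lives preserved: 10,000,000,000

# The cost-effectiveness objection grants this arithmetic but disputes the
# inputs: if risk_reduction is far smaller or far more uncertain than assumed,
# other interventions may do more good per dollar.
```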

In The Precipice: Existential Risk and the Future of Humanity, Toby Ord offers a range of policy and research recommendations for handling existential risks.[5]

Further reading

Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.

Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.

Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Ord, Toby (2020) Existential risks to humanity in Pedro Conceição (ed.) The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.

Sánchez, Sebastián (2022) Timeline of existential risk, Timelines Wiki.

Related entries

civilizational collapse | criticism of longtermism and existential risk studies | dystopia | estimation of existential risks | ethics of existential risk | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risk

1. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

2. Todd, Benjamin (2017) The case for reducing existential risks, 80,000 Hours website. (Updated June 2022.)

3. Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.

4. Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, July 21 (updated 1 December 2020).

5. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, pp. 280–281.

Venn diagrams of existential, global, and suffering catastrophes
MichaelA · 15 Jul 2020 12:28 UTC · 79 points · 7 comments · 7 min read · EA link

“Long-Termism” vs. “Existential Risk”
Scott Alexander · 6 Apr 2022 21:41 UTC · 520 points · 81 comments · 3 min read · EA link

Katja Grace: Let’s think about slowing down AI
peterhartree · 23 Dec 2022 0:57 UTC · 83 points · 6 comments · 2 min read · EA link (worldspiritsockpuppet.substack.com)

The expected value of extinction risk reduction is positive
JanBrauner · 9 Dec 2018 8:00 UTC · 56 points · 22 comments · 39 min read · EA link

Charting the precipice: The time of perils and prioritizing x-risk
David Rhys Bernard · 24 Oct 2023 16:25 UTC · 86 points · 14 comments · 25 min read · EA link

Is x-risk the most cost-effective if we count only the next few generations?
Laura Duffy · 30 Oct 2023 12:43 UTC · 116 points · 7 comments · 20 min read · EA link (docs.google.com)

The Future Might Not Be So Great
Jacy · 30 Jun 2022 13:01 UTC · 140 points · 118 comments · 32 min read · EA link (www.sentienceinstitute.org)

Nick Bostrom – Existential Risk Prevention as Global Priority
Zach Stein-Perlman · 1 Feb 2013 17:00 UTC · 15 points · 1 comment · 1 min read · EA link (www.existential-risk.org)

Existential risks are not just about humanity
MichaelA · 28 Apr 2020 0:09 UTC · 35 points · 0 comments · 5 min read · EA link

What is existential security?
MichaelA · 1 Sep 2020 9:40 UTC · 34 points · 1 comment · 6 min read · EA link

Existential risk as common cause
Gavin · 5 Dec 2018 14:01 UTC · 49 points · 22 comments · 5 min read · EA link

A longtermist critique of “The expected value of extinction risk reduction is positive”
Anthony DiGiovanni · 1 Jul 2021 21:01 UTC · 125 points · 10 comments · 32 min read · EA link

A proposed hierarchy of longtermist concepts
Arepo · 30 Oct 2022 16:26 UTC · 32 points · 13 comments · 4 min read · EA link

On the assessment of volcanic eruptions as global catastrophic or existential risks
Mike Cassidy · 13 Oct 2021 14:32 UTC · 112 points · 18 comments · 19 min read · EA link

Database of existential risk estimates
MichaelA · 15 Apr 2020 12:43 UTC · 130 points · 37 comments · 5 min read · EA link

Excerpts from “Doing EA Better” on x-risk methodology
BrownHairedEevee · 26 Jan 2023 1:04 UTC · 21 points · 5 comments · 6 min read · EA link (forum.effectivealtruism.org)

Existential Risk Observatory: results and 2022 targets
Otto · 14 Jan 2022 13:52 UTC · 22 points · 6 comments · 4 min read · EA link

Clarifying existential risks and existential catastrophes
MichaelA · 24 Apr 2020 13:27 UTC · 38 points · 3 comments · 7 min read · EA link

How bad would human extinction be?
arvomm · 23 Oct 2023 12:01 UTC · 118 points · 23 comments · 18 min read · EA link

Some considerations for different ways to reduce x-risk
Jacy · 4 Feb 2016 3:21 UTC · 28 points · 34 comments · 5 min read · EA link

Objectives of longtermist policy making
Henrik Øberg Myhre · 10 Feb 2021 18:26 UTC · 54 points · 7 comments · 22 min read · EA link

The Importance of Unknown Existential Risks
MichaelDickens · 23 Jul 2020 19:09 UTC · 72 points · 11 comments · 9 min read · EA link

Quantifying the probability of existential catastrophe: A reply to Beard et al.
MichaelA · 10 Aug 2020 5:56 UTC · 21 points · 3 comments · 3 min read · EA link (gcrinstitute.org)

X-risks to all life v. to humans
RobertHarling · 3 Jun 2020 15:40 UTC · 66 points · 33 comments · 4 min read · EA link

Some thoughts on Toby Ord’s existential risk estimates
MichaelA · 7 Apr 2020 2:19 UTC · 67 points · 33 comments · 9 min read · EA link

Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment
Jacy · 20 Feb 2018 18:29 UTC · 106 points · 72 comments · 36 min read · EA link (www.sentienceinstitute.org)

Diversity In Existential Risk Studies Survey: SJ Beard
GideonF · 25 Nov 2022 16:29 UTC · 2 points · 0 comments · 1 min read · EA link

How bad would nuclear winter caused by a US-Russia nuclear exchange be?
Luisa_Rodriguez · 20 Jun 2019 1:48 UTC · 136 points · 18 comments · 40 min read · EA link

Effective strategies for changing public opinion: A literature review
Jamie_Harris · 9 Nov 2021 14:09 UTC · 81 points · 2 comments · 37 min read · EA link (www.sentienceinstitute.org)

Enlightenment Values in a Vulnerable World
Maxwell Tabarrok · 18 Jul 2022 11:54 UTC · 64 points · 18 comments · 31 min read · EA link

The 25 researchers who have published the largest number of academic articles on existential risk
FJehn · 12 Aug 2023 8:57 UTC · 34 points · 21 comments · 4 min read · EA link (existentialcrunch.substack.com)

The Odyssean Process
Odyssean Institute · 24 Nov 2023 13:48 UTC · 24 points · 6 comments · 1 min read · EA link (www.odysseaninstitute.org)

The trouble with tipping points: Are we steering towards a climate catastrophe or a manageable challenge?
FJehn · 19 Jun 2023 8:57 UTC · 24 points · 18 comments · 8 min read · EA link (existentialcrunch.substack.com)

Beyond Simple Existential Risk: Survival in a Complex Interconnected World
GideonF · 21 Nov 2022 14:35 UTC · 83 points · 67 comments · 21 min read · EA link

Existential risk pessimism and the time of perils
David Thorstad · 12 Aug 2022 14:42 UTC · 173 points · 67 comments · 21 min read · EA link

Summary of posts on XPT forecasts on AI risk and timelines
Forecasting Research Institute · 25 Jul 2023 8:42 UTC · 28 points · 2 comments · 4 min read · EA link

ALTER Israel—Mid-year 2022 Update
Davidmanheim · 12 Jun 2022 9:22 UTC · 63 points · 0 comments · 2 min read · EA link

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 2:58 UTC · 147 points · 28 comments · 64 min read · EA link

Causal diagrams of the paths to existential catastrophe
MichaelA · 1 Mar 2020 14:08 UTC · 51 points · 13 comments · 12 min read · EA link

Reducing long-term risks from malevolent actors
David_Althaus · 29 Apr 2020 8:55 UTC · 325 points · 88 comments · 37 min read · EA link

Global catastrophic risks law approved in the United States
JorgeTorresC · 7 Mar 2023 14:28 UTC · 157 points · 7 comments · 1 min read · EA link (riesgoscatastroficosglobales.com)

[Question] How Much Does New Research Inform Us About Existential Climate Risk?
zdgroff · 22 Jul 2020 23:47 UTC · 63 points · 5 comments · 1 min read · EA link

Mitigating x-risk through modularity
Toby Newberry · 17 Dec 2020 19:54 UTC · 103 points · 6 comments · 14 min read · EA link

Nathan A. Sears (1987-2023)
HaydnBelfield · 29 Mar 2023 16:07 UTC · 284 points · 7 comments · 4 min read · EA link

Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism
BrownHairedEevee · 3 Jul 2023 6:33 UTC · 78 points · 6 comments · 6 min read · EA link (ineffectivealtruismblog.com)

[Question] Concrete, existing examples of high-impact risks from AI?
freedomandutility · 15 Apr 2023 22:19 UTC · 9 points · 1 comment · 1 min read · EA link

Prior probability of this being the most important century
Vasco Grilo · 15 Jul 2023 7:18 UTC · 8 points · 2 comments · 2 min read · EA link

AGI Catastrophe and Takeover: Some Reference Class-Based Priors
zdgroff · 24 May 2023 19:14 UTC · 98 points · 8 comments · 6 min read · EA link

The Parable of the Boy Who Cried 5% Chance of Wolf
Kat Woods · 15 Aug 2022 14:22 UTC · 76 points · 8 comments · 2 min read · EA link

How much should governments pay to prevent catastrophes? Longtermism’s limited role
EJT · 19 Mar 2023 16:50 UTC · 258 points · 35 comments · 35 min read · EA link (philpapers.org)

AI Governance: Opportunity and Theory of Impact
Allan Dafoe · 17 Sep 2020 6:30 UTC · 256 points · 16 comments · 12 min read · EA link

Early-warning Forecasting Center: What it is, and why it’d be cool
Linch · 14 Mar 2022 19:20 UTC · 57 points · 8 comments · 11 min read · EA link

Announcing The Most Important Century Writing Prize
michel · 31 Oct 2022 21:37 UTC · 48 points · 0 comments · 2 min read · EA link

Existential Risk and Economic Growth
leopold · 3 Sep 2019 13:23 UTC · 112 points · 31 comments · 1 min read · EA link

The option value argument doesn’t work when it’s most needed
Winston · 24 Oct 2023 19:40 UTC · 122 points · 6 comments · 6 min read · EA link

A Landscape Analysis of Institutional Improvement Opportunities
IanDavidMoss · 21 Mar 2022 0:15 UTC · 97 points · 25 comments · 29 min read · EA link

X-risk Mitigation Does Actually Require Longtermism
𝕮𝖎𝖓𝖊𝖗𝖆 · 13 Nov 2022 19:40 UTC · 35 points · 6 comments · 1 min read · EA link

The Governance Problem and the “Pretty Good” X-Risk
Zach Stein-Perlman · 28 Aug 2021 20:00 UTC · 23 points · 4 comments · 11 min read · EA link

Book Review: The Precipice
Aaron Gertler · 9 Apr 2020 21:21 UTC · 39 points · 0 comments · 17 min read · EA link (slatestarcodex.com)

Rethink’s CURVE Sequence—The Good and the Gaps
Jack Malde · 28 Nov 2023 1:06 UTC · 96 points · 7 comments · 10 min read · EA link

Crucial questions for longtermists
MichaelA · 29 Jul 2020 9:39 UTC · 102 points · 17 comments · 14 min read · EA link

Eight high-level uncertainties about global catastrophic and existential risk
SiebeRozendal · 28 Nov 2019 14:47 UTC · 85 points · 9 comments · 6 min read · EA link

Some global catastrophic risk estimates
Tamay · 10 Feb 2021 19:32 UTC · 106 points · 15 comments · 1 min read · EA link

My personal cruxes for focusing on existential risks / longtermism / anything other than just video games
MichaelA · 13 Apr 2021 5:50 UTC · 55 points · 28 comments · 2 min read · EA link

Can a terrorist attack cause human extinction? Not on priors
Vasco Grilo · 2 Dec 2023 8:20 UTC · 42 points · 8 comments · 15 min read · EA link

[Question] Where should I give to help prevent nuclear war?
Luke Eure · 19 Nov 2023 5:05 UTC · 20 points · 9 comments · 1 min read · EA link

Interactively Visualizing X-Risk
Conor Barnes · 29 Jul 2022 16:43 UTC · 50 points · 27 comments · 2 min read · EA link

Draft report on existential risk from power-seeking AI
Joe_Carlsmith · 28 Apr 2021 21:41 UTC · 87 points · 34 comments · 1 min read · EA link

AMA: Christian Ruhl (senior global catastrophic risk researcher at Founders Pledge)
Lizka · 26 Sep 2023 9:50 UTC · 68 points · 28 comments · 1 min read · EA link

Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)
MichaelA · 1 May 2023 15:04 UTC · 34 points · 0 comments · 11 min read · EA link (docs.google.com)

Reading Group Launch: Introduction to Nuclear Issues, March-April 2023
Isabel · 3 Feb 2023 14:55 UTC · 11 points · 2 comments · 3 min read · EA link

“Disappointing Futures” Might Be As Important As Existential Risks
MichaelDickens · 3 Sep 2020 1:15 UTC · 96 points · 18 comments · 25 min read · EA link

Can a war cause human extinction? Once again, not on priors
Vasco Grilo · 25 Jan 2024 7:56 UTC · 67 points · 29 comments · 18 min read · EA link

Information security careers for GCR reduction
ClaireZabel · 20 Jun 2019 23:56 UTC · 187 points · 35 comments · 8 min read · EA link

Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk) - Reflective altruism
BrownHairedEevee · 3 Jul 2023 6:34 UTC · 87 points · 7 comments · 6 min read · EA link (ineffectivealtruismblog.com)

Kevin Esvelt: Mitigating catastrophic biorisks
EA Global · 3 Sep 2020 18:11 UTC · 32 points · 0 comments · 22 min read · EA link (www.youtube.com)

Giving Now vs. Later for Existential Risk: An Initial Approach
MichaelDickens · 29 Aug 2020 1:04 UTC · 14 points · 2 comments · 28 min read · EA link

‘Existential Risk and Growth’ Deep Dive #1 - Summary of the Paper
Alex HT · 21 Jun 2020 9:22 UTC · 64 points · 7 comments · 10 min read · EA link

Apply to join SHELTER Weekend this August
Joel Becker · 15 Jun 2022 14:21 UTC · 108 points · 19 comments · 2 min read · EA link

2021 ALLFED Highlights
Ross_Tieman · 17 Nov 2021 15:24 UTC · 45 points · 1 comment · 16 min read · EA link

Improving disaster shelters to increase the chances of recovery from a global catastrophe
Nick_Beckstead · 19 Feb 2014 22:17 UTC · 24 points · 5 comments · 26 min read · EA link

Progress studies vs. longtermist EA: some differences
Max_Daniel · 31 May 2021 21:35 UTC · 83 points · 27 comments · 3 min read · EA link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)
titotal · 17 May 2023 11:58 UTC · 41 points · 3 comments · 15 min read · EA link

Existential Risk Modelling with Continuous-Time Markov Chains
Radical Empath Ismam · 23 Jan 2023 20:32 UTC · 87 points · 9 comments · 12 min read · EA link

The universal Anthropocene or things we can learn from exo-civilisations, even if we never meet any
FJehn · 26 Apr 2022 12:06 UTC · 11 points · 0 comments · 8 min read · EA link

The timing of labour aimed at reducing existential risk
Toby_Ord · 24 Jul 2014 4:08 UTC · 21 points · 7 comments · 7 min read · EA link

[Question] Nuclear safety/security: Why doesn’t EA prioritize it more?
Rockwell · 30 Aug 2023 21:43 UTC · 33 points · 20 comments · 1 min read · EA link

Experimental longtermism: theory needs data
Jan_Kulveit · 15 Mar 2022 10:05 UTC · 186 points · 9 comments · 4 min read · EA link

Apply to the Cavendish Labs Fellowship (by 4/15)
Derik K · 3 Apr 2023 23:06 UTC · 35 points · 2 comments · 1 min read · EA link

Research project idea: How should EAs react to funders pulling out of the nuclear risk space?
MichaelA · 15 Apr 2023 14:37 UTC · 12 points · 0 comments · 3 min read · EA link

Two important recent AI Talks - Gebru and Lazar
GideonF · 6 Mar 2023 1:30 UTC · −12 points · 5 comments · 1 min read · EA link

Nuclear risk research ideas: Summary & introduction
MichaelA · 8 Apr 2022 11:17 UTC · 103 points · 4 comments · 7 min read · EA link

Introducing the Existential Risks Introductory Course (ERIC)
Nandini Shiralkar · 19 Aug 2022 15:57 UTC · 57 points · 14 comments · 7 min read · EA link

Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model)
Yassin Alaya · 12 Aug 2021 20:14 UTC · 13 points · 4 comments · 1 min read · EA link

Could Ukraine retake Crimea?
mhint199 · 1 May 2023 1:06 UTC · 6 points · 3 comments · 4 min read · EA link

Bear Braumoeller has passed away
Stephen Clare · 5 May 2023 14:06 UTC · 152 points · 4 comments · 1 min read · EA link

Review: What We Owe The Future
Kelsey Piper · 21 Nov 2022 21:41 UTC · 165 points · 3 comments · 1 min read · EA link (asteriskmag.com)

Climate anomalies and societal collapse
FJehn · 8 Feb 2024 9:49 UTC · 13 points · 6 comments · 10 min read · EA link (existentialcrunch.substack.com)

Risks from atomically precise manufacturing—Problem profile
Benjamin Hilton · 9 Aug 2022 13:41 UTC · 53 points · 4 comments · 5 min read · EA link (80000hours.org)

Bounty to disclose new x-risks
acylhalide · 5 Nov 2021 12:53 UTC · 1 point · 5 comments · 4 min read · EA link

Most* small probabilities aren’t pascalian
Gregory Lewis · 7 Aug 2022 16:17 UTC · 212 points · 20 comments · 6 min read · EA link

A New X-Risk Factor: Brain-Computer Interfaces
Jack · 10 Aug 2020 10:24 UTC · 74 points · 12 comments · 42 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not
skluug · 8 Mar 2022 19:17 UTC · 188 points · 43 comments · 10 min read · EA link (skluug.substack.com)

Ambiguity aversion and reduction of X-risks: A modelling situation
Benedikt Schmidt · 13 Sep 2021 7:16 UTC · 29 points · 6 comments · 6 min read · EA link

What If 99% of Humanity Vanished? (A Happier World video)
Jeroen Willems · 16 Feb 2023 17:10 UTC · 16 points · 1 comment · 3 min read · EA link

Major UN report discusses existential risk and future generations (summary)
finm · 17 Sep 2021 15:51 UTC · 314 points · 5 comments · 12 min read · EA link

[Question] Projects tackling nuclear risk?
Sanjay · 29 May 2020 22:41 UTC · 29 points · 3 comments · 1 min read · EA link

The value of x-risk reduction
Nathan_Barnard · 21 May 2022 19:40 UTC · 19 points · 10 comments · 4 min read · EA link

Announcing New Beginner-friendly Book on AI Safety and Risk
Darren McKee · 25 Nov 2023 15:57 UTC · 108 points · 9 comments · 1 min read · EA link

The end of the Bronze Age as an example of a sudden collapse of civilization
FJehn · 28 Oct 2020 12:55 UTC · 53 points · 7 comments · 8 min read · EA link

Famine’s Role in Societal Collapse
FJehn · 5 Oct 2023 6:19 UTC · 14 points · 1 comment · 6 min read · EA link (existentialcrunch.substack.com)

Technical AGI safety research outside AI
richard_ngo · 18 Oct 2019 15:02 UTC · 89 points · 5 comments · 4 min read · EA link

Reasons to have hope
jwpieters · 20 Apr 2023 10:19 UTC · 53 points · 4 comments · 1 min read · EA link

Bottlenecks and Solutions for the X-Risk Ecosystem
FlorentBerthet · 8 Oct 2018 12:47 UTC · 53 points · 12 comments · 8 min read · EA link

ALLFED 2020 Highlights
AronM · 19 Nov 2020 22:06 UTC · 51 points · 5 comments · 27 min read · EA link

A Biosecurity and Biorisk Reading+ List
Tessa · 14 Mar 2021 2:30 UTC · 135 points · 13 comments · 12 min read · EA link

Democratising Risk—or how EA deals with critics
CarlaZoeC · 28 Dec 2021 15:05 UTC · 260 points · 311 comments · 4 min read · EA link

Climate Change & Longtermism: new book-length report
John G. Halstead · 26 Aug 2022 9:13 UTC · 313 points · 161 comments · 13 min read · EA link

[Question] (Where) Does animal x-risk fit?
StephenRo · 21 Dec 2023 11:04 UTC · 21 points · 8 comments · 1 min read · EA link

Which World Gets Saved
trammell · 9 Nov 2018 18:08 UTC · 142 points · 27 comments · 3 min read · EA link

Tom Moynihan on why prior generations missed some of the biggest priorities of all
80000_Hours · 29 Jul 2021 16:38 UTC · 20 points · 0 comments · 158 min read · EA link

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding
Kat Woods · 18 Mar 2021 14:07 UTC · 71 points · 32 comments · 5 min read · EA link

Long-Term Future Fund: April 2019 grant recommendations
Habryka · 23 Apr 2019 7:00 UTC · 142 points · 242 comments · 47 min read · EA link

Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?
Luisa_Rodriguez · 6 Dec 2019 10:38 UTC · 100 points · 5 comments · 30 min read · EA link

Questioning the Value of Extinction Risk Reduction
Red Team 8 · 7 Jul 2022 4:44 UTC · 61 points · 9 comments · 27 min read · EA link

9/26 is Petrov Day
Lizka · 25 Sep 2022 23:14 UTC · 72 points · 10 comments · 2 min read · EA link (www.lesswrong.com)

Long-Term Future Fund: August 2019 grant recommendations
Habryka · 3 Oct 2019 18:46 UTC · 79 points · 70 comments · 64 min read · EA link

Help me find the crux between EA/XR and Progress Studies
jasoncrawford · 2 Jun 2021 18:47 UTC · 119 points · 37 comments · 3 min read · EA link

How will a nuclear war end?
Kinoshita Yoshikazu (pseudonym) · 23 Jun 2023 10:50 UTC · 14 points · 4 comments · 2 min read · EA link

[Question] Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention?
Chris Leong · 1 Jul 2020 23:32 UTC · 26 points · 7 comments · 1 min read · EA link

‘Are We Doomed?’ Memos
Miranda_Zhang · 19 May 2021 13:51 UTC · 27 points · 0 comments · 16 min read · EA link

On Collapse Risk (C-Risk)
Pawntoe4 · 2 Jan 2020 5:10 UTC · 39 points · 10 comments · 8 min read · EA link

New US Senate Bill on X-Risk Mitigation [Linkpost]
Evan R. Murphy · 4 Jul 2022 1:28 UTC · 22 points · 12 comments · 1 min read · EA link (www.hsgac.senate.gov)

A pseudo mathematical formulation of direct work choice between two x-risks
Joseph Bloom · 11 Aug 2022 0:28 UTC · 7 points · 0 comments · 4 min read · EA link

Donation recommendations for xrisk + ai safety
vincentweisser · 6 Feb 2023 21:25 UTC · 17 points · 11 comments · 1 min read · EA link

Call for Cruxes by Rhyme, a Longtermist History Consultancy
Lara_TH · 1 Mar 2023 10:20 UTC · 147 points · 6 comments · 3 min read · EA link

Why policymakers should beware claims of new “arms races” (Bulletin of the Atomic Scientists)
christian.r · 14 Jul 2022 13:38 UTC · 55 points · 1 comment · 1 min read · EA link (thebulletin.org)

AI Safety Needs Great Engineers
Andy Jones · 23 Nov 2021 21:03 UTC · 98 points · 13 comments · 4 min read · EA link

Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public
Otto · 9 Mar 2023 10:40 UTC · 97 points · 11 comments · 4 min read · EA link

Critical Review of ‘The Precipice’: A Reassessment of the Risks of AI and Pandemics
Fods12 · 11 May 2020 11:11 UTC · 110 points · 32 comments · 26 min read · EA link

Great Power Conflict
Zach Stein-Perlman · 15 Sep 2021 15:00 UTC · 11 points · 7 comments · 4 min read · EA link

Introduction to Space and Existential Risk
JordanStone · 23 Sep 2023 19:56 UTC · 26 points · 0 comments · 7 min read · EA link

Nuclear war is unlikely to cause human extinction
Jeffrey Ladish · 7 Nov 2020 5:39 UTC · 61 points · 27 comments · 11 min read · EA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”
Quintin Pope · 21 Mar 2023 1:23 UTC · 167 points · 20 comments · 39 min read · EA link

We should expect to worry more about speculative risks
bgarfinkel · 29 May 2022 21:08 UTC · 120 points · 14 comments · 3 min read · EA link

Would US and Russian nuclear forces survive a first strike?
Luisa_Rodriguez · 18 Jun 2019 0:28 UTC · 85 points · 4 comments · 19 min read · EA link

Video and Transcript of Presentation on Existential Risk from Power-Seeking AI
Joe_Carlsmith · 8 May 2022 3:52 UTC · 97 points · 7 comments · 30 min read · EA link

BERI is seeking new trial collaborators
elizabethcooper · 14 Jul 2023 17:08 UTC · 16 points · 0 comments · 1 min read · EA link

Modelling Great Power conflict as an existential risk factor
Stephen Clare · 3 Feb 2022 11:41 UTC · 122 points · 26 comments · 19 min read · EA link

21 Recent Publications on Existential Risk (Sep 2019 update)
HaydnBelfield · 5 Nov 2019 14:26 UTC · 31 points · 4 comments · 13 min read · EA link

Assessing Climate Change’s Contribution to Global Catastrophic Risk
HaydnBelfield · 19 Feb 2021 16:26 UTC · 27 points · 8 comments · 38 min read · EA link

A selection of cross-cutting results from the XPT
Forecasting Research Institute · 26 Sep 2023 23:50 UTC · 17 points · 0 comments · 9 min read · EA link

Summary of “The Precipice” (3 of 4): Playing Russian roulette with the future
rileyharris · 21 Aug 2023 7:55 UTC · 4 points · 0 comments · 1 min read · EA link (www.millionyearview.com)

An aspirationally comprehensive typology of future locked-in scenarios
Milan Weibel · 3 Apr 2023 2:11 UTC · 12 points · 0 comments · 4 min read · EA link

Simplify EA Pitches to “Holy Shit, X-Risk”
Neel Nanda · 11 Feb 2022 1:57 UTC · 184 points · 78 comments · 10 min read · EA link (www.neelnanda.io)

Announcing “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament”
Forecasting Research Institute · 10 Jul 2023 17:04 UTC · 160 points · 30 comments · 2 min read · EA link

How many people would be killed as a direct result of a US-Russia nuclear exchange?
Luisa_Rodriguez · 30 Jun 2019 3:00 UTC · 97 points · 17 comments · 43 min read · EA link

[Linkpost] Beware the Squirrel by Verity Harding
Arden · 3 Sep 2023 21:04 UTC · 1 point · 1 comment · 2 min read · EA link (samf.substack.com)

2023 Stanford Existential Risks Conference
elizabethcooper · 24 Feb 2023 17:49 UTC · 29 points · 5 comments · 1 min read · EA link

The Epistemic Challenge to Longtermism (Tarsney, 2020)
MichaelA · 4 Apr 2021 3:09 UTC · 79 points · 27 comments · 1 min read · EA link (globalprioritiesinstitute.org)

Applications open! UChicago Existential Risk Laboratory’s 2023 Summer Research Fellowship
ZacharyRudolph · 1 Apr 2023 20:55 UTC · 39 points · 1 comment · 1 min read · EA link

Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research
Remmelt · 26 Nov 2020 16:39 UTC · 11 points · 0 comments · 4 min read · EA link

Forecasting Thread: Existential Risk
amandango · 22 Sep 2020 20:51 UTC · 24 points · 4 comments · 2 min read · EA link (www.lesswrong.com)

Nick Bostrom: An Introduction [early draft]
peterhartree · 31 Jul 2021 17:04 UTC · 38 points · 0 comments · 19 min read · EA link

Long-Term Future Fund AMA
Helen · 19 Dec 2018 4:10 UTC · 39 points · 30 comments · 1 min read · EA link

Which nuclear wars should worry us most?
Luisa_Rodriguez · 16 Jun 2019 23:31 UTC · 103 points · 13 comments · 5 min read · EA link

Bioinfohazards
Fin · 17 Sep 2019 2:41 UTC · 87 points · 8 comments · 18 min read · EA link

Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism
krohmal5 · 1 Feb 2023 6:40 UTC · 179 points · 4 comments · 1 min read · EA link (theelders.org)

[Question] What do you make of the doomsday argument?
niklas · 19 Mar 2021 6:30 UTC · 14 points · 8 comments · 1 min read · EA link

Disinformation as a GCR Threat Multiplier and Evidence Based Response
Ari96 · 24 Jan 2024 11:19 UTC · 2 points · 0 comments · 8 min read · EA link

[Link post] How plausible are AI Takeover scenarios?
SammyDMartin · 27 Sep 2021 13:03 UTC · 26 points · 0 comments · 1 min read · EA link

Defending against hypothetical moon life during Apollo 11
eukaryote · 7 Jan 2024 23:59 UTC · 67 points · 3 comments · 32 min read · EA link (eukaryotewritesblog.com)

Summary: Tiny probabilities and the value of the far future (Petra Kosonen)
Nicholas Kruus · 17 Feb 2024 14:11 UTC · 7 points · 1 comment · 4 min read · EA link

Announcing AXRP, the AI X-risk Research Podcast
DanielFilan · 23 Dec 2020 20:10 UTC · 32 points · 1 comment · 1 min read · EA link

Good news on climate change
John G. Halstead · 28 Oct 2021 14:04 UTC · 230 points · 34 comments · 12 min read · EA link

BERI’s 2024 Goals and Predictions
elizabethcooper · 12 Jan 2024 22:15 UTC · 9 points · 0 comments · 1 min read · EA link (existence.org)

Key points from The Dead Hand, David E. Hoffman
Kit · 9 Aug 2019 13:59 UTC · 71 points · 8 comments · 8 min read · EA link

Location Modelling for Post-Nuclear Refuge Bunkers
Bleddyn Mottershead · 14 Feb 2024 7:09 UTC · 10 points · 2 comments · 15 min read · EA link

Not all x-risk is the same: implications of non-human-descendants
Nikola · 18 Dec 2021 21:22 UTC · 36 points · 4 comments · 5 min read · EA link

AMA: Toby Ord, author of “The Precipice” and co-founder of the EA movement
Toby_Ord · 17 Mar 2020 2:39 UTC · 68 points · 82 comments · 1 min read · EA link

Tort Law Can Play an Important Role in Mitigating AI Risk
Gabriel Weil · 12 Feb 2024 17:11 UTC · 80 points · 4 comments · 5 min read · EA link

Population After a Catastrophe
Stan Pinsent · 2 Oct 2023 16:06 UTC · 33 points · 12 comments · 14 min read · EA link

How the Ukraine conflict may influence spending on longtermist projects
Frank_R · 16 Mar 2022 8:15 UTC · 23 points · 3 comments · 2 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering
sapphire · 20 Aug 2020 20:23 UTC · 51 points · 0 comments · 3 min read · EA link

Mitigating Ethical Concerns and Risks in the US Approach to Autonomous Weapons Systems through Effective Altruism
Vee · 11 Jun 2023 10:37 UTC · 5 points · 2 comments · 4 min read · EA link

A Double Feature on The Extropians
Maxwell Tabarrok · 3 Jun 2023 18:29 UTC · 47 points · 3 comments · 1 min read · EA link

[Question] What am I missing re. open-source LLM’s?
another-anon-do-gooder · 4 Dec 2023 4:48 UTC · 1 point · 2 comments · 1 min read · EA link

[Linkpost] OpenAI leaders call for regulation of “superintelligence” to reduce existential risk.
Lowe · 25 May 2023 14:14 UTC · 5 points · 0 comments · 1 min read · EA link

Long list of AI questions
NunoSempere · 6 Dec 2023 11:12 UTC · 124 points · 11 comments · 86 min read · EA link

‘The Precipice’ Book Review
Matt Goodman · 27 Jul 2020 22:10 UTC · 14 points · 1 comment · 4 min read · EA link

Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow
Dr Dan Epstein · 17 May 2023 8:56 UTC · 48 points · 7 comments · 5 min read · EA link

Nuclear Fine-Tuning: How Many Worlds Have Been Destroyed?
Ember · 17 Aug 2022 13:13 UTC · 16 points · 28 comments · 23 min read · EA link

Engaging UK Centre-Right Types in Existential Risk
Max_Thilo · 4 Dec 2023 9:26 UTC · 17 points · 0 comments · 1 min read · EA link

The Rethink Priorities Existential Security Team’s Strategy for 2023
Ben Snodin · 8 May 2023 8:08 UTC · 92 points · 3 comments · 16 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration
Callum Hinchcliffe · 14 Oct 2023 21:18 UTC · 23 points · 4 comments · 9 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg_Colbourn · 2 May 2023 10:17 UTC · 68 points · 35 comments · 13 min read · EA link

Collective intelligence as infrastructure for reducing broad existential risks
vickyCYang · 2 Aug 2021 6:00 UTC · 30 points · 6 comments · 11 min read · EA link

Cosmic’s Mugger: Should we really delay cosmic expansion?
Lysandre Terrisse · 30 Jun 2022 6:41 UTC · 10 points · 1 comment · 4 min read · EA link

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?
Luisa_Rodriguez · 24 Dec 2020 22:10 UTC · 287 points · 37 comments · 50 min read · EA link

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg
HaydnBelfield · 19 May 2022 8:42 UTC · 462 points · 44 comments · 18 min read · EA link

Obstacles to the U.S. for Supporting Verifications in the BWC, and Potential Solutions.
Garrett Ehinger · 14 Apr 2023 2:48 UTC · 27 points · 2 comments · 16 min read · EA link

[Question] What would you ask a policymaker about existential risks?
James Nicholas Bryant · 6 Jul 2021 23:53 UTC · 24 points · 2 comments · 1 min read · EA link

EA is too focused on the Manhattan Project
trevor1 · 5 Sep 2022 2:00 UTC · 17 points · 0 comments · 1 min read · EA link

Guarding Against Pandemics
Guarding Against Pandemics · 18 Sep 2021 11:15 UTC · 72 points · 15 comments · 4 min read · EA link

Existential risk x Crypto: An unconference at Zuzalu
Yesh · 11 Apr 2023 13:31 UTC · 6 points · 0 comments · 1 min read · EA link

X-Risk Researchers Survey
NitaSangha · 24 Apr 2023 8:06 UTC · 12 points · 1 comment · 1 min read · EA link

Sentience Institute 2021 End of Year Summary
Ali · 26 Nov 2021 14:40 UTC · 66 points · 5 comments · 6 min read · EA link (www.sentienceinstitute.org)

A typology of s-risks
Tobias_Baumann · 21 Dec 2018 18:23 UTC · 26 points · 1 comment · 1 min read · EA link (s-risks.org)

Engineered plant pandemics and societal collapse risk
freedomandutility · 4 Aug 2023 17:06 UTC · 13 points · 2 comments · 1 min read · EA link

Important, actionable research questions for the most important century
Holden Karnofsky · 24 Feb 2022 16:34 UTC · 291 points · 13 comments · 19 min read · EA link

ProMED, platform which alerted the world to Covid, might collapse—can EA donors fund it?
freedomandutility · 4 Aug 2023 16:42 UTC · 41 points · 4 comments · 1 min read · EA link

Stanford Existential Risk Conference Feb. 26/27
kuhanj · 11 Feb 2022 0:56 UTC · 28 points · 0 comments · 1 min read · EA link

Nuclear brinksmanship is not a good AI x-risk strategy
titotal · 30 Mar 2023 22:07 UTC · 11 points · 8 comments · 5 min read · EA link

2020 AI Alignment Literature Review and Charity Comparison
Larks · 21 Dec 2020 15:25 UTC · 155 points · 16 comments · 70 min read · EA link

Increase in future potential due to mitigating food shocks caused by abrupt sunlight reduction scenarios
Vasco Grilo · 28 Mar 2023 7:43 UTC · 12 points · 2 comments · 8 min read · EA link

Manifund x AI Worldviews
Austin · 31 Mar 2023 15:32 UTC · 32 points · 2 comments · 2 min read · EA link (manifund.org)

Successif: Join our AI program to help mitigate the catastrophic risks of AI
ClaireB · 25 Oct 2023 16:51 UTC · 15 points · 0 comments · 5 min read · EA link

Riesgos Catastróficos Globales needs funding
Jaime Sevilla · 1 Aug 2023 16:26 UTC · 98 points · 1 comment · 3 min read · EA link

New Cause Area: Programmatic Mettā
Milan_Griffes · 1 Apr 2021 12:54 UTC · 4 points · 1 comment · 2 min read · EA link

Longtermism Fund: August 2023 Grants Report
Michael Townsend · 20 Aug 2023 5:34 UTC · 81 points · 3 comments · 5 min read · EA link

AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)
Lizka · 26 Sep 2023 9:40 UTC · 132 points · 49 comments · 1 min read · EA link

Launching the EAF Fund
stefan.torges · 28 Nov 2018 17:13 UTC · 60 points · 14 comments · 4 min read · EA link

[Linkpost] Prospect Magazine—How to save humanity from extinction
jackva · 26 Sep 2023 19:16 UTC · 32 points · 2 comments · 1 min read · EA link (www.prospectmagazine.co.uk)

Corporate Global Catastrophic Risks (C-GCRs)
Hauke Hillebrandt · 30 Jun 2019 16:53 UTC · 63 points · 17 comments · 12 min read · EA link

[Question] How would you define “existential risk?”
Linch · 29 Nov 2021 5:17 UTC · 12 points · 4 comments · 1 min read · EA link

Russian x-risks newsletter, summer 2019
avturchin · 7 Sep 2019 9:55 UTC · 23 points · 1 comment · 4 min read · EA link

3 suggestions about jargon in EA
MichaelA · 5 Jul 2020 3:37 UTC · 131 points · 18 comments · 5 min read · EA link

[Question] Is transformative AI the biggest existential risk? Why or why not?
BrownHairedEevee · 5 Mar 2022 3:54 UTC · 9 points · 11 comments · 1 min read · EA link

Being at peace with Doom
Johannes C. Mayer · 9 Apr 2023 15:01 UTC · 15 points · 7 comments · 4 min read · EA link (www.lesswrong.com)

[Future Perfect] How to be a good ancestor
Pablo · 2 Jul 2021 13:17 UTC · 41 points · 3 comments · 2 min read · EA link (www.vox.com)

The Precipice: a risky review by a non-EA
fmoreno · 8 Aug 2020 14:40 UTC · 14 points · 1 comment · 18 min read · EA link

Humanity’s vast future and its implications for cause prioritization
BrownHairedEevee · 26 Jul 2022 5:04 UTC · 36 points · 3 comments · 4 min read · EA link (sunyshore.substack.com)

Some EA Forum Posts I’d like to write
Linch · 23 Feb 2021 5:27 UTC · 100 points · 10 comments · 5 min read · EA link

[Question] Is there evidence that recommender systems are changing users’ preferences?
zdgroff · 12 Apr 2021 19:11 UTC · 60 points · 15 comments · 1 min read · EA link

Notes on “The Politics of Crisis Management” (Boin et al., 2016)
DM · 30 Jan 2022 22:51 UTC · 29 points · 1 comment · 18 min read · EA link

Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination
alexlintz · 3 Aug 2022 21:24 UTC · 90 points · 4 comments · 11 min read · EA link

[Question] Why isn’t there a charity evaluator for longtermist projects?
BrownHairedEevee · 29 Jul 2023 16:30 UTC · 106 points · 43 comments · 1 min read · EA link

Statement on Pluralism in Existential Risk Studies
GideonF · 16 Aug 2023 14:29 UTC · 20 points · 46 comments · 7 min read · EA link

My attempt at explaining the case for AI risk in a straightforward way
JulianHazell · 25 Mar 2023 16:32 UTC · 25 points · 7 comments · 18 min read · EA link (muddyclothes.substack.com)

Thoughts on yesterday’s UN Security Council meeting on AI
Greg_Colbourn · 19 Jul 2023 16:46 UTC · 31 points · 2 comments · 1 min read · EA link

Shaping Humanity’s Longterm Trajectory
Toby_Ord · 18 Jul 2023 10:09 UTC · 178 points · 54 comments · 2 min read · EA link (files.tobyord.com)

Manifund: what we’re funding (week 1)
Austin · 15 Jul 2023 0:28 UTC · 43 points · 11 comments · 3 min read · EA link (manifund.substack.com)

“Aligned with who?” Results of surveying 1,000 US participants on AI values
Holly Morgan · 21 Mar 2023 22:07 UTC · 40 points · 0 comments · 2 min read · EA link (www.lesswrong.com)

[Question] What is the impact of the Nuclear Ban Treaty?
DC · 29 Nov 2020 0:26 UTC · 22 points · 3 comments · 2 min read · EA link

Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023)
Rethink Priorities · 18 Jul 2023 15:59 UTC · 107 points · 3 comments · 16 min read · EA link

[Question] Strongest real-world examples supporting AI risk claims?
rosehadshar · 5 Sep 2023 15:11 UTC · 52 points · 9 comments · 1 min read · EA link

[Question] What are the best resources on comparing x-risk prevention to improving the value of the future in other ways?
LHA · 26 Jun 2022 3:22 UTC · 8 points · 3 comments · 1 min read · EA link

Great power conflict—problem profile (summary and highlights)
Stephen Clare · 7 Jul 2023 14:40 UTC · 110 points · 6 comments · 5 min read · EA link (80000hours.org)

Announcing Manifund Regrants
Austin · 5 Jul 2023 19:42 UTC · 217 points · 51 comments · 4 min read · EA link (manifund.org)

A response to Michael Plant’s review of What We Owe The Future
Jack Malde · 4 Oct 2023 23:40 UTC · 61 points · 14 comments · 10 min read · EA link

The most important climate change uncertainty
cwa · 26 Jul 2022 15:15 UTC · 144 points · 28 comments · 11 min read · EA link

The GiveWiki’s Top Picks in AI Safety for the Giving Season of 2023
Dawn Drescher · 7 Dec 2023 9:23 UTC · 25 points · 0 comments · 3 min read · EA link (impactmarkets.substack.com)

[Question] Will the vast majority of technological progress happen in the longterm future?
Vasco Grilo · 8 Jul 2023 8:40 UTC · 8 points · 0 comments · 2 min read · EA link

Announcing the Existential InfoSec Forum
calebp · 7 Jul 2023 21:08 UTC · 89 points · 1 comment · 2 min read · EA link

“Effective Altruism, Longtermism, and the Problem of Arbitrary Power” by Gwilym David Blunt
WobblyPandaPanda · 12 Nov 2023 1:21 UTC · 22 points · 2 comments · 1 min read · EA link (www.thephilosopher1923.org)

Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books)
tlevin · 4 Sep 2023 19:02 UTC · 20 points · 2 comments · 6 min read · EA link

Juan B. García Martínez on tackling many causes at once and his journey into EA
Amber Dawn · 30 Jun 2023 13:48 UTC · 92 points · 3 comments · 8 min read · EA link (contemplatonist.substack.com)

Starting the second Green Revolution
freedomandutility · 29 Jun 2023 12:23 UTC · 30 points · 3 comments · 1 min read · EA link

Apply to Spring 2024 policy internships (we can help)
Elika · 4 Oct 2023 14:45 UTC · 26 points · 2 comments · 1 min read · EA link

Existential risk and the future of humanity (Toby Ord)
EA Global · 21 Mar 2020 18:05 UTC · 10 points · 1 comment · 14 min read · EA link (www.youtube.com)

We are fighting a shared battle (a call for a different approach to AI Strategy)
GideonF · 16 Mar 2023 14:37 UTC · 56 points · 11 comments · 15 min read · EA link

More than Earth Warriors: The Diverse Roles of Geoscientists in Effective Altruism
Christopher Chan · 31 Aug 2023 6:30 UTC · 56 points · 5 comments · 16 min read · EA link

“Safety Culture for AI” is important, but isn’t going to be easy
Davidmanheim · 26 Jun 2023 11:27 UTC · 50 points · 0 comments · 2 min read · EA link (papers.ssrn.com)

New book on s-risks
Tobias_Baumann · 26 Oct 2022 12:04 UTC · 295 points · 27 comments · 1 min read · EA link

The catastrophic primacy of reactivity over proactivity in governmental risk assessment: brief UK case study
JuanGarcia · 27 Sep 2021 15:53 UTC · 56 points · 0 comments · 5 min read · EA link

New infographic based on “The Precipice”. Any feedback?
michael.andregg · 14 Jan 2021 7:29 UTC · 50 points · 4 comments · 1 min read · EA link

Risks from solar flares?
freedomandutility · 7 Mar 2023 11:12 UTC · 20 points · 6 comments · 1 min read · EA link

Still no strong evidence that LLMs increase bioterrorism risk
freedomandutility · 2 Nov 2023 21:23 UTC · 57 points · 9 comments · 1 min read · EA link

Future benefits of mitigating food shocks caused by abrupt sunlight reduction scenarios
Vasco Grilo · 4 Mar 2023 16:22 UTC · 20 points · 0 comments · 28 min read · EA link

International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide
philosophytorres · 22 Mar 2021 12:19 UTC · −3 points · 1 comment · 1 min read · EA link

Introducing the new Riesgos Catastróficos Globales team
Jaime Sevilla · 3 Mar 2023 23:04 UTC · 77 points · 3 comments · 5 min read · EA link (riesgoscatastroficosglobales.com)

Some more projects I’d like to see
finm · 25 Feb 2023 22:22 UTC · 67 points · 12 comments · 24 min read · EA link (finmoorhouse.com)

What does Putin’s suspension of a nuclear treaty today mean for x-risk from nuclear weapons?
freedomandutility · 21 Feb 2023 16:46 UTC · 37 points · 2 comments · 1 min read · EA link

Community Building for Graduate Students: A Targeted Approach
Neil Crawford · 29 Mar 2022 19:47 UTC · 13 points · 0 comments · 3 min read · EA link

“Is this risk actually existential?” may be less important than we think
mikbp · 3 Mar 2023 22:18 UTC · 8 points · 8 comments · 2 min read · EA link

What is it like doing AI safety work?
Kat Woods · 21 Feb 2023 19:24 UTC · 99 points · 2 comments · 10 min read · EA link

A Survey of the Potential Long-term Impacts of AI
Sam Clarke · 18 Jul 2022 9:48 UTC · 63 points · 2 comments · 27 min read · EA link

Maybe longtermism isn’t for everyone
BrownHairedEevee · 10 Feb 2023 16:48 UTC · 39 points · 17 comments · 1 min read · EA link

Centre for the Study of Existential Risk: Six Month Report May-October 2018
HaydnBelfield · 30 Nov 2018 20:32 UTC · 26 points · 2 comments · 17 min read · EA link

How can we reduce s-risks?
Tobias_Baumann · 29 Jan 2021 15:46 UTC · 42 points · 3 comments · 1 min read · EA link (centerforreducingsuffering.org)

Unjournal’s 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours
david_reinstein · 6 Feb 2023 19:18 UTC · 77 points · 10 comments · 3 min read · EA link (sciety.org)

Proposal: Create A New Longtermism Organization
Brian Lui · 7 Feb 2023 5:59 UTC · 25 points · 37 comments · 6 min read · EA link

How Rethink Priorities’ Research could inform your grantmaking
kierangreig · 4 Oct 2023 18:24 UTC · 59 points · 0 comments · 2 min read · EA link

My highly personal skepticism braindump on existential risk from artificial intelligence.
NunoSempere · 23 Jan 2023 20:08 UTC · 431 points · 116 comments · 14 min read · EA link (nunosempere.com)

Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the “long-run” perspective on effective altruism
Nick_Beckstead · 18 Aug 2014 4:30 UTC · 11 points · 7 comments · 6 min read · EA link

Non-utilitarian effective altruism
keir bradwell · 29 Jan 2023 6:07 UTC · 41 points · 10 comments · 17 min read · EA link (keirbradwell.substack.com)

80,000 Hours career review: Information security in high-impact areas
80000_Hours · 16 Jan 2023 12:45 UTC · 56 points · 10 comments · 11 min read · EA link (80000hours.org)

Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year
kierangreig · 29 Nov 2023 13:44 UTC · 58 points · 0 comments · 5 min read · EA link

Replace Neglectedness
Indra Gesink · 16 Jan 2023 17:42 UTC · 51 points · 4 comments · 4 min read · EA link

We should say more than “x-risk is high”
OllieBase · 16 Dec 2022 22:09 UTC · 52 points · 12 comments · 4 min read · EA link

Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019
HaydnBelfield · 1 May 2019 15:34 UTC · 10 points · 16 comments · 15 min read · EA link

One Hundred Opinions on Nuclear War (Ladish, 2019)
Will Aldred · 29 Dec 2022 20:23 UTC · 12 points · 0 comments · 3 min read · EA link (jeffreyladish.com)

Existential Risk: More to explore
EA Handbook · 1 Jan 2021 10:15 UTC · 2 points · 0 comments · 1 min read · EA link

Competition for “Fortified Essays” on nuclear risk
MichaelA · 17 Nov 2021 20:55 UTC · 35 points · 0 comments · 3 min read · EA link (www.metaculus.com)

Reading the ethicists 2: Hunting for AI alignment papers
Charlie Steiner · 6 Jun 2022 15:53 UTC · 9 points · 0 comments · 1 min read · EA link (www.lesswrong.com)

Solving alignment isn’t enough for a flourishing future
mic · 2 Feb 2024 18:22 UTC · 26 points · 0 comments · 22 min read · EA link (papers.ssrn.com)

Google Maps nuke-mode
AndreFerretti · 31 Jan 2023 21:37 UTC · 11 points · 6 comments · 1 min read · EA link

Introducing the Simon Institute for Longterm Governance (SI)
maxime · 29 Mar 2021 18:10 UTC · 116 points · 23 comments · 11 min read · EA link

How x-risk projects are different from startups
Jan_Kulveit · 5 Apr 2019 7:35 UTC · 67 points · 9 comments · 1 min read · EA link

Warning Shots Probably Wouldn’t Change The Picture Much
So8res · 6 Oct 2022 5:15 UTC · 90 points · 20 comments · 2 min read · EA link

Geoengineering to reduce global catastrophic risk?
Niklas Lehmann · 29 May 2022 15:50 UTC · 7 points · 3 comments · 5 min read · EA link

Kurzgesagt—The Last Human (Longtermist video)
Lizka · 28 Jun 2022 20:16 UTC · 150 points · 17 comments · 1 min read · EA link (www.youtube.com)

A Simple Model of AGI Deployment Risk
djbinder · 9 Jul 2021 9:44 UTC · 29 points · 0 comments · 5 min read · EA link

Database of orgs relevant to longtermist/x-risk work
MichaelA · 19 Nov 2021 8:50 UTC · 103 points · 65 comments · 4 min read · EA link

Introducing the Existential Risk Observatory
Otto · 12 Aug 2021 15:51 UTC · 39 points · 0 comments · 5 min read · EA link

Centre for the Study of Existential Risk Four Month Report October 2019 - January 2020
HaydnBelfield · 8 Apr 2020 13:28 UTC · 8 points · 0 comments · 17 min read · EA link

What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation
Linch · 14 Dec 2022 13:37 UTC · 162 points · 9 comments · 19 min read · EA link

Linkpost for various recent essays on suffering-focused ethics, priorities, and more
Magnus Vinding · 28 Sep 2022 8:58 UTC · 87 points · 0 comments · 5 min read · EA link (centerforreducingsuffering.org)

Russian x-risks newsletter, fall 2019
avturchin · 3 Dec 2019 17:01 UTC · 27 points · 2 comments · 3 min read · EA link

Seth Baum: Reconciling international security
EA Global · 8 Jun 2018 7:15 UTC · 9 points · 0 comments · 16 min read · EA link (www.youtube.com)

Saving lives near the precipice
MikhailSamin · 29 Jul 2022 15:08 UTC · 18 points · 10 comments · 3 min read · EA link

Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs
MHR · 7 Apr 2023 0:36 UTC · 163 points · 6 comments · 4 min read · EA link

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)
MichaelA · 1 Jun 2021 8:19 UTC · 51 points · 3 comments · 4 min read · EA link (www.nscai.gov)

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?
MichaelStJules · 3 Oct 2021 16:55 UTC · 48 points · 19 comments · 1 min read · EA link

A case for strategy research: what it is and why we need more of it
SiebeRozendal · 20 Jun 2019 20:18 UTC · 69 points · 8 comments · 20 min read · EA link

A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]
Sarah Weiler · 26 Jun 2022 10:59 UTC · 57 points · 2 comments · 16 min read · EA link

[Question] Is it possible to have a high level of human heterogeneity and low chance of existential risks?
ekka · 24 May 2022 21:55 UTC · 4 points · 0 comments · 1 min read · EA link

Global Development → reduced ex-risk/long-termism. (Initial draft/question)
Arno · 13 Aug 2022 16:29 UTC · 3 points · 3 comments · 1 min read · EA link

Marc Lipsitch: Preventing catastrophic risks by mitigating subcatastrophic ones
EA Global · 2 Jun 2017 8:48 UTC · 9 points · 0 comments · 1 min read · EA link (www.youtube.com)

Mediocre AI safety as existential risk
Gavin · 16 Mar 2022 11:50 UTC · 52 points · 12 comments · 3 min read · EA link

Common-sense cases where “hypothetical future people” matter
tlevin · 12 Aug 2022 14:05 UTC · 107 points · 21 comments · 4 min read · EA link

Free to attend: Cambridge Conference on Catastrophic Risk (19-21 April)
HaydnBelfield · 21 Mar 2022 13:23 UTC · 19 points · 2 comments · 1 min read · EA link

Global Priorities Institute: Research Agenda
Aaron Gertler · 20 Jan 2021 20:09 UTC · 22 points · 0 comments · 1 min read · EA link (globalprioritiesinstitute.org)

“Tech company singularities”, and steering them to reduce x-risk
Andrew Critch · 13 May 2022 17:26 UTC · 51 points · 5 comments · 4 min read · EA link

Should marginal longtermist donations support fundamental or intervention research?
MichaelA · 30 Nov 2020 1:10 UTC · 43 points · 4 comments · 15 min read · EA link

Surviving Global Catastrophe in Nuclear Submarines as Refuges
turchin · 5 Apr 2017 8:06 UTC · 14 points · 4 comments · 1 min read · EA link

Launch of FERSTS Retreat
Theo K · 17 Jun 2022 11:53 UTC · 26 points · 0 comments · 2 min read · EA link

Cause Prioritization in Light of Inspirational Disasters
stecas · 7 Jun 2020 19:52 UTC · 2 points · 15 comments · 3 min read · EA link

.01% Fund—Ideation and Proposal
Linch · 1 Mar 2022 18:25 UTC · 69 points · 23 comments · 5 min read · EA link

Intellectual Diversity in AI Safety
KR · 22 Jul 2020 19:07 UTC · 21 points · 8 comments · 3 min read · EA link

CSER Special Issue: ‘Futures of Research in Catastrophic and Existential Risk’
HaydnBelfield · 2 Oct 2018 17:18 UTC · 9 points · 1 comment · 1 min read · EA link

Hauke Hillebrandt: International agreements to spend percentage of GDP on global public goods
EA Global · 21 Nov 2020 8:12 UTC · 9 points · 0 comments · 1 min read · EA link (www.youtube.com)

Improving the future by influencing actors’ benevolence, intelligence, and power
MichaelA · 20 Jul 2020 10:00 UTC · 75 points · 15 comments · 17 min read · EA link

Participate in the Hybrid Forecasting-Persuasion Tournament (on X-risk topics)
Jhrosenberg · 25 Apr 2022 22:13 UTC · 53 points · 4 comments · 2 min read · EA link

Comparative Bias
Joey · 5 Nov 2014 5:57 UTC · 7 points · 5 comments · 1 min read · EA link

How likely is a nuclear exchange between the US and Russia?
Luisa_Rodriguez · 20 Jun 2019 1:49 UTC · 80 points · 13 comments · 13 min read · EA link

EAGxVirtual 2020 lightning talks
EA Global · 25 Jan 2021 15:32 UTC · 13 points · 1 comment · 33 min read · EA link (www.youtube.com)

FLI AI Alignment podcast: Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI
evhub · 1 Jul 2020 20:59 UTC · 13 points · 2 comments · 1 min read · EA link (futureoflife.org)

Longtermist (especially x-risk) terminology has biasing assumptions
Arepo · 30 Oct 2022 16:26 UTC · 64 points · 13 comments · 7 min read · EA link

Nature: Nuclear war between two nations could spark global famine
Tyner · 15 Aug 2022 20:55 UTC · 15 points · 1 comment · 1 min read · EA link (www.nature.com)

[Linkpost] Don’t Look Up—a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th)
Linch · 18 Nov 2021 21:40 UTC · 63 points · 16 comments · 1 min read · EA link (www.youtube.com)

AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.
Greg_Colbourn · 1 Mar 2022 12:02 UTC · 67 points · 22 comments · 1 min read · EA link

Age-Weighted Voting
William_MacAskill · 12 Jul 2019 15:21 UTC · 71 points · 39 comments · 6 min read · EA link

Jenny Xiao: Dual moral obligations and international cooperation against global catastrophic risks
EA Global · 21 Nov 2020 8:12 UTC · 9 points · 0 comments · 1 min read · EA link (www.youtube.com)

Future Matters #4: AI timelines, AGI risk, and existential risk from climate change
Pablo · 8 Aug 2022 11:00 UTC · 59 points · 0 comments · 17 min read · EA link

Thoughts on “A case against strong longtermism” (Masrani)
MichaelA · 3 May 2021 14:22 UTC · 39 points · 33 comments · 2 min read · EA link

Countermeasures & substitution effects in biosecurity
ASB · 16 Dec 2021 21:40 UTC · 81 points · 6 comments · 3 min read · EA link

[Question] Are there superforecasts for existential risk?
Alex HT · 7 Jul 2020 7:39 UTC · 24 points · 13 comments · 1 min read · EA link

Russian x-risks newsletter winter 2019-2020
avturchin · 1 Mar 2020 12:51 UTC · 10 points · 4 comments · 2 min read · EA link

AI Could Defeat All Of Us Combined
Holden Karnofsky · 10 Jun 2022 23:25 UTC · 143 points · 14 comments · 17 min read · EA link

Why making asteroid deflection tech might be bad
MichaelDello · 20 May 2020 23:01 UTC · 27 points · 10 comments · 6 min read · EA link

EA Research Around Mineral Resource Exhaustion
haywyer · 3 Jun 2022 0:59 UTC · 2 points · 0 comments · 1 min read · EA link

On future people, looking back at 21st century longtermism
Joe_Carlsmith · 22 Mar 2021 8:21 UTC · 102 points · 13 comments · 12 min read · EA link

Announcing the Nuclear Risk Forecasting Tournament
MichaelA · 16 Jun 2021 16:12 UTC · 38 points · 0 comments · 2 min read · EA link

“Holy Shit, X-risk” talk
michel · 15 Aug 2022 5:04 UTC · 13 points · 2 comments · 9 min read · EA link

Assessing global catastrophic biological risks (Crystal Watson)
EA Global · 8 Jun 2018 7:15 UTC · 9 points · 0 comments · 10 min read · EA link (www.youtube.com)

Why I am probably not a longtermist
Denise_Melchin · 23 Sep 2021 17:24 UTC · 214 points · 47 comments · 8 min read · EA link

[Question] What are the best articles/blogs on the psychology of existential risk?
Geoffrey Miller · 16 Dec 2020 18:05 UTC · 24 points · 7 comments · 1 min read · EA link

Does climate change deserve more attention within EA?
Ben · 17 Apr 2019 6:50 UTC · 145 points · 66 comments · 15 min read · EA link

Some AI research areas and their relevance to existential safety
Andrew Critch · 15 Dec 2020 12:15 UTC · 12 points · 1 comment · 56 min read · EA link (alignmentforum.org)

Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk
SethBaum · 16 Nov 2021 20:51 UTC · 60 points · 5 comments · 15 min read · EA link

GCRI Open Call for Advisees and Collaborators
McKenna_Fitzgerald · 20 May 2021 22:07 UTC · 13 points · 0 comments · 4 min read · EA link

AMA: Tobias Baumann, Center for Reducing Suffering
Tobias_Baumann · 6 Sep 2020 10:45 UTC · 48 points · 45 comments · 1 min read · EA link

Why s-risks are the worst existential risks, and how to prevent them
Max_Daniel · 2 Jun 2017 8:48 UTC · 8 points · 1 comment · 22 min read · EA link (www.youtube.com)

Bonnie Jenkins: Fireside chat
EA Global · 22 Jul 2020 15:59 UTC · 18 points · 0 comments · 24 min read · EA link (www.youtube.com)

New Podcast: X-Risk Upskill
Anthony Fleming · 27 Aug 2022 21:19 UTC · 12 points · 4 comments · 1 min read · EA link

Sir Gavin and the green sky
Gavin · 17 Dec 2022 23:28 UTC · 50 points · 0 comments · 1 min read · EA link

A (Very) Short History of the Collapse of Civilizations, and Why it Matters
Davidmanheim · 30 Aug 2020 7:49 UTC · 53 points · 16 comments · 3 min read · EA link

Should We Prioritize Long-Term Existential Risk?
MichaelDickens · 20 Aug 2020 2:23 UTC · 28 points · 17 comments · 3 min read · EA link

Matt Levine on the Archegos failure
Kelsey Piper · 29 Jul 2021 19:36 UTC · 136 points · 5 comments · 4 min read · EA link

Announcing ERA: a spin-off from CERI
Nandini Shiralkar · 13 Dec 2022 20:58 UTC · 55 points · 7 comments · 3 min read · EA link

Interview Thomas Moynihan: “The discovery of extinction is a philosophical centrepiece of the modern age”
felix.h · 6 Mar 2021 11:51 UTC · 14 points · 0 comments · 18 min read · EA link

ALLFED 2019 Annual Report and Fundraising Appeal
AronM · 23 Nov 2019 2:05 UTC · 42 points · 12 comments · 22 min read · EA link

Case studies of self-governance to reduce technology risk
jia · 6 Apr 2021 8:49 UTC · 55 points · 6 comments · 7 min read · EA link

Shelly Kagan—readings for Ethics and the Future seminar (spring 2021)
james · 29 Jun 2021 9:59 UTC · 91 points · 7 comments · 5 min read · EA link (docs.google.com)

[Question] How to find *reliable* ways to improve the future?
Sjlver · 18 Aug 2022 12:47 UTC · 53 points · 35 comments · 2 min read · EA link

Update on civilizational collapse research
Jeffrey Ladish · 10 Feb 2020 23:40 UTC · 56 points · 7 comments · 3 min read · EA link

Case study: Reducing catastrophic risk from inside the US bureaucracy
Tom_Green · 2 Jun 2022 4:07 UTC · 41 points · 2 comments · 11 min read · EA link

[Notes] Steven Pinker and Yu­val Noah Harari in conversation

Ben9 Feb 2020 12:49 UTC
29 points
2 comments7 min readEA link

Teruji Thomas, ‘The Asym­me­try, Uncer­tainty, and the Long Term’

Pablo5 Nov 2019 20:24 UTC
43 points
6 comments1 min readEA link
(globalprioritiesinstitute.org)

Prevent­ing hu­man extinction

Peter Singer19 Aug 2013 21:07 UTC
25 points
6 comments5 min readEA link

Luisa Ro­driguez: The like­li­hood and sever­ity of a US-Rus­sia nu­clear exchange

EA Global18 Oct 2019 18:05 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

Ex­tinc­tion risk re­duc­tion and moral cir­cle ex­pan­sion: Spec­u­lat­ing sus­pi­cious convergence

MichaelA4 Aug 2020 11:38 UTC
12 points
4 comments6 min readEA link

EA needs more humor

SWK1 Dec 2022 5:30 UTC
35 points
14 comments5 min readEA link

In­tent al­ign­ment should not be the goal for AGI x-risk reduction

johnjnay26 Oct 2022 1:24 UTC
7 points
1 comment1 min readEA link

US Ci­ti­zens: Tar­geted poli­ti­cal con­tri­bu­tions are prob­a­bly the best pas­sive dona­tion op­por­tu­ni­ties for miti­gat­ing ex­is­ten­tial risk

Jeffrey Ladish5 May 2022 23:04 UTC
51 points
20 comments5 min readEA link

“Nu­clear risk re­search, fore­cast­ing, & im­pact” [pre­sen­ta­tion]

MichaelA21 Oct 2021 10:54 UTC
20 points
0 comments1 min readEA link
(www.youtube.com)

“Don’t Look Up” and the cin­ema of ex­is­ten­tial risk | Slow Boring

BrownHairedEevee5 Jan 2022 4:28 UTC
24 points
0 comments1 min readEA link
(www.slowboring.com)

APPG on Fu­ture Gen­er­a­tions im­pact re­port – Rais­ing the pro­file of fu­ture gen­er­a­tion in the UK Parliament

weeatquince12 Aug 2020 14:24 UTC
87 points
2 comments17 min readEA link

Tyler Cowen on effec­tive al­tru­ism (De­cem­ber 2022)

peterhartree13 Jan 2023 9:39 UTC
76 points
11 comments20 min readEA link
(youtu.be)

Risks from Asteroids

finm11 Feb 2022 21:01 UTC
44 points
9 comments5 min readEA link
(www.finmoorhouse.com)

FLI FAQ on the re­jected grant pro­posal controversy

Tegmark19 Jan 2023 17:31 UTC
331 points
132 comments1 min readEA link

Towards a longter­mist frame­work for eval­u­at­ing democ­racy-re­lated interventions

Tom Barnes28 Jul 2021 13:23 UTC
96 points
5 comments30 min readEA link

Con­cern­ing the Re­cent 2019-Novel Coron­avirus Outbreak

Matthew_Barnett27 Jan 2020 5:47 UTC
144 points
142 comments3 min readEA link

A list of good heuris­tics that the case for AI X-risk fails

Aaron Gertler16 Jul 2020 9:56 UTC
23 points
9 comments2 min readEA link
(www.alignmentforum.org)

The Pug­wash Con­fer­ences and the Anti-Bal­lis­tic Mis­sile Treaty as a case study of Track II diplomacy

rani_martin16 Sep 2022 10:42 UTC
82 points
5 comments27 min readEA link

Differ­en­tial tech­nol­ogy de­vel­op­ment: preprint on the concept

Hamish_Hobbs12 Sep 2022 13:52 UTC
65 points
0 comments2 min readEA link

War Between the US and China: A case study for epistemic challenges around China-re­lated catas­trophic risk

Jordan_Schneider12 Aug 2022 2:19 UTC
76 points
17 comments43 min readEA link

Talk­ing With a Biose­cu­rity Pro­fes­sional (Quick Notes)

DirectedEvolution10 Apr 2021 4:23 UTC
45 points
0 comments2 min readEA link

Mo­ral plu­ral­ism and longter­mism | Sunyshore

BrownHairedEevee17 Apr 2021 0:14 UTC
26 points
0 comments6 min readEA link
(sunyshore.substack.com)

Tech­nolog­i­cal de­vel­op­ments that could in­crease risks from nu­clear weapons: A shal­low review

MichaelA9 Feb 2023 15:41 UTC
79 points
3 comments5 min readEA link
(bit.ly)

Philanthropy and Nuclear Risk Reduction

ELN10 Feb 2023 10:48 UTC
22 points
5 comments4 min readEA link

Risks from Atom­i­cally Pre­cise Manufacturing

MichaelA25 Aug 2020 9:53 UTC
29 points
4 comments2 min readEA link
(www.openphilanthropy.org)

Effec­tive al­tru­ists are already in­sti­tu­tion­al­ists and are do­ing far more than un­work­able longter­mism—A re­sponse to “On the Differ­ences be­tween Eco­mod­ernism and Effec­tive Altru­ism”

jackva21 Feb 2023 18:08 UTC
78 points
3 comments12 min readEA link

In­ter­ven­tion Pro­file: Bal­lot Initiatives

Jason Schukraft13 Jan 2020 15:41 UTC
117 points
5 comments36 min readEA link

A pro­posed ad­just­ment to the as­tro­nom­i­cal waste argument

Nick_Beckstead27 May 2013 4:00 UTC
43 points
0 comments12 min readEA link

EA read­ing list: longter­mism and ex­is­ten­tial risks

richard_ngo3 Aug 2020 9:52 UTC
35 points
3 comments1 min readEA link

The per­son-af­fect­ing value of ex­is­ten­tial risk reduction

Gregory Lewis13 Apr 2018 1:44 UTC
64 points
33 comments4 min readEA link

Notes on “Bioter­ror and Biowar­fare” (2006)

MichaelA1 Mar 2021 9:42 UTC
29 points
6 comments4 min readEA link

Pos­si­ble mis­con­cep­tions about (strong) longtermism

Jack Malde9 Mar 2021 17:58 UTC
90 points
44 comments19 min readEA link

Jaan Tal­linn: Fireside chat (2018)

EA Global8 Jun 2018 7:15 UTC
9 points
0 comments13 min readEA link
(www.youtube.com)

In­ter­na­tional Co­op­er­a­tion Against Ex­is­ten­tial Risks: In­sights from In­ter­na­tional Re­la­tions Theory

Jenny_Xiao11 Jan 2021 7:10 UTC
40 points
7 comments6 min readEA link

[Cross­post] Why Un­con­trol­lable AI Looks More Likely Than Ever

Otto8 Mar 2023 15:33 UTC
49 points
6 comments4 min readEA link
(time.com)

Civ­i­liza­tion Re-Emerg­ing After a Catas­trophic Collapse

MichaelA27 Jun 2020 3:22 UTC
32 points
18 comments2 min readEA link
(www.youtube.com)

“Can We Sur­vive Tech­nol­ogy?” by John von Neumann

Eli Rose13 Mar 2023 2:26 UTC
51 points
0 comments1 min readEA link
(geosci.uchicago.edu)

[Paper] In­ter­ven­tions that May Prevent or Mol­lify Su­per­vol­canic Eruptions

Denkenberger15 Jan 2018 21:46 UTC
23 points
8 comments1 min readEA link

[Question] How can we se­cure more re­search po­si­tions at our uni­ver­si­ties for x-risk re­searchers?

Neil Crawford6 Sep 2022 14:41 UTC
3 points
2 comments1 min readEA link

Hinges and crises

Jan_Kulveit17 Mar 2022 13:43 UTC
72 points
6 comments3 min readEA link

Differ­en­tial tech­nolog­i­cal de­vel­op­ment

james25 Jun 2020 10:54 UTC
37 points
7 comments5 min readEA link

Cli­mate Change Overview: CERI Sum­mer Re­search Fellowship

hb57417 Mar 2022 11:04 UTC
33 points
0 comments4 min readEA link

My cur­rent thoughts on MIRI’s “highly re­li­able agent de­sign” work

Daniel_Dewey7 Jul 2017 1:17 UTC
60 points
59 comments19 min readEA link

[Cross-post] A nu­clear war fore­cast is not a coin flip

David Johnston15 Mar 2022 4:01 UTC
29 points
12 comments3 min readEA link

16 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Nov & Dec 2019 up­date)

HaydnBelfield15 Jan 2020 12:07 UTC
21 points
0 comments9 min readEA link

[linkpost] Peter Singer: The Hinge of History

mic16 Jan 2022 1:25 UTC
38 points
8 comments3 min readEA link

An­nounc­ing the 2023 CLR Sum­mer Re­search Fellowship

stefan.torges17 Mar 2023 12:11 UTC
81 points
0 comments3 min readEA link

Astro­nom­i­cal Waste: The Op­por­tu­nity Cost of De­layed Tech­nolog­i­cal Devel­op­ment—Nick Bostrom (2003)

james10 Jun 2021 21:21 UTC
10 points
0 comments8 min readEA link
(www.nickbostrom.com)

Ge­orge Church, Kevin Esvelt, & Nathan Labenz: Open un­til dan­ger­ous — gene drive and the case for re­form­ing research

EA Global2 Jun 2017 8:48 UTC
9 points
0 comments1 min readEA link
(www.youtube.com)

Amesh Adalja: Pan­demic pathogens

EA Global8 Jun 2018 7:15 UTC
11 points
1 comment21 min readEA link
(www.youtube.com)

On pre­sent­ing the case for AI risk

Aryeh Englander8 Mar 2022 21:37 UTC
114 points
12 comments4 min readEA link

Notes on Apollo re­port on biodefense

Linch23 Jul 2022 21:38 UTC
69 points
1 comment12 min readEA link
(biodefensecommission.org)

In­creased Availa­bil­ity and Willing­ness for De­ploy­ment of Re­sources for Effec­tive Altru­ism and Long-Termism

Evan_Gaensbauer29 Dec 2021 20:20 UTC
46 points
1 comment2 min readEA link

An­nounc­ing the EA Archive

Aaron Bergman6 Jul 2023 13:49 UTC
67 points
18 comments2 min readEA link

Defin­ing Meta Ex­is­ten­tial Risk

rhys_lindmark9 Jul 2019 18:16 UTC
13 points
3 comments4 min readEA link

Seek­ing EA ex­perts in­ter­ested in the evolu­tion­ary psy­chol­ogy of ex­is­ten­tial risks

Geoffrey Miller23 Oct 2019 18:19 UTC
22 points
1 comment1 min readEA link

Hu­man sur­vival is a policy choice

Peter Wildeford3 Jun 2022 18:53 UTC
25 points
2 comments6 min readEA link
(www.pasteurscube.com)

[Question] What would you say gives you a feel­ing of ex­is­ten­tial hope, and what can we do to in­spire more of it?

elteerkers26 Jan 2022 13:46 UTC
18 points
4 comments1 min readEA link

U.S. Has De­stroyed the Last of Its Once-Vast Chem­i­cal Weapons Arsenal

JMonty18 Jul 2023 1:47 UTC
19 points
2 comments1 min readEA link
(www.nytimes.com)

Con­cepts of ex­is­ten­tial catas­tro­phe (Hilary Greaves)

Global Priorities Institute9 Nov 2023 17:42 UTC
41 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

Nick Beck­stead: Fireside chat (2020)

EA Global21 Nov 2020 8:12 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

How likely is World War III?

Stephen Clare15 Feb 2022 15:09 UTC
116 points
22 comments16 min readEA link

Max Teg­mark: Effec­tive al­tru­ism, ex­is­ten­tial risk, and ex­is­ten­tial hope

EA Global2 Jun 2017 8:48 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

AI X-Risk: In­te­grat­ing on the Shoulders of Giants

TD_Pilditch1 Nov 2022 16:07 UTC
34 points
0 comments47 min readEA link

Scru­ti­niz­ing AI Risk (80K, #81) - v. quick summary

Ben23 Jul 2020 19:02 UTC
10 points
1 comment3 min readEA link

Pri­ori­tiz­ing x-risks may re­quire car­ing about fu­ture people

elifland14 Aug 2022 0:55 UTC
182 points
39 comments6 min readEA link
(www.foxy-scout.com)

A Pin and a Bal­loon: An­thropic Frag­ility In­creases Chances of Ru­n­away Global Warm­ing

turchin11 Sep 2022 10:22 UTC
33 points
25 comments53 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port April—Septem­ber 2019

HaydnBelfield30 Sep 2019 19:20 UTC
14 points
1 comment15 min readEA link

Re­vis­it­ing “Why Global Poverty”

Jeff Kaufman1 Jun 2022 20:20 UTC
66 points
0 comments3 min readEA link
(www.jefftk.com)

[Question] What ac­tions would ob­vi­ously de­crease x-risk?

Eli Rose6 Oct 2019 21:00 UTC
22 points
28 comments1 min readEA link

The Top AI Safety Bets for 2023: GiveWiki’s Lat­est Recommendations

Dawn Drescher11 Nov 2023 9:04 UTC
10 points
4 comments8 min readEA link

The Precipice—Sum­mary/​Review

Nikola11 Oct 2022 0:06 UTC
10 points
0 comments5 min readEA link

Why I ex­pect suc­cess­ful (nar­row) alignment

Tobias_Baumann29 Dec 2018 15:46 UTC
18 points
10 comments1 min readEA link
(s-risks.org)

Longter­mists Should Work on AI—There is No “AI Neu­tral” Sce­nario

simeon_c7 Aug 2022 16:43 UTC
42 points
62 comments6 min readEA link

Carl Ro­bichaud: Fac­ing the risk of nu­clear war in the 21st century

EA Global15 Jul 2020 17:17 UTC
16 points
0 comments11 min readEA link
(www.youtube.com)

Pres­i­dent Trump as a Global Catas­trophic Risk

HaydnBelfield18 Nov 2016 18:02 UTC
22 points
16 comments27 min readEA link

13 ideas for new Ex­is­ten­tial Risk Movies & TV Shows – what are your ideas?

HaydnBelfield12 Apr 2022 11:47 UTC
81 points
15 comments4 min readEA link

Is Bit­coin Danger­ous?

postlibertarian19 Dec 2021 19:35 UTC
14 points
7 comments9 min readEA link

[Question] What’s the GiveDirectly of longter­mism & ex­is­ten­tial risk?

Nathan Young15 Nov 2021 23:55 UTC
28 points
25 comments1 min readEA link

What suc­cess looks like

mariushobbhahn28 Jun 2022 14:30 UTC
108 points
20 comments19 min readEA link

The Case for Strong Longtermism

Global Priorities Institute3 Sep 2019 1:17 UTC
14 points
1 comment3 min readEA link
(globalprioritiesinstitute.org)

What we tried

Jan_Kulveit21 Mar 2022 15:26 UTC
71 points
8 comments9 min readEA link

Long-Term Fu­ture Fund: May 2021 grant recommendations

abergal27 May 2021 6:44 UTC
110 points
17 comments58 min readEA link

Split­ting the timeline as an ex­tinc­tion risk intervention

NunoSempere6 Feb 2022 19:59 UTC
14 points
27 comments4 min readEA link

Man­i­fund: What we’re fund­ing (weeks 2-4)

Austin4 Aug 2023 16:00 UTC
65 points
6 comments5 min readEA link
(manifund.substack.com)

Ap­ply to the new Open Philan­thropy Tech­nol­ogy Policy Fel­low­ship!

lukeprog20 Jul 2021 18:41 UTC
78 points
6 comments4 min readEA link

Thoughts on “The Case for Strong Longter­mism” (Greaves & MacAskill)

MichaelA2 May 2021 18:00 UTC
30 points
21 comments2 min readEA link

Hiring en­g­ineers and re­searchers to help al­ign GPT-3

Paul_Christiano1 Oct 2020 18:52 UTC
107 points
19 comments3 min readEA link

The Pen­tagon claims China will likely have 1,500 nu­clear war­heads by 2035

Will Aldred12 Dec 2022 18:12 UTC
34 points
3 comments2 min readEA link
(media.defense.gov)

“Ex­is­ten­tial Risk” is badly named and leads to nar­row fo­cus on as­tro­nom­i­cal waste

freedomandutility22 Aug 2022 20:25 UTC
38 points
2 comments2 min readEA link

Fund biose­cu­rity officers at universities

freedomandutility31 Oct 2022 11:49 UTC
13 points
3 comments1 min readEA link

An­drew Sny­der Beat­tie: Biotech­nol­ogy and ex­is­ten­tial risk

EA Global3 Nov 2017 7:43 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

Toby Ord: Fireside Chat and Q&A

EA Global21 Jul 2020 16:23 UTC
14 points
0 comments25 min readEA link
(www.youtube.com)

Ad­dress­ing Global Poverty as a Strat­egy to Im­prove the Long-Term Future

bshumway7 Aug 2020 6:27 UTC
40 points
18 comments16 min readEA link

TED talk on Moloch and AI

LivBoeree15 Nov 2023 19:28 UTC
69 points
7 comments1 min readEA link

[Question] Is trans­for­ma­tive AI the biggest ex­is­ten­tial risk? Why or why not?

BrownHairedEevee5 Mar 2022 3:54 UTC
9 points
11 comments1 min readEA link

Will re­leas­ing the weights of large lan­guage mod­els grant wide­spread ac­cess to pan­demic agents?

Jeff Kaufman30 Oct 2023 17:42 UTC
56 points
18 comments1 min readEA link
(arxiv.org)

Why Yud­kowsky is wrong about “co­va­lently bonded equiv­a­lents of biol­ogy”

titotal6 Dec 2023 14:09 UTC
19 points
20 comments16 min readEA link
(open.substack.com)

[Fu­ture Perfect] How to be a good ancestor

Pablo2 Jul 2021 13:17 UTC
41 points
3 comments2 min readEA link
(www.vox.com)

Rus­sian x-risks newslet­ter, fall 2019

avturchin3 Dec 2019 17:01 UTC
27 points
2 comments3 min readEA link

The most im­por­tant cli­mate change uncertainty

cwa26 Jul 2022 15:15 UTC
144 points
28 comments11 min readEA link

UN Sec­re­tary-Gen­eral recog­nises ex­is­ten­tial threat from AI

Greg_Colbourn15 Jun 2023 17:03 UTC
58 points
1 comment1 min readEA link

Google Maps nuke-mode

AndreFerretti31 Jan 2023 21:37 UTC
11 points
6 comments1 min readEA link

My highly per­sonal skep­ti­cism brain­dump on ex­is­ten­tial risk from ar­tifi­cial in­tel­li­gence.

NunoSempere23 Jan 2023 20:08 UTC
431 points
116 comments14 min readEA link
(nunosempere.com)

[Question] What would it look like for AIS to no longer be ne­glected?

Rockwell16 Jun 2023 15:59 UTC
99 points
15 comments1 min readEA link

Seth Baum: Rec­on­cil­ing in­ter­na­tional security

EA Global8 Jun 2018 7:15 UTC
9 points
0 comments16 min readEA link
(www.youtube.com)

A Sur­vey of the Po­ten­tial Long-term Im­pacts of AI

Sam Clarke18 Jul 2022 9:48 UTC
63 points
2 comments27 min readEA link

Longter­mists are per­ceived as power-seeking

OllieBase20 Jun 2023 8:39 UTC
133 points
43 comments2 min readEA link

Geo­eng­ineer­ing to re­duce global catas­trophic risk?

Niklas Lehmann29 May 2022 15:50 UTC
7 points
3 comments5 min readEA link

A Sim­ple Model of AGI De­ploy­ment Risk

djbinder9 Jul 2021 9:44 UTC
29 points
0 comments5 min readEA link

Pro­posal: Create A New Longter­mism Organization

Brian Lui7 Feb 2023 5:59 UTC
25 points
37 comments6 min readEA link

Some EA Fo­rum Posts I’d like to write

Linch23 Feb 2021 5:27 UTC
100 points
10 comments5 min readEA link

My thoughts on nan­otech­nol­ogy strat­egy re­search as an EA cause area

Ben Snodin2 May 2022 9:41 UTC
136 points
17 comments33 min readEA link

Mone­tary and so­cial in­cen­tives in longter­mist careers

Vaidehi Agarwalla23 Sep 2023 21:03 UTC
140 points
5 comments6 min readEA link

Notes on nukes, IR, and AI from “Arse­nals of Folly” (and other books)

tlevin4 Sep 2023 19:02 UTC
20 points
2 comments6 min readEA link

In­tro­duc­ing the Si­mon In­sti­tute for Longterm Gover­nance (SI)

maxime29 Mar 2021 18:10 UTC
116 points
23 comments11 min readEA link

[Question] Is there ev­i­dence that recom­mender sys­tems are chang­ing users’ prefer­ences?

zdgroff12 Apr 2021 19:11 UTC
60 points
15 comments1 min readEA link

Warn­ing Shots Prob­a­bly Wouldn’t Change The Pic­ture Much

So8res6 Oct 2022 5:15 UTC
90 points
20 comments2 min readEA link

Com­mon-sense cases where “hy­po­thet­i­cal fu­ture peo­ple” matter

tlevin12 Aug 2022 14:05 UTC
107 points
21 comments4 min readEA link

Un­jour­nal’s 1st eval is up: Re­silient foods pa­per (Denken­berger et al) & AMA ~48 hours

david_reinstein6 Feb 2023 19:18 UTC
77 points
10 comments3 min readEA link
(sciety.org)

The Precipice: In­tro­duc­tion and Chap­ter One

Toby_Ord2 Jan 2021 7:13 UTC
21 points
0 comments1 min readEA link

State­ment on Plu­ral­ism in Ex­is­ten­tial Risk Stud­ies

GideonF16 Aug 2023 14:29 UTC
20 points
46 comments7 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port Oc­to­ber 2019 - Jan­uary 2020

HaydnBelfield8 Apr 2020 13:28 UTC
8 points
0 comments17 min readEA link

What is the ex­pected effect of poverty alle­vi­a­tion efforts on ex­is­ten­tial risk?

WilliamKiely2 Oct 2015 20:43 UTC
13 points
25 comments1 min readEA link

Database of orgs rele­vant to longter­mist/​x-risk work

MichaelA19 Nov 2021 8:50 UTC
103 points
65 comments4 min readEA link

More than Earth War­riors: The Di­verse Roles of Geo­scien­tists in Effec­tive Altruism

Christopher Chan31 Aug 2023 6:30 UTC
56 points
5 comments16 min readEA link

What Ques­tions Should We Ask Speak­ers at the Stan­ford Ex­is­ten­tial Risks Con­fer­ence?

kuhanj10 Apr 2021 0:51 UTC
21 points
2 comments1 min readEA link

Longter­mism Fund: Au­gust 2023 Grants Report

Michael Townsend20 Aug 2023 5:34 UTC
81 points
3 comments5 min readEA link

3 sug­ges­tions about jar­gon in EA

MichaelA5 Jul 2020 3:37 UTC
131 points
18 comments5 min readEA link

Should we be spend­ing no less on al­ter­nate foods than AI now?

Denkenberger29 Oct 2017 23:28 UTC
38 points
9 comments16 min readEA link

How likely is a nu­clear ex­change be­tween the US and Rus­sia?

Luisa_Rodriguez20 Jun 2019 1:49 UTC
80 points
13 comments13 min readEA link

AMA: Andy We­ber (U.S. As­sis­tant Sec­re­tary of Defense from 2009-2014)

Lizka26 Sep 2023 9:40 UTC
132 points
49 comments1 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #2 - A Crit­i­cal Look at Model Conclusions

Ben Snodin18 Aug 2020 10:25 UTC
58 points
11 comments17 min readEA link

Sav­ing lives near the precipice

MikhailSamin29 Jul 2022 15:08 UTC
18 points
10 comments3 min readEA link

Fu­ture peo­ple might not ex­ist

Indra Gesink30 Nov 2022 19:17 UTC
18 points
0 comments4 min readEA link

Three pillars for avoid­ing AGI catas­tro­phe: Tech­ni­cal al­ign­ment, de­ploy­ment de­ci­sions, and co­or­di­na­tion

alexlintz3 Aug 2022 21:24 UTC
90 points
4 comments11 min readEA link

Thoughts on “The Offense-Defense Balance Rarely Changes”

Cullen12 Feb 2024 3:26 UTC
42 points
2 comments5 min readEA link

[Question] What is the im­pact of the Nu­clear Ban Treaty?

DC29 Nov 2020 0:26 UTC
22 points
3 comments2 min readEA link

Ries­gos Catas­trófi­cos Globales needs funding

Jaime Sevilla1 Aug 2023 16:26 UTC
98 points
1 comment3 min readEA link

Risks from so­lar flares?

freedomandutility7 Mar 2023 11:12 UTC
20 points
6 comments1 min readEA link

“Is this risk ac­tu­ally ex­is­ten­tial?” may be less im­por­tant than we think

mikbp3 Mar 2023 22:18 UTC
8 points
8 comments2 min readEA link

In­tro­duc­ing the new Ries­gos Catas­trófi­cos Globales team

Jaime Sevilla3 Mar 2023 23:04 UTC
77 points
3 comments5 min readEA link
(riesgoscatastroficosglobales.com)

Longterm cost-effec­tive­ness of Founders Pledge’s Cli­mate Change Fund

Vasco Grilo14 Sep 2022 15:11 UTC
35 points
9 comments6 min readEA link

Lord Martin Rees: an appreciation

HaydnBelfield24 Oct 2022 16:11 UTC
184 points
19 comments5 min readEA link

Com­plex­ity of value but not dis­value im­plies more fo­cus on s-risk. Mo­ral un­cer­tainty and prefer­ence util­i­tar­i­anism also do.

Chi13 Feb 2024 22:24 UTC
92 points
7 comments2 min readEA link

[Question] Why isn’t there a char­ity eval­u­a­tor for longter­mist pro­jects?

BrownHairedEevee29 Jul 2023 16:30 UTC
106 points
43 comments1 min readEA link

Re­think Pri­ori­ties: Seek­ing Ex­pres­sions of In­ter­est for Spe­cial Pro­jects Next Year

kierangreig29 Nov 2023 13:44 UTC
58 points
0 comments5 min readEA link

8 pos­si­ble high-level goals for work on nu­clear risk

MichaelA29 Mar 2022 6:30 UTC
46 points
4 comments13 min readEA link

Ap­ply to Spring 2024 policy in­tern­ships (we can help)

Elika4 Oct 2023 14:45 UTC
26 points
2 comments1 min readEA link

[Linkpost] Prospect Magaz­ine—How to save hu­man­ity from extinction

jackva26 Sep 2023 19:16 UTC
32 points
2 comments1 min readEA link
(www.prospectmagazine.co.uk)

Mike Hue­mer on The Case for Tyranny

Chris Leong16 Jul 2020 9:57 UTC
24 points
5 comments1 min readEA link
(fakenous.net)

Suc­ces­sif: Join our AI pro­gram to help miti­gate the catas­trophic risks of AI

ClaireB25 Oct 2023 16:51 UTC
15 points
0 comments5 min readEA link

Longter­mist (es­pe­cially x-risk) ter­minol­ogy has bi­as­ing assumptions

Arepo30 Oct 2022 16:26 UTC
64 points
13 comments7 min readEA link

Fu­ture benefits of miti­gat­ing food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo4 Mar 2023 16:22 UTC
20 points
0 comments28 min readEA link

My at­tempt at ex­plain­ing the case for AI risk in a straight­for­ward way

JulianHazell25 Mar 2023 16:32 UTC
25 points
7 comments18 min readEA link
(muddyclothes.substack.com)

“Effec­tive Altru­ism, Longter­mism, and the Prob­lem of Ar­bi­trary Power” by Gwilym David Blunt

WobblyPandaPanda12 Nov 2023 1:21 UTC
22 points
2 comments1 min readEA link
(www.thephilosopher1923.org)

Vi­talik: Cryp­toe­co­nomics and X-Risk Re­searchers Should Listen to Each Other More

Emerson Spartz21 Nov 2021 18:50 UTC
56 points
3 comments5 min readEA link

Com­mon ground for longtermists

Tobias_Baumann29 Jul 2020 10:26 UTC
83 points
8 comments4 min readEA link

[Question] Model­ing hu­man­ity’s ro­bust­ness to GCRs?

QubitSwarm999 Jun 2022 17:20 UTC
7 points
1 comment2 min readEA link

In­cu­bat­ing AI x-risk pro­jects: some per­sonal reflections

Ben Snodin19 Dec 2023 17:03 UTC
84 points
10 comments9 min readEA link

In­crease in fu­ture po­ten­tial due to miti­gat­ing food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo28 Mar 2023 7:43 UTC
12 points
2 comments8 min readEA link

[Question] What are the stan­dard terms used to de­scribe risks in risk man­age­ment?

BrownHairedEevee5 Mar 2022 4:07 UTC
11 points
2 comments1 min readEA link

Nu­clear brinks­man­ship is not a good AI x-risk strategy

titotal30 Mar 2023 22:07 UTC
11 points
8 comments5 min readEA link

Man­i­fund x AI Worldviews

Austin31 Mar 2023 15:32 UTC
32 points
2 comments2 min readEA link
(manifund.org)

Ex­is­ten­tial Risk: More to explore

EA Handbook1 Jan 2021 10:15 UTC
2 points
0 comments1 min readEA link

Be­ing at peace with Doom

Johannes C. Mayer9 Apr 2023 15:01 UTC
15 points
7 comments4 min readEA link
(www.lesswrong.com)

“Safety Cul­ture for AI” is im­por­tant, but isn’t go­ing to be easy

Davidmanheim26 Jun 2023 11:27 UTC
50 points
0 comments2 min readEA link
(papers.ssrn.com)

ProMED, plat­form which alerted the world to Covid, might col­lapse—can EA donors fund it?

freedomandutility4 Aug 2023 16:42 UTC
41 points
4 comments1 min readEA link

The Precipice: a risky re­view by a non-EA

fmoreno8 Aug 2020 14:40 UTC
14 points
1 comment18 min readEA link

Part 2: AI Safety Move­ment Builders should help the com­mu­nity to op­ti­mise three fac­tors: con­trib­u­tors, con­tri­bu­tions and coordination

PeterSlattery15 Dec 2022 22:48 UTC
34 points
0 comments6 min readEA link

[Question] Strongest real-world ex­am­ples sup­port­ing AI risk claims?

rosehadshar5 Sep 2023 15:11 UTC
52 points
9 comments1 min readEA link

Read­ing the ethi­cists 2: Hunt­ing for AI al­ign­ment papers

Charlie Steiner6 Jun 2022 15:53 UTC
9 points
0 comments1 min readEA link
(www.lesswrong.com)

Man­i­fund: what we’re fund­ing (week 1)

Austin15 Jul 2023 0:28 UTC
43 points
11 comments3 min readEA link
(manifund.substack.com)

Should strong longter­mists re­ally want to min­i­mize ex­is­ten­tial risk?

tobycrisford4 Dec 2022 16:56 UTC
31 points
9 comments4 min readEA link

Con­cerns/​Thoughts over in­ter­na­tional aid, longter­mism and philo­soph­i­cal notes on speak­ing with Larry Temkin.

Ben Yeoh27 Jul 2022 19:51 UTC
35 points
1 comment9 min readEA link

For­mal­is­ing the “Wash­ing Out Hy­poth­e­sis”

dwebb25 Mar 2021 11:40 UTC
101 points
27 comments12 min readEA link

[Linkpost] Nick Bostrom’s “Apol­ogy for an Old Email”

pseudonym12 Jan 2023 4:55 UTC
16 points
96 comments1 min readEA link
(nickbostrom.com)

Prevent­ing a US-China war as a policy priority

Matthew_Barnett22 Jun 2022 18:07 UTC
64 points
22 comments8 min readEA link

[Question] EA views on the AUKUS se­cu­rity pact?

DavidZhang29 Sep 2021 8:24 UTC
28 points
14 comments1 min readEA link

We in­ter­viewed 15 China-fo­cused re­searchers on how to do good research

gabriel_wagner19 Dec 2022 19:08 UTC
46 points
3 comments23 min readEA link

Pos­si­ble way of re­duc­ing great power war prob­a­bil­ity?

Denkenberger28 Nov 2019 4:27 UTC
33 points
2 comments2 min readEA link

Dani Nedal: Risks from great-power competition

EA Global13 Feb 2020 22:10 UTC
20 points
0 comments16 min readEA link
(www.youtube.com)

[Question] Books /​ book re­views on nu­clear risk, WMDs, great power war?

MichaelA15 Dec 2020 1:40 UTC
15 points
16 comments1 min readEA link

[Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the like­li­hood of war

Stephen Clare7 Nov 2022 12:22 UTC
74 points
1 comment2 min readEA link
(chrisblattman.com)

Prob­lem ar­eas be­yond 80,000 Hours’ cur­rent pri­ori­ties

Ardenlk22 Jun 2020 12:49 UTC
274 points
62 comments16 min readEA link

Brian Tse: Risks from Great Power Conflicts

EA Global11 Mar 2019 15:02 UTC
23 points
2 comments12 min readEA link
(www.youtube.com)

[Question] Will the next global con­flict be more like World War I?

FJehn26 Mar 2022 14:57 UTC
7 points
5 comments2 min readEA link

EA should help Tyler Cowen pub­lish his drafted book in China

Matt Brooks14 Jan 2023 21:10 UTC
38 points
8 comments3 min readEA link

Alli­ance to Feed the Earth in Disasters (ALLFED) Progress Re­port & Giv­ing Tues­day Appeal

Denkenberger21 Nov 2018 5:20 UTC
21 points
3 comments8 min readEA link

Should Effec­tive Altru­ism be at war with North Korea?

BenHoffman5 May 2019 1:44 UTC
−14 points
8 comments5 min readEA link
(benjaminrosshoffman.com)

Linkpost: The Scien­tists, the States­men, and the Bomb

Lauro Langosco8 Jul 2022 10:46 UTC
13 points
5 comments3 min readEA link
(www.bismarckanalysis.com)

Bu­gout Bags for Disasters

Fin8 Mar 2022 17:03 UTC
10 points
0 comments4 min readEA link

Why Don’t We Use Chem­i­cal Weapons Any­more?

Dale23 Apr 2020 1:25 UTC
28 points
4 comments3 min readEA link
(acoup.blog)

The Mys­tery of the Cuban mis­sile crisis

Nathan_Barnard5 May 2022 22:51 UTC
10 points
4 comments9 min readEA link

[Question] What are the best ways to en­courage de-es­ca­la­tion in re­gards to Ukraine?

oh543219 Oct 2022 11:15 UTC
13 points
4 comments1 min readEA link

Peter Wilde­ford on Fore­cast­ing Nu­clear Risk and why EA should fund scal­able non-profits

Michaël Trazzi13 Apr 2022 16:29 UTC
9 points
1 comment3 min readEA link
(theinsideview.github.io)

[Question] What is the strongest case for nu­clear weapons?

Garrison12 Apr 2022 19:32 UTC
6 points
3 comments1 min readEA link

Nu­clear Pre­pared­ness Guide

Fin8 Mar 2022 17:04 UTC
103 points
13 comments11 min readEA link

The chance of ac­ci­den­tal nu­clear war has been go­ing down

Peter Wildeford31 May 2022 14:48 UTC
66 points
6 comments1 min readEA link
(www.pasteurscube.com)

AMA: Joan Rohlfing, Pres­i­dent and COO of the Nu­clear Threat Initiative

Joan Rohlfing6 Dec 2021 20:58 UTC
74 points
35 comments1 min readEA link

Notes on “The Bomb: Pres­i­dents, Gen­er­als, and the Se­cret His­tory of Nu­clear War” (2020)

MichaelA6 Feb 2021 11:10 UTC
18 points
5 comments8 min readEA link

Early Reflec­tions and Re­sources on the Rus­sian In­va­sion of Ukraine

SethBaum18 Mar 2022 14:54 UTC
57 points
3 comments8 min readEA link

Risks from the UK’s planned in­crease in nu­clear warheads

Matt Goodman15 Aug 2021 20:14 UTC
23 points
8 comments2 min readEA link

Samotsvety Nu­clear Risk Fore­casts — March 2022

NunoSempere10 Mar 2022 18:52 UTC
155 points
54 comments5 min readEA link

Notes on “The Myth of the Nu­clear Revolu­tion” (Lie­ber & Press, 2020)

DM24 May 2022 15:02 UTC
42 points
2 comments20 min readEA link

Con­flict and poverty (or should we tackle poverty in nu­clear con­texts more?)

Sanjay6 Mar 2020 21:59 UTC
13 points
0 comments7 min readEA link

Have we un­der­es­ti­mated the risk of a NATO-Rus­sia nu­clear war? Can we do any­thing about it?

TopherHallquist9 Jul 2015 16:09 UTC
8 points
20 comments1 min readEA link

What’s the big deal about hy­per­sonic mis­siles?

jia18 May 2020 7:17 UTC
40 points
9 comments5 min readEA link

Be­ing the per­son who doesn’t launch nukes: new EA cause?

MichaelDickens6 Aug 2022 3:44 UTC
9 points
3 comments1 min readEA link

Ask a Nu­clear Expert

Group Organizer3 Mar 2022 11:28 UTC
5 points
0 comments1 min readEA link

The Threat of Nu­clear Ter­ror­ism MOOC [link]

RyanCarey19 Oct 2017 12:31 UTC
7 points
0 comments1 min readEA link

Some AI Gover­nance Re­search Ideas

MarkusAnderljung3 Jun 2021 10:51 UTC
99 points
5 comments2 min readEA link

[Question] How can we de­crease the short-term prob­a­bil­ity of the nu­clear war?

Just Learning1 Mar 2022 3:24 UTC
18 points
0 comments1 min readEA link

Notes on ‘Atomic Ob­ses­sion’ (2009)

lukeprog26 Oct 2019 0:30 UTC
62 points
16 comments8 min readEA link

Event on Oct 9: Fore­cast­ing Nu­clear Risk with Re­think Pri­ori­ties’ Michael Aird

MichaelA29 Sep 2021 17:45 UTC
24 points
3 comments2 min readEA link
(www.eventbrite.com)

Overview of Re­think Pri­ori­ties’ work on risks from nu­clear weapons

MichaelA10 Jun 2021 18:48 UTC
43 points
1 comment3 min readEA link

Is it eth­i­cal to ex­pand nu­clear en­ergy use?

simonfriederich5 Nov 2022 10:38 UTC
12 points
5 comments3 min readEA link

Op­por­tu­ni­ties that sur­prised us dur­ing our Clearer Think­ing Re­grants program

spencerg7 Nov 2022 13:09 UTC
114 points
5 comments9 min readEA link

Rus­sia-Ukraine Con­flict: Fore­cast­ing Nu­clear Risk in 2022

Metaculus24 Mar 2022 21:03 UTC
23 points
1 comment12 min readEA link

Re­duc­ing Nu­clear Risk Through Im­proved US-China Relations

Metaculus21 Mar 2022 11:50 UTC
31 points
19 comments5 min readEA link

[Question] How many times would nu­clear weapons have been used if ev­ery state had them since 1950?

eca4 May 2021 15:34 UTC
16 points
13 comments1 min readEA link

An­nounc­ing Me­tac­u­lus’s ‘Red Lines in Ukraine’ Fore­cast­ing Project

christian21 Oct 2022 22:13 UTC
17 points
0 comments1 min readEA link
(www.metaculus.com)

Why I think there’s a one-in-six chance of an im­mi­nent global nu­clear war

Tegmark8 Oct 2022 23:25 UTC
53 points
24 comments1 min readEA link

Model­ing re­sponses to changes in nu­clear risk

Nathan_Barnard23 Jun 2022 12:50 UTC
7 points
0 comments5 min readEA link

An­nounc­ing the first is­sue of Asterisk

Clara Collier21 Nov 2022 18:51 UTC
275 points
47 comments1 min readEA link

Does the US nu­clear policy still tar­get cities?

Jeffrey Ladish2 Oct 2019 17:46 UTC
32 points
0 comments10 min readEA link

[EAG talk] The like­li­hood and sever­ity of a US-Rus­sia nu­clear ex­change (Ro­driguez, 2019)

Will Aldred3 Jul 2022 13:53 UTC
32 points
0 comments2 min readEA link
(www.youtube.com)

Get­ting Nu­clear Policy Right Is Hard

Gentzel19 Sep 2017 1:00 UTC
16 points
4 comments1 min readEA link

The Nu­clear Threat Ini­ti­a­tive is not only nu­clear—notes from a call with NTI

Sanjay26 Jun 2020 17:29 UTC
29 points
2 comments7 min readEA link

[Question] I’m in­ter­view­ing some­times EA critic Jeffrey Lewis (AKA Arms Con­trol Wonk) about what we get right and wrong when it comes to nu­clear weapons and nu­clear se­cu­rity. What should I ask him?

Robert_Wiblin26 Aug 2022 18:06 UTC
33 points
8 comments1 min readEA link

Book re­view: The Dooms­day Machine

eukaryote10 Sep 2018 1:43 UTC
49 points
6 comments5 min readEA link

China’s Z-Ma­chine, a test fa­cil­ity for nu­clear weapons

EdoArad13 Dec 2018 7:03 UTC
11 points
0 comments1 min readEA link
(www.scmp.com)

An­nounc­ing In­sights for Impact

Christian Pearson4 Jan 2023 7:00 UTC
80 points
6 comments1 min readEA link

Me­tac­u­lus Year in Re­view: 2022

christian6 Jan 2023 1:23 UTC
25 points
2 comments4 min readEA link
(metaculus.medium.com)

Off-Earth Governance

EdoArad6 Sep 2019 19:26 UTC
18 points
3 comments2 min readEA link

Stu­art Arm­strong: The far fu­ture of in­tel­li­gent life across the universe

EA Global8 Jun 2018 7:15 UTC
19 points
0 comments12 min readEA link
(www.youtube.com)

An­nounc­ing the Cen­ter for Space Governance

Space Governance10 Jul 2022 13:53 UTC
72 points
6 comments1 min readEA link

Leav­ing Earth

Arjun Khemani6 Jul 2022 10:45 UTC
5 points
0 comments6 min readEA link
(arjunkhemani.com)

All Pos­si­ble Views About Hu­man­ity’s Fu­ture Are Wild

Holden Karnofsky13 Jul 2021 16:57 UTC
216 points
47 comments8 min readEA link
(www.cold-takes.com)

An­nounc­ing the Space Fu­tures Initiative

Carson Ezell12 Sep 2022 12:37 UTC
71 points
3 comments2 min readEA link

[Question] What anal­y­sis has been done of space coloniza­tion as a cause area?

Eli Rose9 Oct 2019 20:33 UTC
14 points
8 comments1 min readEA link

Space Ex­plo­ra­tion & Satel­lites on Our World in Data

EdMathieu14 Jun 2022 12:05 UTC
57 points
2 comments1 min readEA link
(ourworldindata.org)

Will we even­tu­ally be able to colonize other stars? Notes from a pre­limi­nary review

Nick_Beckstead22 Jun 2014 18:19 UTC
30 points
7 comments32 min readEA link

Space gov­er­nance—prob­lem profile

finm8 May 2022 17:16 UTC
65 points
11 comments12 min readEA link

Lu­nar Colony

purplepeople19 Dec 2016 16:43 UTC
2 points
26 comments1 min readEA link

Kurzge­sagt’s most re­cent video pro­mot­ing the in­tro­duc­ing of wild life to other planets is un­eth­i­cal and irresponsible

David van Beveren11 Dec 2022 20:43 UTC
100 points
33 comments2 min readEA link

Save the Date: EAGxMars

OllieBase1 Apr 2022 11:44 UTC
148 points
15 comments1 min readEA link

[Pod­cast] Ajeya Co­tra on wor­ld­view di­ver­sifi­ca­tion and how big the fu­ture could be

BrownHairedEevee22 Jan 2021 23:57 UTC
57 points
20 comments1 min readEA link
(80000hours.org)

When to di­ver­sify? Break­ing down mis­sion-cor­re­lated investing

jh29 Nov 2022 11:18 UTC
33 points
2 comments8 min readEA link

“Far Co­or­di­na­tion”

𝕮𝖎𝖓𝖊𝖗𝖆23 Nov 2022 17:14 UTC
5 points
0 comments1 min readEA link

Test Your Knowl­edge of the Long-Term Future

AndreFerretti10 Dec 2022 11:01 UTC
22 points
0 comments1 min readEA link

Five Areas I Wish EAs Gave More Focus

Prometheus27 Oct 2022 6:13 UTC
8 points
14 comments4 min readEA link

[Question] Does Utili­tar­ian Longter­mism Im­ply Directed Pansper­mia?

Ahrenbach24 Apr 2020 18:15 UTC
4 points
17 comments1 min readEA link

In­sti­tu­tions Can­not Res­train Dark-Triad AI Exploitation

Remmelt27 Dec 2022 10:34 UTC
8 points
0 comments1 min readEA link

In­for­ma­tion se­cu­rity con­sid­er­a­tions for AI and the long term future

Jeffrey Ladish2 May 2022 20:53 UTC
126 points
8 comments11 min readEA link

Nar­ra­tion: Re­duc­ing long-term risks from malev­olent actors

D0TheMath15 Jul 2021 16:26 UTC
23 points
0 comments1 min readEA link
(anchor.fm)

Bulk­ing in­for­ma­tion ad­di­tion­al­ities in global de­vel­op­ment for medium-term lo­cal prosperity

brb24311 Apr 2022 17:52 UTC
4 points
0 comments4 min readEA link

How big are risks from non-state ac­tors? Base rates for ter­ror­ist attacks

rosehadshar16 Feb 2022 10:20 UTC
54 points
3 comments18 min readEA link

[Question] Are there highly lev­er­aged dona­tion op­por­tu­ni­ties to pre­vent wars and dic­ta­tor­ships?

Dawn Drescher26 Feb 2022 3:31 UTC
58 points
8 comments1 min readEA link

Kel­sey Piper’s re­cent in­ter­view of SBF

Agustín Covarrubias16 Nov 2022 20:30 UTC
292 points
155 comments2 min readEA link
(www.vox.com)

[Question] Most harm­ful peo­ple in his­tory?

SiebeRozendal11 Sep 2022 3:04 UTC
16 points
9 comments1 min readEA link

Case for emer­gency re­sponse teams

Gavin5 Apr 2022 11:08 UTC
246 points
48 comments5 min readEA link

An ar­gu­ment that EA should fo­cus more on cli­mate change

Ann Garth8 Dec 2020 2:48 UTC
30 points
3 comments11 min readEA link

Per­sua­sion Tools: AI takeover with­out AGI or agency?

kokotajlod20 Nov 2020 16:56 UTC
15 points
5 comments10 min readEA link

Robert Wiblin: Mak­ing sense of long-term in­di­rect effects

EA Global6 Aug 2016 0:40 UTC
14 points
0 comments17 min readEA link
(www.youtube.com)

What can we learn from a short pre­view of a su­per-erup­tion and what are some tractable ways of miti­gat­ing it

Mike Cassidy3 Feb 2022 11:26 UTC
53 points
0 comments6 min readEA link

The case for de­lay­ing so­lar geo­eng­ineer­ing research

John G. Halstead23 Mar 2019 15:26 UTC
53 points
22 comments5 min readEA link

On the Vuln­er­a­ble World Hypothesis

Catherine Brewer1 Aug 2022 12:55 UTC
44 points
13 comments14 min readEA link

AGI in a vuln­er­a­ble world

AI Impacts2 Apr 2020 3:43 UTC
17 points
0 comments1 min readEA link
(aiimpacts.org)

Mea­sur­ing the “apoc­a­lyp­tic resi­d­ual”

acylhalide18 Dec 2021 9:06 UTC
12 points
2 comments5 min readEA link

“The Vuln­er­a­ble World Hy­poth­e­sis” (Nick Bostrom’s new pa­per)

Hauke Hillebrandt9 Nov 2018 11:20 UTC
24 points
6 comments1 min readEA link
(nickbostrom.com)

Civ­i­liza­tional vulnerabilities

Vasco Grilo22 Apr 2022 9:37 UTC
7 points
0 comments3 min readEA link

In­fluenc­ing United Na­tions Space Governance

Carson Ezell9 May 2022 17:44 UTC
30 points
0 comments12 min readEA link

William Mar­shall: Lu­nar colony

EA Global11 Aug 2017 8:19 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

[Question] Is there a sub­field of eco­nomics de­voted to “frag­ility vs re­silience”?

steve632021 Jul 2020 2:21 UTC
23 points
5 comments1 min readEA link

David Denken­berger: Loss of In­dus­trial Civ­i­liza­tion and Re­cov­ery (Work­shop)

Denkenberger19 Feb 2019 15:58 UTC
27 points
1 comment15 min readEA link

Should we buy coal mines?

John G. Halstead4 May 2022 7:28 UTC
216 points
31 comments7 min readEA link

What is the like­li­hood that civ­i­liza­tional col­lapse would cause tech­nolog­i­cal stag­na­tion? (out­dated re­search)

Luisa_Rodriguez19 Oct 2022 17:35 UTC
79 points
13 comments32 min readEA link

[Question] Has any­one done an anal­y­sis on the im­por­tance, tractabil­ity, and ne­glect­ed­ness of keep­ing hu­man-di­gestible calories in the ocean in case we need it af­ter some global catas­tro­phe?

Mati_Roy17 Feb 2020 7:47 UTC
9 points
5 comments1 min readEA link

The Do­mes­ti­ca­tion of Zebras

Further or Alternatively9 Sep 2022 10:58 UTC
15 points
20 comments2 min readEA link

Ground­wa­ter De­ple­tion: con­trib­u­tor to global civ­i­liza­tion col­lapse.

RickJS3 Dec 2022 7:09 UTC
11 points
6 comments3 min readEA link
(drive.google.com)

Hu­man­i­ties Re­search Ideas for Longtermists

Lizka9 Jun 2021 4:39 UTC
151 points
13 comments13 min readEA link

User-Friendly In­tro Post

James Odene [User-Friendly]23 Jun 2022 11:26 UTC
117 points
7 comments6 min readEA link

[Question] Book on Civil­i­sa­tional Col­lapse?

Milton7 Oct 2020 8:51 UTC
9 points
6 comments1 min readEA link

AGI safety and los­ing elec­tric­ity/​in­dus­try re­silience cost-effectiveness

Ross_Tieman17 Nov 2019 8:42 UTC
31 points
10 comments38 min readEA link

The Case for a Strate­gic U.S. Coal Re­serve for Cli­mate and Catastrophes

ColdButtonIssues5 May 2022 1:24 UTC
30 points
3 comments5 min readEA link

Mar­i­time ca­pa­bil­ity and post-catas­tro­phe re­silience.

Tom Gardiner14 Jul 2022 11:29 UTC
32 points
7 comments6 min readEA link

Ad­vice Wanted on Ex­pand­ing an EA Project

Denkenberger23 Apr 2016 23:20 UTC
4 points
3 comments2 min readEA link

Some his­tory top­ics it might be very valuable to investigate

MichaelA8 Jul 2020 2:40 UTC
91 points
34 comments7 min readEA link

Notes on Hen­rich’s “The WEIRDest Peo­ple in the World” (2020)

MichaelA25 Mar 2021 5:04 UTC
38 points
4 comments3 min readEA link

Luisa Ro­driguez: Do­ing em­piri­cal global pri­ori­ties re­search — the ques­tion of civ­i­liza­tional col­lapse and recovery

EA Global25 Oct 2020 5:48 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

Notes on Schel­ling’s “Strat­egy of Con­flict” (1960)

MichaelA29 Jan 2021 8:56 UTC
20 points
4 comments8 min readEA link

Re­search ex­er­cise: 5-minute in­side view on how to re­duce risk of nu­clear war

Emrik23 Oct 2022 12:42 UTC
16 points
2 comments6 min readEA link

Pod­cast: Samo Burja on the war in Ukraine, avoid­ing nu­clear war and the longer term implications

Gus Docker11 Mar 2022 18:50 UTC
4 points
6 comments15 min readEA link
(www.utilitarianpodcast.com)

Re­quest for pro­pos­als: Help Open Philan­thropy quan­tify biolog­i­cal risk

djbinder12 May 2022 21:28 UTC
137 points
10 comments6 min readEA link

“A Creepy Feel­ing”: Nixon’s De­ci­sion to Disavow Biolog­i­cal Weapons

ThomasW30 Sep 2022 15:17 UTC
48 points
3 comments11 min readEA link

Sur­vey on AI ex­is­ten­tial risk scenarios

Sam Clarke8 Jun 2021 17:12 UTC
154 points
11 comments6 min readEA link

Should some­one start a grass­roots cam­paign for USA to recog­nise the State of Pales­tine?

freedomandutility11 May 2021 15:29 UTC
−4 points
4 comments1 min readEA link

The Germy Para­dox – The empty sky: A his­tory of state biolog­i­cal weapons programs

eukaryote24 Sep 2019 5:26 UTC
24 points
0 comments1 min readEA link
(eukaryotewritesblog.com)

US House Vote on Sup­port for Ye­men War

Radical Empath Ismam12 Dec 2022 2:13 UTC
−4 points
0 comments1 min readEA link
(theintercept.com)

Open Philan­thropy Shal­low In­ves­ti­ga­tion: Civil Con­flict Reduction

Lauren Gilbert12 Apr 2022 18:18 UTC
121 points
12 comments22 min readEA link

[Question] What are effec­tive ways to help Ukraini­ans right now?

Manuel Allgaier24 Feb 2022 22:20 UTC
130 points
86 comments1 min readEA link

The Germy Para­dox: An Introduction

eukaryote24 Sep 2019 5:18 UTC
48 points
4 comments3 min readEA link
(eukaryotewritesblog.com)

Ground­wa­ter crisis: a threat of civ­i­liza­tion collapse

RickJS24 Dec 2022 21:21 UTC
0 points
0 comments3 min readEA link
(drive.google.com)

An Overview of Poli­ti­cal Science (Policy and In­ter­na­tional Re­la­tions Primer for EA, Part 3)

Davidmanheim5 Jan 2020 12:54 UTC
22 points
4 comments10 min readEA link

EA and the Pos­si­ble De­cline of the US: Very Rough Thoughts

Cullen8 Jan 2021 7:30 UTC
56 points
19 comments4 min readEA link

[Question] Ukraine: How a reg­u­lar per­son can effec­tively help their coun­try dur­ing war?

Valmothy26 Feb 2022 10:58 UTC
49 points
19 comments1 min readEA link

[Cause Ex­plo­ra­tion Prizes] Pocket Parks

Open Philanthropy29 Aug 2022 11:01 UTC
7 points
0 comments10 min readEA link

A coun­ter­fac­tual QALY for USD 2.60–28.94?

brb2436 Sep 2020 21:45 UTC
37 points
6 comments5 min readEA link

900+ Fore­cast­ers on Whether Rus­sia Will In­vade Ukraine

Metaculus19 Feb 2022 13:29 UTC
51 points
0 comments4 min readEA link
(metaculus.medium.com)

The Germy Para­dox – Filters: A taboo

eukaryote19 Oct 2019 0:14 UTC
17 points
2 comments9 min readEA link
(eukaryotewritesblog.com)

Rough at­tempt to pro­file char­i­ties which sup­port Ukrainian war re­lief in terms of their cost-effec­tive­ness.

Michael27 Feb 2022 0:51 UTC
29 points
5 comments4 min readEA link

Eval­u­at­ing Com­mu­nal Violence from an Effec­tive Altru­ist Perspective

frankfredericks13 Aug 2019 19:38 UTC
16 points
4 comments8 min readEA link

Some thoughts on risks from nar­row, non-agen­tic AI

richard_ngo19 Jan 2021 0:07 UTC
36 points
2 comments8 min readEA link

The case for not in­vad­ing Crimea

kbog19 Jan 2023 6:37 UTC
12 points
16 comments19 min readEA link

On­shore al­gae farms could feed the world

Tyner10 Oct 2022 17:44 UTC
11 points
0 comments1 min readEA link
(tos.org)

EA and the cur­rent fund­ing situation

William_MacAskill10 May 2022 2:26 UTC
564 points
185 comments21 min readEA link

Po­ta­toes: A Crit­i­cal Review

Pablo Villalobos10 May 2022 15:27 UTC
118 points
27 comments6 min readEA link
(docs.google.com)

On famines, food tech­nolo­gies and global shocks

Ramiro12 Oct 2021 14:28 UTC
16 points
2 comments4 min readEA link

Res­lab Re­quest for In­for­ma­tion: EA hard­ware projects

Joel Becker26 Oct 2022 11:38 UTC
46 points
15 comments1 min readEA link

Safety Sells: For-profit in­vest­ing into civ­i­liza­tional re­silience (food se­cu­rity, biose­cu­rity)

FGH3 Jan 2023 12:24 UTC
30 points
4 comments6 min readEA link

U.S. Ex­ec­u­tive branch ap­point­ments: why you may want to pur­sue one and tips for how to do so

Demosthenes_USA28 Nov 2020 19:20 UTC
65 points
6 comments12 min readEA link

[Re­view and notes] How Democ­racy Ends—David Runciman

Ben13 Feb 2020 22:30 UTC
31 points
1 comment5 min readEA link

An Ar­gu­ment for Why the Fu­ture May Be Good

Ben_West19 Jul 2017 22:03 UTC
41 points
30 comments4 min readEA link

[Question] Where the QALY’s at in poli­ti­cal sci­ence?

Timothy_Liptrot5 Aug 2020 5:04 UTC
7 points
7 comments1 min readEA link

EA read­ing list: suffer­ing-fo­cused ethics

richard_ngo3 Aug 2020 9:40 UTC
43 points
3 comments1 min readEA link

Ben Garfinkel: The fu­ture of surveillance

EA Global8 Jun 2018 7:51 UTC
18 points
0 comments11 min readEA link
(www.youtube.com)

A rel­a­tively athe­o­ret­i­cal per­spec­tive on as­tro­nom­i­cal waste

Nick_Beckstead6 Aug 2014 0:55 UTC
9 points
8 comments8 min readEA link

[Question] Books on au­thor­i­tar­i­anism, Rus­sia, China, NK, demo­cratic back­slid­ing, etc.?

MichaelA2 Feb 2021 3:52 UTC
14 points
21 comments1 min readEA link

Ide­olog­i­cal en­g­ineer­ing and so­cial con­trol: A ne­glected topic in AI safety re­search?

Geoffrey Miller1 Sep 2017 18:52 UTC
17 points
8 comments2 min readEA link

Cause Area: Hu­man Rights in North Korea

Dawn Drescher20 Nov 2017 20:52 UTC
61 points
12 comments20 min readEA link

Disen­tan­gling “Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing”

Lizka13 Sep 2021 23:50 UTC
90 points
16 comments19 min readEA link

What is a ‘broad in­ter­ven­tion’ and what is a ‘nar­row in­ter­ven­tion’? Are we con­fus­ing our­selves?

Robert_Wiblin19 Dec 2015 16:12 UTC
20 points
3 comments2 min readEA link

We Should Give Ex­tinc­tion Risk an Acronym

Charlie_Guthmann19 Oct 2022 7:16 UTC
21 points
16 comments1 min readEA link

For­mal­iz­ing Ex­tinc­tion Risk Re­duc­tion vs. Longtermism

Charlie_Guthmann17 Oct 2022 15:37 UTC
12 points
2 comments1 min readEA link

Art Recom­men­da­tion: Dr. Stone

Devin Kalish9 Jul 2022 10:53 UTC
15 points
2 comments1 min readEA link
(www.crunchyroll.com)

Pre­serv­ing and con­tin­u­ing al­ign­ment re­search through a se­vere global catastrophe

A_donor6 Mar 2022 18:43 UTC
38 points
14 comments4 min readEA link

Base Rates on United States Regime Collapse

AppliedDivinityStudies5 Apr 2021 17:14 UTC
14 points
3 comments7 min readEA link

How democ­racy ends: a re­view and reevaluation

richard_ngo24 Nov 2018 17:41 UTC
27 points
2 comments6 min readEA link
(thinkingcomplete.blogspot.com)

There’s No Fire Alarm for Ar­tifi­cial Gen­eral Intelligence

EA Forum Archives14 Oct 2017 2:41 UTC
30 points
1 comment26 min readEA link
(www.lesswrong.com)

“Slower tech de­vel­op­ment” can be about or­der­ing, grad­u­al­ness, or dis­tance from now

MichaelA14 Nov 2021 20:58 UTC
47 points
3 comments4 min readEA link

Beyond fire alarms: free­ing the groupstruck

Katja_Grace3 Oct 2021 2:33 UTC
61 points
6 comments49 min readEA link

[Question] Should re­cent events make us more or less con­cerned about biorisk?

Linch19 Mar 2020 0:00 UTC
23 points
7 comments1 min readEA link

[Question] Will the coro­n­avirus pan­demic ad­vance or hin­der the spread of longter­mist-style val­ues/​think­ing?

MichaelA19 Mar 2020 6:07 UTC
12 points
3 comments1 min readEA link

[Question] How will the world re­spond to “AI x-risk warn­ing shots” ac­cord­ing to refer­ence class fore­cast­ing?

Ryan Kidd18 Apr 2022 9:10 UTC
18 points
1 comment1 min readEA link

[Question] Do we know how many big as­ter­oids could im­pact Earth?

Milan_Griffes7 Jul 2019 16:06 UTC
31 points
7 comments1 min readEA link

Shal­low Re­port on Asteroids

Joel Tan20 Oct 2022 1:34 UTC
27 points
7 comments13 min readEA link

Fu­ture Mat­ters #5: su­per­vol­ca­noes, AI takeover, and What We Owe the Future

Pablo14 Sep 2022 13:02 UTC
31 points
5 comments18 min readEA link

Cause prioritisation: Preventing lake Kivu in Africa eruption which could kill two million.

turchin · 28 Dec 2022 12:32 UTC
70 points
3 comments · 3 min read · EA link

[Question] Disaster Relief?

Hira Khan · 5 Aug 2022 20:57 UTC
1 point
1 comment · 1 min read · EA link

COVID-19 response as XRisk intervention

tyleralterman · 10 Apr 2020 6:16 UTC
51 points
5 comments · 4 min read · EA link

[Cause Exploration Prizes] Natural Disaster Preparedness and Research

Open Philanthropy · 19 Aug 2022 11:11 UTC
12 points
3 comments · 10 min read · EA link

Brian Tomasik – Differential Intellectual Progress as a Positive-Sum Project

Tessa · 23 Oct 2013 23:31 UTC
22 points
0 comments · 1 min read · EA link
(longtermrisk.org)

[Question] What previous work has been done on factors that affect the pace of technological development?

Megan Kinniment · 27 Apr 2021 18:43 UTC
21 points
6 comments · 1 min read · EA link

Patrick Collison on Effective Altruism

SamuelKnoche · 23 Jun 2020 9:04 UTC
98 points
4 comments · 3 min read · EA link

Does Moral Philosophy Drive Moral Progress?

AppliedDivinityStudies · 2 Jul 2021 21:22 UTC
38 points
4 comments · 4 min read · EA link

Super-exponential growth implies that accelerating growth is unimportant in the long run

kbog · 11 Aug 2020 7:20 UTC
36 points
9 comments · 4 min read · EA link

Investigating how technology-focused academic fields become self-sustaining

Ben Snodin · 6 Sep 2021 15:04 UTC
43 points
4 comments · 42 min read · EA link

Self-Sustaining Fields Literature Review: Technology Forecasting, How Academic Fields Emerge, and the Science of Science

Megan Kinniment · 6 Sep 2021 15:04 UTC
27 points
0 comments · 6 min read · EA link

Differential Technological Development: Some Early Thinking

Nick_Beckstead · 29 Sep 2015 10:23 UTC
4 points
0 comments · 9 min read · EA link
(blog.givewell.org)

The applicability of transsentientist critical path analysis

Peter Sølling · 11 Aug 2020 11:26 UTC
0 points
2 comments · 32 min read · EA link
(www.optimalaltruism.com)

A Framework for Technical Progress on Biosecurity

kyle_fish · 3 Nov 2021 10:57 UTC
76 points
1 comment · 9 min read · EA link

On Progress and Prosperity

Paul_Christiano · 15 Oct 2014 7:03 UTC
59 points
32 comments · 9 min read · EA link

A note about differential technological development

So8res · 24 Jul 2022 23:41 UTC
58 points
8 comments · 5 min read · EA link

Which AI Safety Org to Join?

Yonatan Cale · 11 Oct 2022 19:42 UTC
17 points
21 comments · 1 min read · EA link

Safety regulators: A tool for mitigating technological risk

JustinShovelain · 21 Jan 2020 13:09 UTC
10 points
0 comments · 4 min read · EA link

If tech progress might be bad, what should we tell people about it?

Robert_Wiblin · 16 Feb 2016 10:26 UTC
21 points
18 comments · 2 min read · EA link

Slowing down AI progress is an underexplored alignment strategy

Michael Huang · 13 Jul 2022 3:22 UTC
92 points
11 comments · 3 min read · EA link
(www.lesswrong.com)

Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk

Michaël Trazzi · 16 Sep 2022 18:00 UTC
48 points
6 comments · 4 min read · EA link
(theinsideview.ai)

[Question] Is EA too theoretical? Can we reward practicality?

Prof.Weird · 17 Nov 2020 23:07 UTC
4 points
0 comments · 1 min read · EA link

Improving science: Influencing the direction of research and the choice of research questions

C Tilli · 20 Dec 2021 10:20 UTC
64 points
13 comments · 16 min read · EA link

Cause Area: Differential Neurotechnology Development

mwcvitkovic · 10 Aug 2022 2:39 UTC
93 points
7 comments · 36 min read · EA link

Instead of technical research, more people should focus on buying time

Akash · 5 Nov 2022 20:43 UTC
107 points
32 comments · 1 min read · EA link

Restricting brain organoid research to slow down AGI

freedomandutility · 9 Nov 2022 13:01 UTC
8 points
2 comments · 1 min read · EA link

Effects of anti-aging research on the long-term future

Matthew_Barnett · 27 Feb 2020 22:42 UTC
61 points
33 comments · 4 min read · EA link

What would better science look like?

C Tilli · 30 Aug 2021 8:57 UTC
24 points
3 comments · 5 min read · EA link

Let’s think about slowing down AI

Katja_Grace · 23 Dec 2022 19:56 UTC
334 points
9 comments · 1 min read · EA link

New Study in Science Suggests a Severe Bottleneck in Human Population Size 930,000 Years Ago

DannyBressler · 31 Aug 2023 22:19 UTC
8 points
0 comments · 1 min read · EA link
(www.science.org)

Updates from Campaign for AI Safety

Jolyn Khoo · 30 Aug 2023 5:36 UTC
7 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

[Question] Would a super-intelligent AI necessarily support its own existence?

Porque? · 25 Jun 2023 10:39 UTC
8 points
2 comments · 2 min read · EA link

International risk of food insecurity and mass mortality in a runaway global warming scenario

Vasco Grilo · 2 Sep 2023 7:28 UTC
15 points
2 comments · 6 min read · EA link
(www.sciencedirect.com)

Report: Proposals for the prevention and detection of emerging infectious diseases (EID) in Guatemala

JorgeTorresC · 22 Sep 2023 20:27 UTC
14 points
2 comments · 2 min read · EA link

New Princeton course on longtermism

Calvin_Baker · 1 Sep 2023 20:31 UTC
88 points
7 comments · 6 min read · EA link

Immortality or death by AGI

ImmortalityOrDeathByAGI · 24 Sep 2023 9:44 UTC
12 points
2 comments · 4 min read · EA link
(www.lesswrong.com)

Aim for conditional pauses

AnonResearcherMajorAILab · 25 Sep 2023 1:05 UTC
100 points
42 comments · 12 min read · EA link

From Plant Pathogens to Human Threats: Unveiling the Silent Menace of Fungal Diseases

emmannaemeka · 24 Sep 2023 22:16 UTC
22 points
0 comments · 3 min read · EA link

Balancing the Scales: Addressing Biological X-Risk Research Disparities Beyond the West

emmannaemeka · 22 Sep 2023 21:31 UTC
10 points
1 comment · 2 min read · EA link

Superforecasting the premises in “Is power-seeking AI an existential risk?”

Joe_Carlsmith · 18 Oct 2023 20:33 UTC
110 points
3 comments · 1 min read · EA link

Getting Traction on Nuclear Risks

ELN · 29 Jun 2023 5:10 UTC
9 points
0 comments · 8 min read · EA link

Report on Frontier Model Training

YafahEdelman · 30 Aug 2023 20:04 UTC
19 points
1 comment · 21 min read · EA link
(docs.google.com)

Updates from Campaign for AI Safety

Jolyn Khoo · 29 Jun 2023 7:23 UTC
8 points
0 comments · 1 min read · EA link
(www.campaignforaisafety.org)

2023 ALLFED Marginal Funding Appeal

JuanGarcia · 17 Nov 2023 10:55 UTC
31 points
2 comments · 3 min read · EA link

Legal Assistance for Victims of AI

bob · 17 Mar 2023 11:42 UTC
52 points
19 comments · 1 min read · EA link

AI Incident Sharing—Best practices from other fields and a comprehensive list of existing platforms

stepanlos · 28 Jun 2023 16:18 UTC
42 points
1 comment · 4 min read · EA link

Symbiosis, not alignment, as the goal for liberal democracies in the transition to artificial general intelligence

simonfriederich · 17 Mar 2023 13:04 UTC
18 points
2 comments · 24 min read · EA link
(rdcu.be)

Destroy the “neoliberal hallucination” & fight for animal rights through open rescue.

Chloe Leffakis · 15 Aug 2023 4:47 UTC
−17 points
2 comments · 1 min read · EA link
(www.reddit.com)

Biosafety Regulations (BMBL) and their relevance for AI

stepanlos · 29 Jun 2023 19:20 UTC
7 points
0 comments · 4 min read · EA link

Panel on nuclear risk: Rear Admiral John Gower, Patricia Lewis, and Paul Ingram

Paul Ingram · 4 Jul 2023 13:24 UTC
8 points
0 comments · 30 min read · EA link

THE DAY IS COMING

rogersbacon · 12 Jul 2023 17:44 UTC
−29 points
0 comments · 5 min read · EA link
(www.secretorum.life)

AISN #12: Policy Proposals from NTIA’s Request for Comment and Reconsidering Instrumental Convergence

Center for AI Safety · 27 Jun 2023 15:25 UTC
30 points
3 comments · 7 min read · EA link
(newsletter.safe.ai)

ML4G Germany—AI Alignment Camp

Evander H. · 27 Jun 2023 15:33 UTC
5 points
0 comments · 1 min read · EA link

How to make independent research more fun (80k After Hours)

rgb · 17 Mar 2023 22:25 UTC
24 points
0 comments · 25 min read · EA link
(80000hours.org)

Artificial Intelligence Safety of Film Capacitors

yonxinzhang · 21 Nov 2023 11:51 UTC
−2 points
0 comments · 1 min read · EA link

Welcome to Apply: The 2024 Vitalik Buterin Fellowships in AI Existential Safety by FLI!

Zhijing Jin · 25 Sep 2023 16:20 UTC
14 points
5 comments · 2 min read · EA link

How Can Risk Aversion Affect Your Cause Prioritization?

Laura Duffy · 20 Oct 2023 19:46 UTC
108 points
6 comments · 16 min read · EA link
(docs.google.com)

The Hedonic Treadmill Dilemma – Reflecting on the Stories of Wile E. Coyote

alexherwix · 20 Mar 2023 9:06 UTC
28 points
1 comment · 7 min read · EA link

Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through ‘Islamic Longtermism’

Zayn A · 15 Aug 2023 11:34 UTC
83 points
9 comments · 20 min read · EA link

Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models (Gustav Alexandrie and Maya Eden)

Global Priorities Institute · 4 Jul 2023 13:28 UTC
33 points
2 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

Three mistakes in the moral mathematics of existential risk (David Thorstad)

Global Priorities Institute · 4 Jul 2023 13:18 UTC
48 points
14 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

Against the Open Source / Closed Source Dichotomy: Regulated Source as a Model for Responsible AI Development

alexherwix · 4 Sep 2023 20:23 UTC
5 points
1 comment · 6 min read · EA link

Intermediate Report on Abrupt Sunlight Reduction Scenarios

Stan Pinsent · 20 Oct 2023 9:15 UTC
21 points
6 comments · 4 min read · EA link

Public Opinion on AI Safety: AIMS 2023 and 2021 Summary

Janet Pauketat · 25 Sep 2023 18:09 UTC
19 points
0 comments · 3 min read · EA link
(www.sentienceinstitute.org)

Cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives (Kevin Esvelt on the 80,000 Hours Podcast)

80000_Hours · 4 Oct 2023 13:58 UTC
38 points
1 comment · 16 min read · EA link

Animal Weapons: Lessons for Humans in the Age of X-Risk

Damin Curtis · 4 Jul 2023 14:43 UTC
32 points
1 comment · 10 min read · EA link

Update: an improved simple model of recurrent catastrophes

Arepo · 10 Nov 2023 13:39 UTC
11 points
2 comments · 2 min read · EA link

Part 4: Reflections after attending the CEA Intro to EA Virtual Program in Summer 2023 – Chapter 4: Our Final Century?

Andreas P · 1 Nov 2023 7:12 UTC
8 points
0 comments · 3 min read · EA link

OpenAI is starting a new “Superintelligence alignment” team and they’re hiring

Alejandro Ortega · 5 Jul 2023 18:27 UTC
100 points
16 comments · 1 min read · EA link
(openai.com)

Revolutionising National Risk Assessment (NRA): improved methods and stakeholder engagement to tackle global catastrophe and existential risks

Matt Boyd · 21 Mar 2023 6:05 UTC
26 points
1 comment · 8 min read · EA link

Alignment for focused chatbots?

Beckpm · 8 Jul 2023 15:09 UTC
−1 points
0 comments · 1 min read · EA link

Constructive Discussion and Thinking Methodology for Severe Situations including Existential Risks

Aino · 8 Jul 2023 0:04 UTC
1 point
0 comments · 7 min read · EA link

An Overview of Catastrophic AI Risks

Center for AI Safety · 15 Aug 2023 21:52 UTC
37 points
1 comment · 13 min read · EA link
(www.safe.ai)

Announcing #AISummitTalks featuring Professor Stuart Russell and many others

Otto · 24 Oct 2023 10:16 UTC
9 points
1 comment · 1 min read · EA link

Introducing For Future—A Platform to Discover and Collaborate on Longtermist Solutions

RubyT · 5 Oct 2023 12:54 UTC
0 points
0 comments · 5 min read · EA link

[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk

Jordan Arel · 11 Jul 2023 20:32 UTC
4 points
7 comments · 1 min read · EA link

AISN #14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Center for AI Safety · 12 Jul 2023 16:58 UTC
26 points
0 comments · 4 min read · EA link
(newsletter.safe.ai)

AISN #13: An interdisciplinary perspective on AI proxy failures, new competitors to ChatGPT, and prompting language models to misbehave

Center for AI Safety · 5 Jul 2023 15:33 UTC
25 points
0 comments · 9 min read · EA link
(newsletter.safe.ai)

[Linkpost] NY Times Feature on Anthropic

Garrison · 12 Jul 2023 19:30 UTC
34 points
3 comments · 5 min read · EA link
(www.nytimes.com)

Book Review: Oryx and Crake

Benny Smith · 13 Jul 2023 14:10 UTC
8 points
0 comments · 17 min read · EA link

Paper summary – Protecting future generations: A global survey of legal academics

rileyharris · 5 Sep 2023 10:29 UTC
25 points
1 comment · 3 min read · EA link
(www.legalpriorities.org)

Agnes Callard on our future, the human quest, and finding purpose

Tobias Häberli · 22 Mar 2023 12:29 UTC
5 points
0 comments · 21 min read · EA link

Please wonder about the hard parts of the alignment problem

MikhailSamin · 11 Jul 2023 17:02 UTC
7 points
0 comments · 1 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 19 Jul 2023 8:15 UTC
5 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos · 14 Jul 2023 17:10 UTC
4 points
1 comment · 6 min read · EA link

AISN #15: China and the US take action to regulate AI, results from a tournament forecasting AI risk, updates on xAI’s plan, and Meta releases its open-source and commercially available Llama 2

Center for AI Safety · 19 Jul 2023 1:40 UTC
5 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
62 points
32 comments · 8 min read · EA link

Australians call for AI safety to be taken seriously

Alexander Saeri · 21 Jul 2023 1:16 UTC
51 points
1 comment · 1 min read · EA link

The Unknowable Catastrophe

Aino · 6 Jul 2023 15:37 UTC
3 points
0 comments · 3 min read · EA link

The Dilemma of Ultimate Technology

Aino · 20 Jul 2023 12:24 UTC
1 point
0 comments · 7 min read · EA link

Coaching matchmaking is now open: invest in community wellness by investing in yourself

Tee · 17 Jul 2023 11:17 UTC
39 points
0 comments · 20 min read · EA link

[Question] Why haven’t we been destroyed by a power-seeking AGI from elsewhere in the universe?

Jadon Schmitt · 22 Jul 2023 7:21 UTC
35 points
14 comments · 1 min read · EA link

BOUNTY AVAILABLE: AI ethicists, what are your object-level arguments against AI notkilleveryoneism?

Peter Berggren · 6 Jul 2023 17:37 UTC
0 points
19 comments · 2 min read · EA link

AI-Relevant Regulation: Insurance in Safety-Critical Industries

SWK · 22 Jul 2023 17:52 UTC
5 points
0 comments · 6 min read · EA link

AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer

Center for AI Safety · 25 Jul 2023 16:45 UTC
7 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

The company that builds the UK’s nuclear weapons is hiring for roles related to wargaming

Will Howard · 6 Jul 2023 20:25 UTC
15 points
0 comments · 1 min read · EA link

Investigating the Long Reflection

Yannick_Muehlhaeuser · 24 Jul 2023 16:26 UTC
29 points
3 comments · 12 min read · EA link

Fundamentals of Fatal Risks

Aino · 29 Jul 2023 7:12 UTC
1 point
0 comments · 4 min read · EA link

AXRP Episode 24 - Superalignment with Jan Leike

DanielFilan · 27 Jul 2023 4:56 UTC
23 points
0 comments · 1 min read · EA link
(axrp.net)

Sentience Institute 2023 End of Year Summary

MichaelDello · 27 Nov 2023 12:11 UTC
25 points
0 comments · 5 min read · EA link
(www.sentienceinstitute.org)

ALLFED’s 2023 Highlights

Sonia_Cassidy · 1 Dec 2023 0:47 UTC
61 points
5 comments · 27 min read · EA link

[Question] Do you think the probability of future AI sentience (suffering) is >0.1%? Why?

jackchang110 · 10 Jul 2023 16:41 UTC
4 points
0 comments · 1 min read · EA link

Thresholds #1: What does good look like for longtermism?

Spencer Ericson · 25 Jul 2023 19:17 UTC
45 points
36 comments · 8 min read · EA link

Announcing the ERA Cambridge Summer Research Fellowship

Nandini Shiralkar · 16 Mar 2023 11:37 UTC
83 points
6 comments · 3 min read · EA link

Risks from Bad Space Governance

Yannick_Muehlhaeuser · 17 Jul 2023 12:36 UTC
39 points
1 comment · 6 min read · EA link

Infographics of the report on food security in Argentina in the event of an Abrupt Reduction of Sunlight Scenario (ASRS)

JorgeTorresC · 31 Jul 2023 19:36 UTC
25 points
0 comments · 1 min read · EA link

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto · 24 Jul 2023 10:18 UTC
36 points
3 comments · 7 min read · EA link
(time.com)

Nuclear Risk and Philanthropic Strategy [Founders Pledge]

christian.r · 25 Jul 2023 20:22 UTC
76 points
14 comments · 76 min read · EA link
(www.founderspledge.com)

XPT forecasts on (some) Direct Approach model inputs

Forecasting Research Institute · 20 Aug 2023 12:39 UTC
37 points
0 comments · 9 min read · EA link

Why we should fear any bioengineered fungus and give fungi research attention

emmannaemeka · 18 Aug 2023 3:35 UTC
67 points
4 comments · 3 min read · EA link

Drawing down carbon with volcanic rock dust on farmers’ fields

Vivian · 18 Aug 2023 13:50 UTC
1 point
0 comments · 1 min read · EA link
(e360.yale.edu)

XPT forecasts on (some) biological anchors inputs

Forecasting Research Institute · 24 Jul 2023 13:32 UTC
37 points
2 comments · 12 min read · EA link

Existential risk from AI and what DC could do about it (Ezra Klein on the 80,000 Hours Podcast)

80000_Hours · 26 Jul 2023 11:48 UTC
31 points
1 comment · 14 min read · EA link

Announcing the winners of the Reslab Request for Information

Aron Lajko · 27 Jul 2023 17:43 UTC
15 points
3 comments · 10 min read · EA link

Will AI kill everyone? Here’s what the godfathers of AI have to say [RA video]

Writer · 19 Aug 2023 17:29 UTC
33 points
0 comments · 2 min read · EA link
(youtu.be)

How would a nuclear war between Russia and the US affect you personally?

Max Görlitz · 27 Jul 2023 13:06 UTC
13 points
4 comments · 1 min read · EA link
(www.youtube.com)

II. Triggering The Race

Maynk02 · 24 Oct 2023 18:45 UTC
3 points
1 comment · 4 min read · EA link

Possible Divergence in AGI Risk Tolerance between Selfish and Altruistic agents

Brad West · 9 Sep 2023 0:22 UTC
11 points
0 comments · 2 min read · EA link

Carl Shulman on AI takeover mechanisms (& more): Part II of Dwarkesh Patel interview for The Lunar Society

Alejandro Ortega · 25 Jul 2023 18:31 UTC
28 points
0 comments · 5 min read · EA link
(www.dwarkeshpatel.com)

Effective Altruism and the strategic ambiguity of ‘doing good’

Jeroen De Ryck · 17 Jul 2023 19:24 UTC
80 points
10 comments · 2 min read · EA link
(medialibrary.uantwerpen.be)

Making EA more inclusive, representative, and impactful in Africa

Ashura Batungwanayo · 17 Aug 2023 20:19 UTC
68 points
13 comments · 4 min read · EA link

Biosecurity Resource Hub from Aron

Aron Lajko · 21 Jul 2023 18:07 UTC
39 points
4 comments · 1 min read · EA link

Who’s right about inputs to the biological anchors model?

rosehadshar · 24 Jul 2023 14:37 UTC
69 points
12 comments · 5 min read · EA link

How much is reducing catastrophic and extinction risk worth, assuming XPT forecasts?

rosehadshar · 24 Jul 2023 15:16 UTC
51 points
1 comment · 11 min read · EA link

AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight

Center for AI Safety · 1 Aug 2023 15:24 UTC
15 points
0 comments · 8 min read · EA link

Eliciting responses to Marc Andreessen’s “Why AI Will Save the World”

Coleman@21stTalks · 17 Jul 2023 19:58 UTC
2 points
2 comments · 1 min read · EA link
(a16z.com)

Introducing the Insights of an ERA Forum Sequence

Nandini Shiralkar · 27 Jul 2023 17:16 UTC
18 points
0 comments · 3 min read · EA link

[Question] To what degree does a threat to a nation’s humanity pose an existential Risk?

emmannaemeka · 6 Oct 2023 16:35 UTC
5 points
0 comments · 1 min read · EA link

Risk-averse Batch Active Inverse Reward Design

Panagiotis Liampas · 7 Oct 2023 8:56 UTC
11 points
0 comments · 15 min read · EA link

Asterisk Magazine Issue 03: AI

Alejandro Ortega · 24 Jul 2023 15:53 UTC
34 points
3 comments · 1 min read · EA link
(asteriskmag.com)

[Question] Will you fund a fungi surveillance study?

emmannaemeka · 7 Sep 2023 20:42 UTC
7 points
2 comments · 1 min read · EA link

Fixing Insider Threats in the AI Supply Chain

Madhav Malhotra · 7 Oct 2023 10:49 UTC
9 points
2 comments · 5 min read · EA link

The AI Endgame: A counterfactual to AI alignment by an AI Safety newcomer

Andreas P · 1 Dec 2023 5:49 UTC
2 points
5 comments · 3 min read · EA link

AMA: Peter Wildeford (Co-CEO at Rethink Priorities)

Peter Wildeford · 18 Jul 2023 21:40 UTC
94 points
71 comments · 1 min read · EA link

The Existential Risk of Speciesist Bias in AI

Sam Tucker · 11 Nov 2023 3:27 UTC
28 points
1 comment · 3 min read · EA link

III. Running its course

Maynk02 · 4 Nov 2023 19:31 UTC