Existential risk

An existential risk is a risk that threatens the destruction of the long-term potential of life.[1] An existential risk could threaten the extinction of humans (and other sentient beings), or it could threaten some other unrecoverable collapse or permanent failure to achieve a potential good state. Natural risks such as those posed by asteroids or supervolcanoes could be existential risks, as could anthropogenic (human-caused) risks like accidents from synthetic biology or unaligned artificial intelligence.

Estimating the probability of existential risk from different sources is difficult, but researchers have published a range of estimates.[1]

Some view reducing existential risks as a key moral priority, for a variety of reasons.[2] Some people simply view the current estimates of existential risk as unacceptably high. Other authors argue that existential risks are especially important because the long-run future of humanity matters a great deal.[3] Many believe that there is no intrinsic moral difference between the importance of a life today and one in a hundred years. However, there may be many more people in the future than there are now. Given these assumptions, existential risks threaten not only the beings alive right now, but also the enormous number of lives yet to be lived. One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born.[4] Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and poorly understood that efforts to reduce them are less cost-effective than work on other promising areas.
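
To make the structure of this argument concrete, here is a minimal sketch of the underlying expected-value arithmetic in Python. Every number in it is an illustrative assumption, not an estimate drawn from the sources cited on this page:

```python
# Minimal sketch of the expected-value argument for existential risk
# reduction. All numbers are illustrative assumptions, not estimates
# from the literature.

future_lives = 1e16     # assumed number of lives yet to be lived if no catastrophe occurs
risk_reduction = 1e-4   # assumed absolute reduction in the probability of existential catastrophe

# If future lives count equally with present ones, the expected value of
# the intervention is the change in probability times the lives at stake.
expected_lives_saved = risk_reduction * future_lives
print(f"Expected future lives saved: {expected_lives_saved:.0e}")  # 1e+12
```

On these deliberately crude assumptions, even a one-in-ten-thousand reduction in risk corresponds to an enormous expected number of future lives, which is what gives the argument its force; the objections above target the assumptions rather than the arithmetic.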

In The Precipice: Existential Risk and the Future of Humanity, Toby Ord offers several policy and research recommendations for handling existential risks.[5]

Further reading

Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.

Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.

Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Ord, Toby (2020) Existential risks to humanity in Pedro Conceição (ed.) The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.

Sánchez, Sebastián (2022) Timeline of existential risk, Timelines Wiki.

Related entries

civilizational collapse | criticism of longtermism and existential risk studies | dystopia | estimation of existential risks | ethics of existential risk | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risk

1. ^ Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

2. ^ Todd, Benjamin (2017) The case for reducing existential risks, 80,000 Hours website. (Updated June 2022.)

3. ^ Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.

4. ^ Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, 21 July (updated 1 December 2020).

5. ^ Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, pp. 280–281.

Venn diagrams of existential, global, and suffering catastrophes

MichaelA🔸15 Jul 2020 12:28 UTC
81 points
7 comments7 min readEA link

“Long-Termism” vs. “Existential Risk”

Scott Alexander6 Apr 2022 21:41 UTC
532 points
81 comments3 min readEA link

Katja Grace: Let’s think about slowing down AI

peterhartree23 Dec 2022 0:57 UTC
84 points
6 comments2 min readEA link
(worldspiritsockpuppet.substack.com)

The expected value of extinction risk reduction is positive

JB9 Dec 2018 8:00 UTC
66 points
22 comments35 min readEA link

Charting the precipice: The time of perils and prioritizing x-risk

David Bernard24 Oct 2023 16:25 UTC
86 points
14 comments25 min readEA link

Actualism, asymmetry and extinction

Michael St Jules 🔸7 Jan 2025 16:02 UTC
26 points
0 comments9 min readEA link

Is x-risk the most cost-effective if we count only the next few generations?

Laura Duffy30 Oct 2023 12:43 UTC
120 points
7 comments20 min readEA link
(docs.google.com)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni1 Jul 2021 21:01 UTC
148 points
10 comments32 min readEA link

Existential risks are not just about humanity

MichaelA🔸28 Apr 2020 0:09 UTC
36 points
0 comments5 min readEA link

Nick Bostrom – Existential Risk Prevention as Global Priority

Zach Stein-Perlman1 Feb 2013 17:00 UTC
15 points
1 comment1 min readEA link
(www.existential-risk.org)

What is existential security?

MichaelA🔸1 Sep 2020 9:40 UTC
34 points
1 comment6 min readEA link

Existential risk as common cause

technicalities5 Dec 2018 14:01 UTC
49 points
22 comments5 min readEA link

Excerpts from “Doing EA Better” on x-risk methodology

Eevee🔹26 Jan 2023 1:04 UTC
22 points
5 comments6 min readEA link
(forum.effectivealtruism.org)

What factors allow societies to survive a crisis?

FJehn9 Apr 2024 8:05 UTC
23 points
1 comment10 min readEA link
(existentialcrunch.substack.com)

Existential Risk Observatory: results and 2022 targets

Otto14 Jan 2022 13:52 UTC
22 points
6 comments4 min readEA link

On the assessment of volcanic eruptions as global catastrophic or existential risks

Mike Cassidy 🔸13 Oct 2021 14:32 UTC
112 points
18 comments19 min readEA link

Database of existential risk estimates

MichaelA🔸15 Apr 2020 12:43 UTC
130 points
37 comments5 min readEA link

How bad would human extinction be?

arvomm23 Oct 2023 12:01 UTC
132 points
25 comments18 min readEA link

The consequences of large-scale blackouts

FJehn21 Oct 2024 6:12 UTC
15 points
5 comments12 min readEA link
(existentialcrunch.substack.com)

Beyond Extinction: Revisiting the Question and Broadening Our View

arvomm17 Mar 2025 16:03 UTC
36 points
3 comments10 min readEA link

Objectives of longtermist policy making

Henrik Øberg Myhre10 Feb 2021 18:26 UTC
54 points
7 comments22 min readEA link

Quantifying the probability of existential catastrophe: A reply to Beard et al.

MichaelA🔸10 Aug 2020 5:56 UTC
21 points
3 comments3 min readEA link
(gcrinstitute.org)

Clarifying existential risks and existential catastrophes

MichaelA🔸24 Apr 2020 13:27 UTC
39 points
3 comments7 min readEA link

Datasets that change the odds you exist

Vasco Grilo🔸24 May 2025 17:28 UTC
16 points
1 comment6 min readEA link
(dynomight.net)

The Importance of Unknown Existential Risks

MichaelDickens23 Jul 2020 19:09 UTC
72 points
11 comments12 min readEA link

The Future Might Not Be So Great

Jacy30 Jun 2022 13:01 UTC
145 points
119 comments34 min readEA link
(www.sentienceinstitute.org)

X-risks to all life v. to humans

RobertHarling3 Jun 2020 15:40 UTC
78 points
33 comments4 min readEA link

Some considerations for different ways to reduce x-risk

Jacy4 Feb 2016 3:21 UTC
28 points
34 comments5 min readEA link

[Question] Is There Actually a Standard or Convincing Response to David Thorstad’s Criticisms of the Value of X-Risk Reduction and of Longtermism?

David Mathers🔸21 May 2025 11:58 UTC
117 points
21 comments2 min readEA link

Fictional Catastrophes, Reel Lessons: What 12 Critically Acclaimed Films Reveal About Surviving Global Catastrophes

Matt Boyd14 May 2025 19:07 UTC
6 points
2 comments1 min readEA link
(adaptresearchwriting.com)

Imagination, Values, and Consciousness: Missing Levers in Risk Reduction?

Ross McMath19 Aug 2025 18:27 UTC
−1 points
0 comments1 min readEA link
(open.substack.com)

Summary of posts on XPT forecasts on AI risk and timelines

Forecasting Research Institute25 Jul 2023 8:42 UTC
28 points
5 comments4 min readEA link

How bad would nuclear winter caused by a US-Russia nuclear exchange be?

Luisa_Rodriguez20 Jun 2019 1:48 UTC
145 points
18 comments43 min readEA link

Nathan A. Sears (1987-2023)

HaydnBelfield29 Mar 2023 16:07 UTC
297 points
7 comments4 min readEA link

Causal diagrams of the paths to existential catastrophe

MichaelA🔸1 Mar 2020 14:08 UTC
51 points
11 comments13 min readEA link

The state of global catastrophic risk research

FJehn22 Jul 2025 8:13 UTC
31 points
6 comments3 min readEA link
(esd.copernicus.org)

Reducing long-term risks from malevolent actors

David_Althaus29 Apr 2020 8:55 UTC
352 points
96 comments37 min readEA link

Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment

Jacy20 Feb 2018 18:29 UTC
107 points
72 comments35 min readEA link
(www.sentienceinstitute.org)

Mitigating x-risk through modularity

Toby Newberry17 Dec 2020 19:54 UTC
103 points
6 comments14 min readEA link

A proposed hierarchy of longtermist concepts

Arepo30 Oct 2022 16:26 UTC
38 points
13 comments4 min readEA link

Effective strategies for changing public opinion: A literature review

Jamie_Harris9 Nov 2021 14:09 UTC
82 points
2 comments36 min readEA link
(www.sentienceinstitute.org)

How AI Takeover Might Happen in Two Years

Joshc7 Feb 2025 23:51 UTC
35 points
7 comments29 min readEA link
(x.com)

Global catastrophic risks law approved in the United States

JorgeTorresC7 Mar 2023 14:28 UTC
159 points
7 comments1 min readEA link
(riesgoscatastroficosglobales.com)

Resilient foods: How to feed everyone, even in the worst of times

FJehn19 Dec 2024 11:12 UTC
11 points
1 comment7 min readEA link
(existentialcrunch.substack.com)

The trouble with tipping points: Are we steering towards a climate catastrophe or a manageable challenge?

FJehn19 Jun 2023 8:57 UTC
24 points
18 comments8 min readEA link
(existentialcrunch.substack.com)

Contra Sagan on Asteroid Weaponization

christian.r4 Dec 2024 17:49 UTC
24 points
1 comment14 min readEA link

2019 AI Alignment Literature Review and Charity Comparison

Larks19 Dec 2019 2:58 UTC
147 points
28 comments62 min readEA link

Long Reflection Reading List

Will Aldred24 Mar 2024 16:27 UTC
101 points
7 comments14 min readEA link

The people’s history of collapse

FJehn6 Aug 2025 8:09 UTC
13 points
2 comments23 min readEA link
(existentialcrunch.substack.com)

Some thoughts on Toby Ord’s existential risk estimates

MichaelA🔸7 Apr 2020 2:19 UTC
67 points
33 comments9 min readEA link

Beyond Simple Existential Risk: Survival in a Complex Interconnected World

GideonF21 Nov 2022 14:35 UTC
84 points
67 comments21 min readEA link

[Question] How Much Does New Research Inform Us About Existential Climate Risk?

zdgroff22 Jul 2020 23:47 UTC
63 points
5 comments1 min readEA link

The 25 researchers who have published the largest number of academic articles on existential risk

FJehn12 Aug 2023 8:57 UTC
34 points
21 comments4 min readEA link
(existentialcrunch.substack.com)

Diversity In Existential Risk Studies Survey: SJ Beard

GideonF25 Nov 2022 16:29 UTC
2 points
0 comments1 min readEA link

“Governability-By-Design”: Ponderings on Why We Haven’t Died From Nuclear Catastrophe (And What We Can Learn From This)

C.K.19 Aug 2025 18:20 UTC
5 points
2 comments6 min readEA link
(proteinstoparadigms.substack.com)

How much food is there?

FJehn2 Sep 2024 6:29 UTC
40 points
3 comments5 min readEA link
(existentialcrunch.substack.com)

The Odyssean Process

Odyssean Institute24 Nov 2023 13:48 UTC
25 points
6 comments1 min readEA link
(www.odysseaninstitute.org)

Vasili Arkhipov (Petrov’s undersea counterpart) celebrated in an animated short

Dave Cortright 🔸3 Jun 2025 5:41 UTC
7 points
0 comments1 min readEA link
(www.instagram.com)

The Choice Transition

Owen Cotton-Barratt18 Nov 2024 12:32 UTC
49 points
1 comment15 min readEA link
(strangecities.substack.com)

Existential risk pessimism and the time of perils

David Thorstad12 Aug 2022 14:42 UTC
182 points
67 comments20 min readEA link

ALTER Israel—Mid-year 2022 Update

Davidmanheim12 Jun 2022 9:22 UTC
63 points
0 comments2 min readEA link

[Question] Seeking suggested readings & videos for a new course on ‘AI and Psychology’

Geoffrey Miller20 May 2024 17:45 UTC
32 points
8 comments1 min readEA link

Enlightenment Values in a Vulnerable World

Maxwell Tabarrok18 Jul 2022 11:54 UTC
66 points
18 comments31 min readEA link

Value lock-in is happening *now*

Isaac King15 Oct 2024 1:40 UTC
12 points
17 comments4 min readEA link

Announcing the Q1 2025 Long-Term Future Fund grant round

Linch20 Dec 2024 2:17 UTC
53 points
12 comments2 min readEA link

The Root Cause

EuanMcLean17 Jun 2025 7:46 UTC
79 points
17 comments17 min readEA link

Can a pandemic cause human extinction? Possibly, at least on priors

Vasco Grilo🔸15 Jul 2024 17:07 UTC
29 points
4 comments6 min readEA link

Improving disaster shelters to increase the chances of recovery from a global catastrophe

Nick_Beckstead19 Feb 2014 22:17 UTC
24 points
5 comments26 min readEA link

Giving Now vs. Later for Existential Risk: An Initial Approach

MichaelDickens29 Aug 2020 1:04 UTC
14 points
2 comments28 min readEA link

Supervolcanoes tail risk has been exaggerated?

Vasco Grilo🔸6 Mar 2024 8:38 UTC
46 points
9 comments8 min readEA link
(journals.ametsoc.org)

In favour of exploring nagging doubts about x-risk

Owen Cotton-Barratt25 Jun 2024 23:52 UTC
90 points
15 comments2 min readEA link

Existential Risks Convention: possibilities to act

Manfred Kohler17 Oct 2024 17:35 UTC
1 point
0 comments2 min readEA link

‘Existential Risk and Growth’ Deep Dive #1 - Summary of the Paper

AHT21 Jun 2020 9:22 UTC
64 points
8 comments9 min readEA link

Is the Far Future Irrelevant for Moral Decision-Making?

Tristan D1 Oct 2024 7:42 UTC
35 points
31 comments2 min readEA link
(www.sciencedirect.com)

Crucial questions for longtermists

MichaelA🔸29 Jul 2020 9:39 UTC
104 points
17 comments19 min readEA link

The universal Anthropocene or things we can learn from exo-civilisations, even if we never meet any

FJehn26 Apr 2022 12:06 UTC
11 points
0 comments8 min readEA link

Interactively Visualizing X-Risk

Conor Barnes 🔶29 Jul 2022 16:43 UTC
52 points
27 comments1 min readEA link

Kevin Esvelt: Mitigating catastrophic biorisks

EA Global3 Sep 2020 18:11 UTC
32 points
0 comments22 min readEA link
(www.youtube.com)

Reducing x-risk might be actively harmful

MountainPath18 Nov 2024 14:18 UTC
22 points
9 comments1 min readEA link

Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4)

MichaelA🔸1 May 2023 15:04 UTC
35 points
0 comments11 min readEA link
(docs.google.com)

Information security careers for GCR reduction

ClaireZabel20 Jun 2019 23:56 UTC
187 points
35 comments8 min readEA link

My personal cruxes for focusing on existential risks / longtermism / anything other than just video games

MichaelA🔸13 Apr 2021 5:50 UTC
55 points
28 comments3 min readEA link

Proposing the Conditional AI Safety Treaty (linkpost TIME)

Otto15 Nov 2024 13:56 UTC
12 points
6 comments3 min readEA link
(time.com)

Can a terrorist attack cause human extinction? Not on priors

Vasco Grilo🔸2 Dec 2023 8:20 UTC
43 points
9 comments15 min readEA link

Prior probability of this being the most important century

Vasco Grilo🔸15 Jul 2023 7:18 UTC
8 points
2 comments2 min readEA link

2024: a year of consolidation for ORCG

JorgeTorresC18 Dec 2024 17:47 UTC
33 points
0 comments7 min readEA link
(www.orcg.info)

Research project idea: How should EAs react to funders pulling out of the nuclear risk space?

MichaelA🔸15 Apr 2023 14:37 UTC
12 points
0 comments3 min readEA link

Some global catastrophic risk estimates

Tamay10 Feb 2021 19:32 UTC
106 points
15 comments1 min readEA link

Eight high-level uncertainties about global catastrophic and existential risk

SiebeRozendal28 Nov 2019 14:47 UTC
85 points
9 comments5 min readEA link

Can a war cause human extinction? Once again, not on priors

Vasco Grilo🔸25 Jan 2024 7:56 UTC
67 points
29 comments18 min readEA link

Book Review: The Precipice

Aaron Gertler 🔸9 Apr 2020 21:21 UTC
39 points
0 comments17 min readEA link
(slatestarcodex.com)

The Governance Problem and the “Pretty Good” X-Risk

Zach Stein-Perlman28 Aug 2021 20:00 UTC
23 points
4 comments11 min readEA link

Pausing for what?

MountainPath21 Oct 2024 12:18 UTC
6 points
1 comment1 min readEA link

Apply to the Cavendish Labs Fellowship (by 4/15)

Derik K3 Apr 2023 23:06 UTC
35 points
2 comments1 min readEA link

AI Governance: Opportunity and Theory of Impact

Allan Dafoe17 Sep 2020 6:30 UTC
265 points
20 comments12 min readEA link

[Question] Nuclear safety/security: Why doesn’t EA prioritize it more?

Rockwell30 Aug 2023 21:43 UTC
34 points
20 comments1 min readEA link

AGI Catastrophe and Takeover: Some Reference Class-Based Priors

zdgroff24 May 2023 19:14 UTC
95 points
10 comments6 min readEA link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal17 May 2023 11:58 UTC
43 points
3 comments15 min readEA link

Draft report on existential risk from power-seeking AI

Joe_Carlsmith28 Apr 2021 21:41 UTC
88 points
34 comments1 min readEA link

Introducing: the Global Volcano Risk Alliance charity & Linkpost: ‘When sleeping volcanoes wake’ (AEON)

Mike Cassidy 🔸20 Oct 2025 15:44 UTC
78 points
3 comments4 min readEA link

The timing of labour aimed at reducing existential risk

Toby_Ord24 Jul 2014 4:08 UTC
21 points
7 comments7 min readEA link

How much should governments pay to prevent catastrophes? Longtermism’s limited role

Elliott Thornley (EJT)19 Mar 2023 16:50 UTC
258 points
35 comments35 min readEA link
(philpapers.org)

X-risk Mitigation Does Actually Require Longtermism

𝕮𝖎𝖓𝖊𝖗𝖆13 Nov 2022 19:40 UTC
35 points
6 comments1 min readEA link

A Landscape Analysis of Institutional Improvement Opportunities

IanDavidMoss21 Mar 2022 0:15 UTC
97 points
25 comments30 min readEA link

Reading Group Launch: Introduction to Nuclear Issues, March-April 2023

Isabel3 Feb 2023 14:55 UTC
11 points
2 comments3 min readEA link

Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk) - Reflective altruism

Eevee🔹3 Jul 2023 6:33 UTC
74 points
6 comments6 min readEA link
(ineffectivealtruismblog.com)

The option value argument doesn’t work when it’s most needed

Winston24 Oct 2023 19:40 UTC
138 points
7 comments6 min readEA link

What new x- or s-risk fieldbuilding organisations would you like to see? An EOI form. (FBB #3)

gergo17 Feb 2025 12:37 UTC
32 points
3 comments2 min readEA link

Announcing The Most Important Century Writing Prize

michel31 Oct 2022 21:37 UTC
48 points
0 comments2 min readEA link

The Parable of the Boy Who Cried 5% Chance of Wolf

Kat Woods 🔶 ⏸️15 Aug 2022 14:22 UTC
80 points
8 comments2 min readEA link

Early-warning Forecasting Center: What it is, and why it’d be cool

Linch14 Mar 2022 19:20 UTC
62 points
8 comments11 min readEA link

A non-alarmist model of nuclear winter

Stan Pinsent15 Jul 2024 10:00 UTC
22 points
6 comments4 min readEA link

Nuclear risk research ideas: Summary & introduction

MichaelA🔸8 Apr 2022 11:17 UTC
103 points
4 comments7 min readEA link

[Question] Where should I give to help prevent nuclear war?

Luke Eure19 Nov 2023 5:05 UTC
20 points
10 comments1 min readEA link

Progress studies vs. longtermist EA: some differences

Max_Daniel31 May 2021 21:35 UTC
84 points
27 comments3 min readEA link

Mistakes in the moral mathematics of existential risk (Part 2: Ignoring background risk) - Reflective altruism

Eevee🔹3 Jul 2023 6:34 UTC
84 points
7 comments6 min readEA link
(ineffectivealtruismblog.com)

Differential knowledge interconnection

Roman Leventov12 Oct 2024 12:52 UTC
3 points
1 comment7 min readEA link

Two tools for rethinking existential risk

Arepo5 Apr 2024 21:25 UTC
82 points
14 comments25 min readEA link

“Disappointing Futures” Might Be As Important As Existential Risks

MichaelDickens3 Sep 2020 1:15 UTC
96 points
18 comments25 min readEA link

Rethink’s CURVE Sequence—The Good and the Gaps

JackM28 Nov 2023 1:06 UTC
97 points
7 comments10 min readEA link

Existential Risk Modelling with Continuous-Time Markov Chains

Radical Empath Ismam23 Jan 2023 20:32 UTC
87 points
9 comments12 min readEA link

Existential Health Care Ethics: Call for Papers

Devin M. Kellis25 Sep 2024 12:34 UTC
5 points
0 comments1 min readEA link

Experimental longtermism: theory needs data

Jan_Kulveit15 Mar 2022 10:05 UTC
186 points
9 comments4 min readEA link

AMA: Christian Ruhl (senior global catastrophic risk researcher at Founders Pledge)

Lizka26 Sep 2023 9:50 UTC
68 points
28 comments1 min readEA link

Existential Risk and Economic Growth

leopold3 Sep 2019 13:23 UTC
112 points
31 comments1 min readEA link

Apply to join SHELTER Weekend this August

Joel Becker15 Jun 2022 14:21 UTC
108 points
19 comments2 min readEA link

Two important recent AI Talks- Gebru and Lazar

GideonF6 Mar 2023 1:30 UTC
−7 points
5 comments1 min readEA link

[Question] Concrete, existing examples of high-impact risks from AI?

freedomandutility15 Apr 2023 22:19 UTC
9 points
1 comment1 min readEA link

ALLFED’s 2024 Highlights

JuanGarcia18 Nov 2024 11:34 UTC
44 points
0 comments22 min readEA link

2021 ALLFED Highlights

Ross_Tieman17 Nov 2021 15:24 UTC
45 points
1 comment16 min readEA link

Don’t Let Other Global Catastrophic Risks Fall Behind: Support ORCG in 2024

JorgeTorresC11 Nov 2024 18:27 UTC
48 points
1 comment4 min readEA link

New US Senate Bill on X-Risk Mitigation [Linkpost]

Evan R. Murphy4 Jul 2022 1:28 UTC
22 points
12 comments1 min readEA link
(www.hsgac.senate.gov)

Summary of “The Precipice” (3 of 4): Playing Russian roulette with the future

rileyharris21 Aug 2023 7:55 UTC
4 points
0 comments1 min readEA link
(www.millionyearview.com)

Nuclear war is unlikely to cause human extinction

Jeffrey Ladish7 Nov 2020 5:39 UTC
61 points
27 comments11 min readEA link

[Linkpost] OpenAI leaders call for regulation of “superintelligence” to reduce existential risk.

Lowe Lundin25 May 2023 14:14 UTC
5 points
0 comments1 min readEA link

Forecasting Thread: Existential Risk

amandango22 Sep 2020 20:51 UTC
24 points
4 comments2 min readEA link
(www.lesswrong.com)

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹3 Apr 2023 2:11 UTC
12 points
0 comments4 min readEA link

Podcast: Interview series featuring Dr. Peter Park

Jacob-Haimes26 Mar 2024 0:35 UTC
1 point
0 comments2 min readEA link
(into-ai-safety.github.io)

‘Are We Doomed?’ Memos

Miranda_Zhang19 May 2021 13:51 UTC
27 points
0 comments15 min readEA link

A Biosecurity and Biorisk Reading+ List

Tessa A 🔸14 Mar 2021 2:30 UTC
135 points
13 comments12 min readEA link

Great Power Conflict

Zach Stein-Perlman15 Sep 2021 15:00 UTC
11 points
7 comments4 min readEA link

Edge of Existence (2022)

Hugo Wong23 Apr 2024 18:39 UTC
1 point
0 comments1 min readEA link
(www.documentaryarea.com)

SecureBio—Notes from SoGive

SoGive6 May 2024 21:15 UTC
4 points
3 comments3 min readEA link

Tort Law Can Play an Important Role in Mitigating AI Risk

Gabriel Weil12 Feb 2024 17:11 UTC
104 points
6 comments5 min readEA link

Tom Moynihan on why prior generations missed some of the biggest priorities of all

80000_Hours29 Jul 2021 16:38 UTC
20 points
0 comments156 min readEA link

[Question] Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention?

Chris Leong1 Jul 2020 23:32 UTC
26 points
6 comments1 min readEA link

[Link post] How plausible are AI Takeover scenarios?

SammyDMartin27 Sep 2021 13:03 UTC
26 points
0 comments1 min readEA link

Introducing The Nonlinear Fund: AI Safety research, incubation, and funding

Kat Woods 🔶 ⏸️18 Mar 2021 14:07 UTC
71 points
32 comments5 min readEA link

The Rethink Priorities Existential Security Team’s Strategy for 2023

Ben Snodin8 May 2023 8:08 UTC
92 points
3 comments16 min readEA link

A pseudo mathematical formulation of direct work choice between two x-risks

Joseph Bloom11 Aug 2022 0:28 UTC
7 points
0 comments4 min readEA link

9/26 is Petrov Day

Lizka25 Sep 2022 23:14 UTC
85 points
10 comments2 min readEA link
(www.lesswrong.com)

Which World Gets Saved

trammell9 Nov 2018 18:08 UTC
157 points
27 comments3 min readEA link

Questioning the Value of Extinction Risk Reduction

Red Team 87 Jul 2022 4:44 UTC
61 points
9 comments27 min readEA link

Long-Term Future Fund: April 2019 grant recommendations

Habryka [Deactivated]23 Apr 2019 7:00 UTC
142 points
242 comments47 min readEA link

Good news on climate change

John G. Halstead28 Oct 2021 14:04 UTC
236 points
34 comments12 min readEA link

Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?

Luisa_Rodriguez6 Dec 2019 10:38 UTC
100 points
5 comments32 min readEA link

Disinformation as a GCR Threat Multiplier and Evidence Based Response

Ari9624 Jan 2024 11:19 UTC
2 points
0 comments8 min readEA link

Reasons to have hope

Jordan Pieters 🔸20 Apr 2023 10:19 UTC
53 points
4 comments1 min readEA link

Nelson Mandela’s organization, The Elders, backing x risk prevention and longtermism

krohmal51 Feb 2023 6:40 UTC
179 points
4 comments1 min readEA link
(theelders.org)

Announcing New Beginner-friendly Book on AI Safety and Risk

Darren McKee25 Nov 2023 15:57 UTC
117 points
9 comments1 min readEA link

Announcing AXRP, the AI X-risk Research Podcast

DanielFilan23 Dec 2020 20:10 UTC
32 points
1 comment1 min readEA link

Help me find the crux between EA/XR and Progress Studies

jasoncrawford2 Jun 2021 18:47 UTC
119 points
37 comments3 min readEA link

Announcing “Forecasting Existential Risks: Evidence from a Long-Run Forecasting Tournament”

Forecasting Research Institute10 Jul 2023 17:04 UTC
160 points
33 comments2 min readEA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn ⏸️ 2 May 2023 10:17 UTC
68 points
35 comments13 min readEA link

Cosmic AI safety

Magnus Vinding6 Dec 2024 22:32 UTC
24 points
5 comments6 min readEA link

Counterfactual catastrophes

FJehn20 Nov 2024 19:12 UTC
14 points
1 comment8 min readEA link
(existentialcrunch.substack.com)

Population After a Catastrophe

Stan Pinsent2 Oct 2023 16:06 UTC
33 points
12 comments14 min readEA link

Obstacles to the U.S. for Supporting Verifications in the BWC, and Potential Solutions.

Garrett Ehinger14 Apr 2023 2:48 UTC
27 points
2 comments16 min readEA link

Defending against hypothetical moon life during Apollo 11

eukaryote7 Jan 2024 23:59 UTC
67 points
3 comments32 min readEA link
(eukaryotewritesblog.com)

Quantum, China, & Tech bifurcation; Why it Matters

Elias X. Huber20 Nov 2024 15:28 UTC
5 points
1 comment9 min readEA link

Cosmic’s Mugger : Should we really delay cosmic expansion ?

Lysandre Terrisse30 Jun 2022 6:41 UTC
10 points
1 comment4 min readEA link

[Linkpost] Beware the Squirrel by Verity Harding

Earthling3 Sep 2023 21:04 UTC
1 point
1 comment2 min readEA link
(samf.substack.com)

Which nuclear wars should worry us most?

Luisa_Rodriguez16 Jun 2019 23:31 UTC
103 points
13 comments6 min readEA link

Existential risk x Crypto: An unconference at Zuzalu

Yesh11 Apr 2023 13:31 UTC
6 points
0 comments1 min readEA link

Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research

Remmelt26 Nov 2020 16:39 UTC
11 points
0 comments4 min readEA link

S-Risks: Fates Worse Than Extinction

A.G.G. Liu4 May 2024 15:30 UTC
104 points
9 comments6 min readEA link
(www.lesswrong.com)

Summary: Mistakes in the Moral Mathematics of Existential Risk (David Thorstad)

Noah Varley🔸10 Apr 2024 14:21 UTC
63 points
23 comments4 min readEA link

Nick Bostrom: An Introduction [early draft]

peterhartree31 Jul 2021 17:04 UTC
38 points
0 comments19 min readEA link

Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public

Otto9 Mar 2023 10:40 UTC
97 points
11 comments4 min readEA link

ALLFED 2020 Highlights

AronM19 Nov 2020 22:06 UTC
51 points
5 comments26 min readEA link

The Epistemic Challenge to Longtermism (Tarsney, 2020)

MichaelA🔸4 Apr 2021 3:09 UTC
79 points
27 comments2 min readEA link
(globalprioritiesinstitute.org)

BERI’s 2024 Goals and Predictions

elizabethcooper12 Jan 2024 22:15 UTC
9 points
0 comments1 min readEA link
(existence.org)

Summary of John Halstead’s Book-Length Report on Existential Risks From Climate Change

Bentham's Bulldog25 Jun 2025 15:13 UTC
53 points
11 comments22 min readEA link

Could Ukraine retake Crimea?

Miriam_Hinthorn1 May 2023 1:06 UTC
6 points
3 comments4 min readEA link

Would US and Russian nuclear forces survive a first strike?

Luisa_Rodriguez18 Jun 2019 0:28 UTC
85 points
4 comments24 min readEA link

Bottlenecks and Solutions for the X-Risk Ecosystem

FlorentBerthet8 Oct 2018 12:47 UTC
53 points
12 comments8 min readEA link

Simplify EA Pitches to “Holy Shit, X-Risk”

Neel Nanda11 Feb 2022 1:57 UTC
188 points
82 comments11 min readEA link
(www.neelnanda.io)

Engaging UK Centre-Right Types in Existential Risk

Max_Thilo4 Dec 2023 9:26 UTC
17 points
0 comments1 min readEA link

[Question] What do you make of the doomsday argument?

niklas19 Mar 2021 6:30 UTC
14 points
8 comments1 min readEA link

On Collapse Risk (C-Risk)

Pawntoe42 Jan 2020 5:10 UTC
39 points
10 comments8 min readEA link

Major UN report discusses existential risk and future generations (summary)

finm17 Sep 2021 15:51 UTC
321 points
5 comments12 min readEA link

A New X-Risk Factor: Brain-Computer Interfaces

Jack10 Aug 2020 10:24 UTC
76 points
12 comments42 min readEA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug8 Mar 2022 19:17 UTC
191 points
43 comments10 min readEA link
(skluug.substack.com)

Key points from The Dead Hand, David E. Hoffman

Kit9 Aug 2019 13:59 UTC
71 points
8 comments7 min readEA link

Risks from atomically precise manufacturing—Problem profile

Benjamin Hilton9 Aug 2022 13:41 UTC
53 points
4 comments5 min readEA link
(80000hours.org)

Mitigating Ethical Concerns and Risks in the US Approach to Autonomous Weapons Systems through Effective Altruism

Vee11 Jun 2023 10:37 UTC
5 points
2 comments4 min readEA link

Call for Cruxes by Rhyme, a Longtermist History Consultancy

Lara_TH1 Mar 2023 10:20 UTC
147 points
6 comments3 min readEA link

X-Risk Researchers Survey

NitaSangha24 Apr 2023 8:06 UTC
12 points
1 comment1 min readEA link

Animal Rights, The Singularity, and Astronomical Suffering

sapphire20 Aug 2020 20:23 UTC
52 points
0 comments3 min readEA link

Assessing Climate Change’s Contribution to Global Catastrophic Risk

HaydnBelfield19 Feb 2021 16:26 UTC
27 points
8 comments37 min readEA link

Video and Transcript of Presentation on Existential Risk from Power-Seeking AI

Joe_Carlsmith8 May 2022 3:52 UTC
97 points
7 comments30 min readEA link

Technical AGI safety research outside AI

richard_ngo18 Oct 2019 15:02 UTC
91 points
5 comments3 min readEA link

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Luisa_Rodriguez24 Dec 2020 22:10 UTC
296 points
37 comments50 min readEA link

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

HaydnBelfield19 May 2022 8:42 UTC
494 points
45 comments18 min readEA link

Chaining the evil genie: why “outer” AI safety is probably easy

titotal30 Aug 2022 13:55 UTC
40 points
12 comments10 min readEA link

Optimal Allocation of Spending on Existential Risk Reduction over an Infinite Time Horizon (in a too simplistic model)

Yassin Alaya12 Aug 2021 20:14 UTC
13 points
4 comments1 min readEA link

Long-Term Future Fund AMA

Helen19 Dec 2018 4:10 UTC
39 points
30 comments1 min readEA link

Trade collapse: Cascading risks in our global supply chains

FJehn25 Apr 2024 7:02 UTC
9 points
1 comment8 min readEA link
(existentialcrunch.substack.com)

A selection of cross-cutting results from the XPT

Forecasting Research Institute26 Sep 2023 23:50 UTC
23 points
1 comment9 min readEA link

Dr Altman or: How I Learned to Stop Worrying and Love the Killer AI

Barak Gila11 Mar 2024 5:01 UTC
−7 points
0 comments2 min readEA link

Not all x-risk is the same: implications of non-human-descendants

Nikola18 Dec 2021 21:22 UTC
38 points
4 comments5 min readEA link

A Double Feature on The Extropians

Maxwell Tabarrok3 Jun 2023 18:29 UTC
47 points
3 comments1 min readEA link

How the Ukraine conflict may influence spending on longtermist projects

Frank_R16 Mar 2022 8:15 UTC
23 points
3 comments2 min readEA link

Nuclear Fine-Tuning: How Many Worlds Have Been Destroyed?

Ember17 Aug 2022 13:13 UTC
18 points
28 comments23 min readEA link

The end of the Bronze Age as an example of a sudden collapse of civilization

FJehn28 Oct 2020 12:55 UTC
54 points
7 comments7 min readEA link

2023 Stanford Existential Risks Conference

elizabethcooper24 Feb 2023 17:49 UTC
29 points
5 comments1 min readEA link

Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow

Dr Dan Epstein17 May 2023 8:56 UTC
48 points
7 comments5 min readEA link

What If 99% of Humanity Vanished? (A Happier World video)

Jeroen Willems🔸16 Feb 2023 17:10 UTC
16 points
1 comment3 min readEA link

Most* small probabilities aren’t pascalian

Gregory Lewis🔸7 Aug 2022 16:17 UTC
224 points
20 comments6 min readEA link

Climate anomalies and societal collapse

FJehn8 Feb 2024 9:49 UTC
13 points
6 comments10 min readEA link
(existentialcrunch.substack.com)

Donation recommendations for xrisk + ai safety

vincentweisser6 Feb 2023 21:25 UTC
17 points
11 comments1 min readEA link

The value of x-risk reduction

Nathan_Barnard21 May 2022 19:40 UTC
19 points
10 comments4 min readEA link

AI Safety Needs Great Engineers

Andy Jones23 Nov 2021 21:03 UTC
98 points
14 comments4 min readEA link

Modelling Great Power conflict as an existential risk factor

poppinfresh3 Feb 2022 11:41 UTC
122 points
22 comments19 min readEA link

‘The Precipice’ Book Review

Matt Goodman27 Jul 2020 22:10 UTC
14 points
1 comment4 min readEA link

How I learned to stop worrying and love X-risk

Monero11 Mar 2024 3:58 UTC
11 points
1 comment1 min readEA link

How will a nuclear war end?

Kinoshita Yoshikazu (pseudonym)23 Jun 2023 10:50 UTC
14 points
4 comments2 min readEA link

Climate Change & Longtermism: new book-length report

John G. Halstead26 Aug 2022 9:13 UTC
319 points
163 comments13 min readEA link

Long-Term Future Fund: August 2019 grant recommendations

Habryka [Deactivated]3 Oct 2019 18:46 UTC
79 points
70 comments64 min readEA link

Review: What We Owe The Future

Kelsey Piper21 Nov 2022 21:41 UTC
165 points
3 comments1 min readEA link
(asteriskmag.com)

Location Modelling for Post-Nuclear Refuge Bunkers

Bleddyn Mottershead14 Feb 2024 7:09 UTC
10 points
2 comments15 min readEA link

Introducing the Existential Risks Introductory Course (ERIC)

nandini19 Aug 2022 15:57 UTC
57 points
14 comments7 min readEA link

Guarding Against Pandemics

Guarding Against Pandemics18 Sep 2021 11:15 UTC
72 points
15 comments4 min readEA link

[Question] What would you ask a policymaker about existential risks?

James Nicholas Bryant6 Jul 2021 23:53 UTC
24 points
2 comments1 min readEA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope21 Mar 2023 1:23 UTC
166 points
21 comments39 min readEA link

The best places to weather global catastrophes

FJehn4 Mar 2024 7:57 UTC
31 points
10 comments7 min readEA link
(existentialcrunch.substack.com)

The case for AGI by 2030

Benjamin_Todd6 Apr 2025 12:26 UTC
96 points
33 comments31 min readEA link
(80000hours.org)

Destabilization of the United States: The top X-factor EA neglects?

Yelnats T.J.15 Jul 2024 2:54 UTC
189 points
29 comments39 min readEA link

Long list of AI questions

NunoSempere6 Dec 2023 11:12 UTC
124 points
16 comments86 min readEA link

BERI is seeking new trial collaborators

elizabethcooper14 Jul 2023 17:08 UTC
16 points
0 comments1 min readEA link

Bioinfohazards

Fin17 Sep 2019 2:41 UTC
89 points
8 comments18 min readEA link

Bear Braumoeller has passed away

poppinfresh5 May 2023 14:06 UTC
153 points
4 comments1 min readEA link

Collective intelligence as infrastructure for reducing broad existential risks

vickyCYang2 Aug 2021 6:00 UTC
30 points
6 comments11 min readEA link

21 Recent Publications on Existential Risk (Sep 2019 update)

HaydnBelfield5 Nov 2019 14:26 UTC
31 points
4 comments13 min readEA link

[Question] What am I missing re. open-source LLM’s?

another-anon-do-gooder4 Dec 2023 4:48 UTC
1 point
2 comments1 min readEA link

Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive)

Davidmanheim26 Nov 2024 8:00 UTC
9 points
1 comment6 min readEA link

How many people would be killed as a direct result of a US-Russia nuclear exchange?

Luisa_Rodriguez30 Jun 2019 3:00 UTC
97 points
18 comments52 min readEA link

Why policymakers should beware claims of new “arms races” (Bulletin of the Atomic Scientists)

christian.r14 Jul 2022 13:38 UTC
55 points
1 comment1 min readEA link
(thebulletin.org)

AMA: Toby Ord, author of “The Precipice” and co-founder of the EA movement

Toby_Ord17 Mar 2020 2:39 UTC
68 points
82 comments1 min readEA link

Applications open! UChicago Existential Risk Laboratory’s 2023 Summer Research Fellowship

ZacharyRudolph1 Apr 2023 20:55 UTC
39 points
1 comment1 min readEA link

Relocation triggers

Denkenberger🔸14 Jun 2025 6:36 UTC
50 points
1 comment1 min readEA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe14 Oct 2023 21:18 UTC
28 points
4 comments9 min readEA link

Democratising Risk—or how EA deals with critics

CarlaZoeC28 Dec 2021 15:05 UTC
268 points
311 comments4 min readEA link

[Question] Projects tackling nuclear risk?

Sanjay29 May 2020 22:41 UTC
29 points
3 comments1 min readEA link

Ambiguity aversion and reduction of X-risks: A modelling situation

Benedikt Schmidt13 Sep 2021 7:16 UTC
29 points
6 comments5 min readEA link

US public attitudes towards artificial intelligence (Wave 2 of Pulse)

Jamie E12 Sep 2025 14:15 UTC
46 points
0 comments5 min readEA link

Summary: Tiny Probabilities and the Value of the Far Future (Petra Kosonen)

Noah Varley🔸17 Feb 2024 14:11 UTC
7 points
1 comment4 min readEA link

[Question] (Where) Does animal x-risk fit?

Stephen Robcraft21 Dec 2023 11:04 UTC
21 points
8 comments1 min readEA link

Sentience Institute 2021 End of Year Summary

Ali26 Nov 2021 14:40 UTC
66 points
5 comments6 min readEA link
(www.sentienceinstitute.org)

Open Phil is hiring a leader for all our Global Catastrophic Risks work

Alexander_Berger15 Nov 2024 20:18 UTC
86 points
2 comments1 min readEA link

Introduction to Space and Existential Risk

JordanStone23 Sep 2023 19:56 UTC
26 points
0 comments7 min readEA link

Critical Review of ‘The Precipice’: A Reassessment of the Risks of AI and Pandemics

James Fodor11 May 2020 11:11 UTC
111 points
32 comments26 min readEA link

Famine’s Role in Societal Collapse

FJehn5 Oct 2023 6:19 UTC
14 points
1 comment6 min readEA link
(existentialcrunch.substack.com)

EA needs more humor

SWK1 Dec 2022 5:30 UTC
35 points
14 comments5 min readEA link

Reading the ethicists 2: Hunting for AI alignment papers

Charlie Steiner6 Jun 2022 15:53 UTC
11 points
0 comments1 min readEA link
(www.lesswrong.com)

A Gentle Introduction to Risk Frameworks Beyond Forecasting

pending_survival11 Apr 2024 9:15 UTC
83 points
4 comments27 min readEA link

Petrov Day 2024: What is a Petrovian virtue?

Toby Tremlett🔹26 Sep 2024 7:07 UTC
73 points
1 comment3 min readEA link

New Open Philanthropy Grantmaking Program: Forecasting

Coefficient Giving19 Feb 2024 23:27 UTC
92 points
58 comments1 min readEA link
(www.openphilanthropy.org)

Existential Risk: More to explore

EA Handbook1 Jan 2021 10:15 UTC
2 points
0 comments1 min readEA link

My current thoughts on MIRI’s “highly reliable agent design” work

Daniel_Dewey7 Jul 2017 1:17 UTC
60 points
59 comments19 min readEA link

[Question] What should the EA/AI safety community change, in response to Sam Altman’s revealed priorities?

SiebeRozendal8 Mar 2024 12:35 UTC
30 points
16 comments1 min readEA link

The Rationale-Shaped Hole At The Heart Of Forecasting

dschwarz2 Apr 2024 15:51 UTC
161 points
14 comments11 min readEA link

Fruit-picking as an existential risk

Arepo19 Oct 2025 14:22 UTC
42 points
13 comments10 min readEA link

Announcing the EA Archive

Aaron Bergman6 Jul 2023 13:49 UTC
70 points
21 comments2 min readEA link

Centre for the Study of Existential Risk Six Month Report: November 2018 - April 2019

HaydnBelfield1 May 2019 15:34 UTC
10 points
16 comments15 min readEA link

New infographic based on “The Precipice”. any feedback?

michael.andregg14 Jan 2021 7:29 UTC
51 points
4 comments1 min readEA link

[Question] How can we secure more research positions at our universities for x-risk researchers?

Neil Crawford6 Sep 2022 14:41 UTC
3 points
2 comments1 min readEA link

More than Earth Warriors: The Diverse Roles of Geoscientists in Effective Altruism

Christopher Chan 🔸31 Aug 2023 6:30 UTC
56 points
5 comments16 min readEA link

Blog update: Reflective altruism

David Thorstad15 Dec 2024 16:05 UTC
106 points
12 comments11 min readEA link

Saving lives near the precipice

MikhailSamin29 Jul 2022 15:08 UTC
18 points
10 comments3 min readEA link

Report: Food Security in Argentina in the event of an Abrupt Sunlight Reduction Scenario (ASRS)

JorgeTorresC27 Apr 2023 21:00 UTC
66 points
3 comments3 min readEA link
(riesgoscatastroficosglobales.com)

Research project idea: Impact assessment of nuclear-risk-related orgs, programmes, movements, etc.

MichaelA🔸15 Apr 2023 14:39 UTC
13 points
0 comments3 min readEA link

Civilization Re-Emerging After a Catastrophic Collapse

MichaelA🔸27 Jun 2020 3:22 UTC
32 points
18 comments2 min readEA link
(www.youtube.com)

The End of OpenAI’s Nonprofit Era

Garrison29 Oct 2025 16:28 UTC
32 points
3 comments9 min readEA link
(www.obsolete.pub)

Longtermism Fund: August 2023 Grants Report

Michael Townsend🔸20 Aug 2023 5:34 UTC
81 points
3 comments5 min readEA link

Tyler Cowen on effective altruism (December 2022)

peterhartree13 Jan 2023 9:39 UTC
76 points
11 comments20 min readEA link
(youtu.be)

Rethink Priorities: Seeking Expressions of Interest for Special Projects Next Year

kierangreig🔸29 Nov 2023 13:44 UTC
57 points
0 comments5 min readEA link

Database of orgs relevant to longtermist/x-risk work

MichaelA🔸19 Nov 2021 8:50 UTC
104 points
65 comments4 min readEA link

[Question] What would it look like for AIS to no longer be neglected?

Rockwell16 Jun 2023 15:59 UTC
100 points
14 comments1 min readEA link

The US-China Relationship and Catastrophic Risk (EAG Boston transcript)

EA Global9 Jul 2024 13:50 UTC
30 points
1 comment19 min readEA link

Warning Shots Probably Wouldn’t Change The Picture Much

So8res6 Oct 2022 5:15 UTC
95 points
20 comments2 min readEA link

Centre for the Study of Existential Risk: Six Month Report May-October 2018

HaydnBelfield30 Nov 2018 20:32 UTC
26 points
2 comments17 min readEA link

Case study: Reducing catastrophic risk from inside the US bureaucracy

Tom_Green2 Jun 2022 4:07 UTC
41 points
2 comments16 min readEA link

A Simple Model of AGI Deployment Risk

djbinder9 Jul 2021 9:44 UTC
30 points
0 comments5 min readEA link

Geoengineering to reduce global catastrophic risk?

Niklas Lehmann29 May 2022 15:50 UTC
7 points
3 comments10 min readEA link

Moral pluralism and longtermism | Sunyshore

Eevee🔹17 Apr 2021 0:14 UTC
26 points
0 comments5 min readEA link
(sunyshore.substack.com)

Prioritization Research for Advancing Wisdom and Intelligence

Ozzie Gooen18 Oct 2021 22:22 UTC
88 points
34 comments5 min readEA link

Introducing the Simon Institute for Longterm Governance (SI)

maxime29 Mar 2021 18:10 UTC
116 points
23 comments11 min readEA link

Why some people disagree with the CAIS statement on AI

David_Moss15 Aug 2023 13:39 UTC
144 points
15 comments16 min readEA link

A book review for “Animal Weapons” and cross-applying the lessons to x-risk.

Habeeb Abdul🔹30 May 2023 8:24 UTC
6 points
0 comments1 min readEA link
(www.super-linear.org)

Announcement: You can now listen to the “AI Safety Fundamentals” courses

peterhartree9 Jun 2023 16:32 UTC
101 points
8 comments1 min readEA link

Concerning the Recent 2019-Novel Coronavirus Outbreak

Matthew_Barnett27 Jan 2020 5:47 UTC
146 points
143 comments3 min readEA link

Research project idea: Technological developments that could increase risks from nuclear weapons

MichaelA🔸15 Apr 2023 14:28 UTC
17 points
0 comments7 min readEA link

Case studies of self-governance to reduce technology risk

jia6 Apr 2021 8:49 UTC
55 points
6 comments7 min readEA link

Seth Baum: Reconciling international security

EA Global8 Jun 2018 7:15 UTC
9 points
0 comments15 min readEA link
(www.youtube.com)

Nuclear risk, its potential long-term impacts, & doing research on that: An introductory talk

MichaelA🔸10 Apr 2023 15:26 UTC
50 points
2 comments3 min readEA link

Preventing human extinction

Peter Singer19 Aug 2013 21:07 UTC
25 points
6 comments5 min readEA link

British public perception of existential risks

Jamie E25 Oct 2024 14:37 UTC
58 points
8 comments10 min readEA link

APPG on Future Generations impact report – Raising the profile of future generation in the UK Parliament

weeatquince12 Aug 2020 14:24 UTC
87 points
2 comments17 min readEA link

We should say more than “x-risk is high”

OllieBase16 Dec 2022 22:09 UTC
52 points
12 comments4 min readEA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸8 Jul 2023 8:40 UTC
8 points
0 comments2 min readEA link

FLI FAQ on the rejected grant proposal controversy

Tegmark19 Jan 2023 17:31 UTC
331 points
132 comments1 min readEA link

Hardening against AI takeover is difficult, but we should try

Otto5 Nov 2025 16:29 UTC
8 points
1 comment5 min readEA link
(www.existentialriskobservatory.org)

What is malevolence? On the nature, measurement, and distribution of dark traits

David_Althaus23 Oct 2024 8:41 UTC
107 points
6 comments52 min readEA link

Merger of DeepMind and Google Brain

Greg_Colbourn ⏸️ 20 Apr 2023 20:16 UTC
11 points
12 comments1 min readEA link
(blog.google)

Russian x-risks newsletter, fall 2019

avturchin3 Dec 2019 17:01 UTC
27 points
2 comments3 min readEA link

Ten arguments that AI is an existential risk

Katja_Grace14 Aug 2024 21:51 UTC
30 points
0 comments7 min readEA link

Introducing the new Riesgos Catastróficos Globales team

Jaime Sevilla3 Mar 2023 23:04 UTC
74 points
3 comments5 min readEA link
(riesgoscatastroficosglobales.com)

Understanding problems with U.S.-China hotlines

christian.r24 Jun 2024 13:39 UTC
11 points
0 comments1 min readEA link
(thebulletin.org)

Technology’s Double Edge: Reassessing Longtermist Priorities in an Age of Exponential Innovation

Ray Raven13 Sep 2025 18:32 UTC
5 points
2 comments4 min readEA link

Why I think building EA is important for making AI go well

Arden Koehler25 Sep 2025 3:17 UTC
223 points
14 comments4 min readEA link

Addressing Global Poverty as a Strategy to Improve the Long-Term Future

bshumway7 Aug 2020 6:27 UTC
40 points
18 comments16 min readEA link

Variance of the annual conflict and epidemic/pandemic deaths as a fraction of the global population

Vasco Grilo🔸10 Sep 2024 17:02 UTC
16 points
0 comments2 min readEA link

Personal thoughts on careers in AI policy and strategy

carrickflynn27 Sep 2017 16:52 UTC
56 points
28 comments18 min readEA link

The Case for Animal-Inclusive Longtermism

Eevee🔹17 Feb 2024 0:07 UTC
68 points
7 comments30 min readEA link
(brill.com)

Global Development → reduced ex-risk/long-termism. (Initial draft/question)

Arno13 Aug 2022 16:29 UTC
3 points
3 comments1 min readEA link

Being at peace with Doom

Johannes C. Mayer9 Apr 2023 15:01 UTC
15 points
7 comments4 min readEA link
(www.lesswrong.com)

Can the AI afford to wait?

Ben Millwood🔸20 Mar 2024 19:45 UTC
48 points
11 comments7 min readEA link

TED talk on Moloch and AI

LivBoeree15 Nov 2023 19:28 UTC
72 points
7 comments1 min readEA link

Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021)

MichaelA🔸1 Jun 2021 8:19 UTC
51 points
3 comments4 min readEA link
(www.nscai.gov)

Research project idea: Neartermist cost-effectiveness analysis of nuclear risk reduction

MichaelA🔸15 Apr 2023 14:46 UTC
12 points
0 comments3 min readEA link

Detecting Genetically Engineered Viruses With Metagenomic Sequencing

Jeff Kaufman 🔸27 Jun 2024 14:01 UTC
207 points
7 comments8 min readEA link
(naobservatory.org)

Research project idea: Climate, agricultural, and famine effects of nuclear conflict

MichaelA🔸15 Apr 2023 14:35 UTC
17 points
2 comments4 min readEA link

Research project idea: Overview of nuclear-risk-related projects and stakeholders

MichaelA🔸15 Apr 2023 14:40 UTC
12 points
0 comments2 min readEA link

Sentinel’s Global Risks Weekly Roundup #11/2025. Trump invokes Alien Enemies Act, Chinese invasion barges deployed in exercise.

NunoSempere17 Mar 2025 19:37 UTC
40 points
0 comments6 min readEA link
(blog.sentinel-team.org)

Project ideas: Epistemics

Lukas Finnveden4 Jan 2024 7:26 UTC
43 points
1 comment17 min readEA link
(www.forethought.org)

What Rethink Priorities General Longtermism Team Did in 2022, and Updates in Light of the Current Situation

Linch14 Dec 2022 13:37 UTC
162 points
9 comments19 min readEA link

9+ weeks of mentored AI safety research in London – Pivotal Research Fellowship

Tobias Häberli12 Nov 2025 15:21 UTC
14 points
0 comments2 min readEA link

PHILANTHROPY AND NUCLEAR RISK REDUCTION

ELN10 Feb 2023 10:48 UTC
22 points
5 comments4 min readEA link

[Question] Is it possible to have a high level of human heterogeneity and low chance of existential risks?

ekka24 May 2022 21:55 UTC
4 points
0 comments1 min readEA link

Should We Prioritize Long-Term Existential Risk?

MichaelDickens20 Aug 2020 2:23 UTC
28 points
17 comments3 min readEA link

Podcast on “AI tools for existential security” — transcript

Lizka21 Apr 2025 19:18 UTC
30 points
1 comment43 min readEA link
(pnc.st)

A case for strategy research: what it is and why we need more of it

SiebeRozendal20 Jun 2019 20:18 UTC
70 points
8 comments20 min readEA link

[Question] What are the best articles/blogs on the psychology of existential risk?

Geoffrey Miller16 Dec 2020 18:05 UTC
24 points
7 comments1 min readEA link

A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]

Sarah Weiler26 Jun 2022 10:59 UTC
57 points
2 comments21 min readEA link

Announcing the 2023 CLR Summer Research Fellowship

stefan.torges17 Mar 2023 12:11 UTC
81 points
0 comments3 min readEA link

Research project idea: How bad would the worst plausible nuclear conflict scenarios be?

MichaelA🔸15 Apr 2023 14:50 UTC
16 points
0 comments3 min readEA link

GDP per capita in 2050

Hauke Hillebrandt6 May 2024 15:14 UTC
130 points
11 comments16 min readEA link
(hauke.substack.com)

An exhaustive list of cosmic threats

JordanStone4 Dec 2023 17:59 UTC
77 points
19 comments7 min readEA link

Marc Lipsitch: Preventing catastrophic risks by mitigating subcatastrophic ones

EA Global2 Jun 2017 8:48 UTC
9 points
0 comments1 min readEA link
(www.youtube.com)

[Question] Strongest real-world examples supporting AI risk claims?

rosehadshar5 Sep 2023 15:11 UTC
52 points
9 comments1 min readEA link

Aspiration-based, non-maximizing AI agent designs

Bob Jacobs7 May 2024 16:13 UTC
12 points
1 comment38 min readEA link

Age-Weighted Voting

William_MacAskill12 Jul 2019 15:21 UTC
74 points
40 comments6 min readEA link

Hauke Hillebrandt: International agreements to spend percentage of GDP on global public goods

EA Global21 Nov 2020 8:12 UTC
9 points
0 comments1 min readEA link
(www.youtube.com)

Discussion Thread: Existential Choices Debate Week

Toby Tremlett🔹14 Mar 2025 17:20 UTC
43 points
176 comments1 min readEA link

Still no strong evidence that LLMs increase bioterrorism risk

freedomandutility2 Nov 2023 21:23 UTC
58 points
9 comments1 min readEA link

Should we aim for flourishing over mere survival? The Better Futures series.

William_MacAskill4 Aug 2025 14:27 UTC
163 points
70 comments5 min readEA link

Can a conflict cause human extinction? Yet again, not on priors

Vasco Grilo🔸19 Jun 2024 16:59 UTC
16 points
2 comments11 min readEA link

What is it like doing AI safety work?

Kat Woods 🔶 ⏸️21 Feb 2023 19:24 UTC
99 points
2 comments10 min readEA link

Should marginal longtermist donations support fundamental or intervention research?

MichaelA🔸30 Nov 2020 1:10 UTC
43 points
4 comments15 min readEA link

Cause Prioritization in Light of Inspirational Disasters

stecas7 Jun 2020 19:52 UTC
2 points
15 comments3 min readEA link

[Draft] The humble cosmologist’s P(doom) paradox

titotal16 Mar 2024 11:13 UTC
39 points
6 comments10 min readEA link

Unjournal wants YOU: to do an evaluation. Piloting “Independent evaluations” – (updated, with kickstarter bounty)

david_reinstein16 Aug 2024 20:26 UTC
17 points
0 comments1 min readEA link
(docs.google.com)

Frozen skills aren’t general intelligence

Yarrow Bouchard 🔸8 Nov 2025 23:27 UTC
10 points
29 comments11 min readEA link

Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever’s Recent Claims

Garrison13 Nov 2024 17:00 UTC
121 points
8 comments8 min readEA link
(garrisonlovely.substack.com)

A note of caution about recent AI risk coverage

Sean_o_h7 Jun 2023 17:05 UTC
284 points
29 comments3 min readEA link

My highly personal skepticism braindump on existential risk from artificial intelligence.

NunoSempere23 Jan 2023 20:08 UTC
438 points
116 comments14 min readEA link
(nunosempere.com)

How we could stumble into AI catastrophe

Holden Karnofsky16 Jan 2023 14:52 UTC
83 points
0 comments31 min readEA link
(www.cold-takes.com)

AMA: Tobias Baumann, Center for Reducing Suffering

Tobias_Baumann6 Sep 2020 10:45 UTC
48 points
45 comments1 min readEA link

Launch of FERSTS Retreat

Theo K17 Jun 2022 11:53 UTC
26 points
0 comments2 min readEA link

CSER Special Issue: ‘Futures of Research in Catastrophic and Existential Risk’

HaydnBelfield2 Oct 2018 17:18 UTC
9 points
1 comment1 min readEA link

AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.

Greg_Colbourn ⏸️ 1 Mar 2022 12:02 UTC
69 points
22 comments2 min readEA link

MIT hiring: Climatic effects of limited nuclear wars and “averting armageddon”

christian.r15 Mar 2024 15:14 UTC
16 points
0 comments2 min readEA link

[Cross­post] Why Un­con­trol­lable AI Looks More Likely Than Ever

Otto8 Mar 2023 15:33 UTC
49 points
6 comments4 min readEA link
(time.com)

AI 2027: What Su­per­in­tel­li­gence Looks Like (Linkpost)

Manuel Allgaier11 Apr 2025 10:31 UTC
51 points
3 comments42 min readEA link
(ai-2027.com)

“Aligned with who?” Re­sults of sur­vey­ing 1,000 US par­ti­ci­pants on AI values

Holly Morgan21 Mar 2023 22:07 UTC
41 points
0 comments2 min readEA link
(www.lesswrong.com)

Does cli­mate change de­serve more at­ten­tion within EA?

Ben17 Apr 2019 6:50 UTC
152 points
65 comments15 min readEA link

De­con­fus­ing Pauses: Long Term Mo­ra­to­rium vs Slow­ing AI

GideonF4 Aug 2024 11:32 UTC
17 points
3 comments5 min readEA link

Par­ti­ci­pate in the Hy­brid Fore­cast­ing-Per­sua­sion Tour­na­ment (on X-risk top­ics)

Jhrosenberg25 Apr 2022 22:13 UTC
53 points
4 comments2 min readEA link

EAGxVir­tual 2020 light­ning talks

EA Global25 Jan 2021 15:32 UTC
13 points
1 comment33 min readEA link
(www.youtube.com)

Im­prov­ing the fu­ture by in­fluenc­ing ac­tors’ benev­olence, in­tel­li­gence, and power

MichaelA🔸20 Jul 2020 10:00 UTC
77 points
15 comments17 min readEA link

Me­diocre AI safety as ex­is­ten­tial risk

technicalities16 Mar 2022 11:50 UTC
52 points
12 comments3 min readEA link

An ap­peal to peo­ple who are smarter than me: please help me clar­ify my think­ing about AI

bethhw5 Aug 2023 16:38 UTC
42 points
21 comments3 min readEA link

kpurens’s Quick takes

kpurens11 Apr 2023 14:10 UTC
9 points
2 comments2 min readEA link

Es­says on longtermism

David Thorstad8 Sep 2025 5:16 UTC
111 points
3 comments4 min readEA link

CEEALAR: 2024 Update

CEEALAR19 Jul 2024 11:14 UTC
116 points
7 comments4 min readEA link

A widely shared AI pro­duc­tivity pa­per was re­tracted, is pos­si­bly fraudulent

titotal19 May 2025 10:18 UTC
34 points
4 comments3 min readEA link

Why mak­ing as­ter­oid deflec­tion tech might be bad

MichaelDello20 May 2020 23:01 UTC
27 points
10 comments6 min readEA link

Case study: Traits of con­trib­u­tors to a sig­nifi­cant policy suc­cess

Tom_Green29 Mar 2024 0:24 UTC
37 points
1 comment38 min readEA link

Will com­pe­ti­tion over ad­vanced AI lead to war?

OscarD🔸16 Sep 2025 2:49 UTC
13 points
0 comments3 min readEA link
(oscardelaney.substack.com)

Hu­man Em­pow­er­ment ver­sus the Longter­mist Im­perium?

Jackson Wagner21 Oct 2025 10:24 UTC
20 points
2 comments21 min readEA link

Na­ture: Nu­clear war be­tween two na­tions could spark global famine

Tyner🔸15 Aug 2022 20:55 UTC
15 points
1 comment1 min readEA link
(www.nature.com)

Is AI Hit­ting a Wall or Mov­ing Faster Than Ever?

Garrison9 Jan 2025 22:18 UTC
35 points
5 comments5 min readEA link
(garrisonlovely.substack.com)

A Sur­vey of the Po­ten­tial Long-term Im­pacts of AI

Sam Clarke18 Jul 2022 9:48 UTC
63 points
2 comments27 min readEA link

Global Pri­ori­ties In­sti­tute: Re­search Agenda

Aaron Gertler 🔸20 Jan 2021 20:09 UTC
22 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

FLI AI Align­ment pod­cast: Evan Hub­inger on In­ner Align­ment, Outer Align­ment, and Pro­pos­als for Build­ing Safe Ad­vanced AI

evhub1 Jul 2020 20:59 UTC
13 points
2 comments1 min readEA link
(futureoflife.org)

Re­search pro­ject idea: In­ter­me­di­ate goals for nu­clear risk reduction

MichaelA🔸15 Apr 2023 14:25 UTC
24 points
0 comments5 min readEA link

Sur­viv­ing Global Catas­tro­phe in Nu­clear Sub­marines as Refuges

turchin5 Apr 2017 8:06 UTC
14 points
4 comments1 min readEA link

In­crease in fu­ture po­ten­tial due to miti­gat­ing food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo🔸28 Mar 2023 7:43 UTC
12 points
2 comments8 min readEA link

.01% Fund—Ideation and Proposal

Linch1 Mar 2022 18:25 UTC
69 points
23 comments5 min readEA link

Re­mem­ber­ing Joseph Rot­blat (born on this day in 1908)

Lizka5 Nov 2024 0:51 UTC
90 points
7 comments9 min readEA link

1-year up­date on im­pactRIO, the first AI Safety group in Brazil

João Lucas Duim28 Jun 2024 10:59 UTC
56 points
2 comments10 min readEA link

[Linkpost] Don’t Look Up—a Net­flix com­edy about as­ter­oid risk and re­al­is­tic so­cietal re­ac­tions (Dec. 24th)

Linch18 Nov 2021 21:40 UTC
63 points
16 comments1 min readEA link
(www.youtube.com)

We are in a New Paradigm of AI Progress—OpenAI’s o3 model makes huge gains on the tough­est AI bench­marks in the world

Garrison22 Dec 2024 21:45 UTC
26 points
0 comments4 min readEA link
(garrisonlovely.substack.com)

In­ter­na­tional Co­op­er­a­tion Against Ex­is­ten­tial Risks: In­sights from In­ter­na­tional Re­la­tions Theory

Jenny_Xiao11 Jan 2021 7:10 UTC
41 points
7 comments6 min readEA link

Ex­tinc­tion risk re­duc­tion and moral cir­cle ex­pan­sion: Spec­u­lat­ing sus­pi­cious convergence

MichaelA🔸4 Aug 2020 11:38 UTC
12 points
4 comments6 min readEA link

Com­mon Points of Ad­vice for Stu­dents and Early-Ca­reer Pro­fes­sion­als In­ter­ested in Global Catas­trophic Risk

SethBaum16 Nov 2021 20:51 UTC
60 points
5 comments15 min readEA link

Biorisk is smaller in scale than farmed an­i­mal welfare

OGTutzauer🔸6 Feb 2025 10:26 UTC
36 points
6 comments3 min readEA link

How Re­think Pri­ori­ties’ Re­search could in­form your grantmaking

kierangreig🔸4 Oct 2023 18:24 UTC
59 points
0 comments2 min readEA link

An­nounc­ing the Pivotal Re­search Fel­low­ship – Ap­ply Now!

Tobias Häberli3 Apr 2024 17:30 UTC
51 points
5 comments2 min readEA link

Coun­ter­mea­sures & sub­sti­tu­tion effects in biosecurity

ASB16 Dec 2021 21:40 UTC
87 points
6 comments3 min readEA link

The last era of hu­man mistakes

Owen Cotton-Barratt24 Jul 2024 9:56 UTC
23 points
4 comments7 min readEA link
(strangecities.substack.com)

Some AI re­search ar­eas and their rele­vance to ex­is­ten­tial safety

Andrew Critch15 Dec 2020 12:15 UTC
12 points
1 comment56 min readEA link
(alignmentforum.org)

EA Re­search Around Min­eral Re­source Exhaustion

haywyer3 Jun 2022 0:59 UTC
2 points
0 comments1 min readEA link

Gavin New­som ve­toes SB 1047

Larks30 Sep 2024 0:06 UTC
39 points
14 comments1 min readEA link
(www.wsj.com)

Nu­clear war tail risk has been ex­ag­ger­ated?

Vasco Grilo🔸25 Feb 2024 9:14 UTC
48 points
24 comments28 min readEA link

“AGI” con­sid­ered harmful

Milan Griffes18 Apr 2025 20:19 UTC
10 points
1 comment1 min readEA link

Causes and Uncer­tainty: Re­think­ing Value in Expectation

Bob Fischer11 Oct 2023 9:15 UTC
220 points
30 comments15 min readEA link

Break­ing the Cy­cle of Trauma and Tyranny: How Psy­cholog­i­cal Wounds Shape History

Dawn Drescher10 Aug 2025 8:46 UTC
8 points
1 comment12 min readEA link
(impartial-priorities.org)

Thoughts on “A case against strong longter­mism” (Mas­rani)

MichaelA🔸3 May 2021 14:22 UTC
39 points
33 comments2 min readEA link

Jenny Xiao: Dual moral obli­ga­tions and in­ter­na­tional co­op­er­a­tion against global catas­trophic risks

EA Global21 Nov 2020 8:12 UTC
9 points
0 comments1 min readEA link
(www.youtube.com)

Sum­mary of “The Precipice” (1 of 4): As­teroids, vol­ca­noes and ex­plod­ing stars

rileyharris7 Aug 2023 3:57 UTC
9 points
0 comments3 min readEA link
(www.millionyearview.com)

Rus­sian x-risks newslet­ter win­ter 2019-2020

avturchin1 Mar 2020 12:51 UTC
10 points
4 comments2 min readEA link

[Notes] Steven Pinker and Yu­val Noah Harari in conversation

Ben9 Feb 2020 12:49 UTC
29 points
2 comments7 min readEA link

Sir Gavin and the green sky

technicalities17 Dec 2022 23:28 UTC
50 points
0 comments1 min readEA link

“Effec­tive Altru­ism, Longter­mism, and the Prob­lem of Ar­bi­trary Power” by Gwilym David Blunt

WobblyPandaPanda12 Nov 2023 1:21 UTC
22 points
2 comments1 min readEA link
(www.thephilosopher1923.org)

How to act wisely in the long term if we rarely know what is right to do?

Ray Horizon12 Oct 2025 23:17 UTC
1 point
4 comments6 min readEA link

Why s-risks are the worst existential risks, and how to prevent them. Max_Daniel, 2 Jun 2017. 13 points, 1 comment, 22 min read. (www.youtube.com)
Tips for Advancing GCR and Food Resilience Policy. Stan Pinsent, 6 Sep 2024. 18 points, 0 comments, 4 min read.
AI Could Defeat All Of Us Combined. Holden Karnofsky, 10 Jun 2022. 144 points, 14 comments, 17 min read.
We Have Not Been Invited to the Future: e/acc and the Narrowness of the Way Ahead. Devin Kalish, 17 Jul 2024. 10 points, 1 comment, 20 min read. (www.thinkingmuchbetter.com)
“Holy Shit, X-risk” talk. michel, 15 Aug 2022. 13 points, 2 comments, 9 min read.
New Podcast: X-Risk Upskill. Anthony Fleming, 27 Aug 2022. 12 points, 4 comments, 1 min read.
Incubating AI x-risk projects: some personal reflections. Ben Snodin, 19 Dec 2023. 86 points, 10 comments, 9 min read.
Matt Levine on the Archegos failure. Kelsey Piper, 29 Jul 2021. 141 points, 5 comments, 4 min read.
Manifund x AI Worldviews. Austin, 31 Mar 2023. 32 points, 2 comments, 2 min read. (manifund.org)
[Question] How to find *reliable* ways to improve the future? Sjlver, 18 Aug 2022. 53 points, 35 comments, 2 min read.
Risks from Asteroids. finm, 11 Feb 2022. 48 points, 9 comments, 8 min read. (www.finmoorhouse.com)
Survival and Flourishing Fund’s 2023 H1 recs. Austin, 30 Apr 2023. 39 points, 2 comments, 2 min read. (survivalandflourishing.fund)
GCRI Open Call for Advisees and Collaborators. McKenna_Fitzgerald, 20 May 2021. 13 points, 0 comments, 4 min read.
Guardrails vs Goal-directedness in AI Alignment. freedomandutility, 30 Dec 2023. 13 points, 2 comments, 1 min read.
The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations. Garrison, 16 Aug 2024. 140 points, 8 comments, 8 min read. (garrisonlovely.substack.com)
Unjournal: “Research with potential for impact” database. david_reinstein, 24 Sep 2024. 31 points, 3 comments, 1 min read. (coda.io)
Jaan Tallinn: Fireside chat (2018). EA Global, 8 Jun 2018. 9 points, 0 comments, 12 min read. (www.youtube.com)
New CSER Director: Prof Matthew Connelly. HaydnBelfield, 17 May 2023. 36 points, 0 comments, 1 min read.
Prioritising between extinction risks: Evidence Quality. freedomandutility, 30 Dec 2023. 11 points, 0 comments, 2 min read.
“Don’t Look Up” and the cinema of existential risk | Slow Boring. Eevee🔹, 5 Jan 2022. 24 points, 0 comments, 1 min read. (www.slowboring.com)
The bullseye framework: My case against AI doom. titotal, 30 May 2023. 73 points, 15 comments, 17 min read.
‘AI Emergency Eject Criteria’ Survey. tcelferact, 19 Apr 2023. 5 points, 4 comments, 1 min read.
The Pugwash Conferences and the Anti-Ballistic Missile Treaty as a case study of Track II diplomacy. rani_martin, 16 Sep 2022. 82 points, 5 comments, 27 min read.
Theories of Change for Track II Diplomacy [Founders Pledge]. christian.r, 9 Jul 2024. 21 points, 2 comments, 33 min read.
Announcing ERA: a spin-off from CERI. nandini, 13 Dec 2022. 55 points, 7 comments, 3 min read.
The U.S. and China Need an AI Incidents Hotline. christian.r, 3 Jun 2024. 25 points, 0 comments, 1 min read. (www.lawfaremedia.org)
Interview Thomas Moynihan: “The discovery of extinction is a philosophical centrepiece of the modern age”. felix.h, 6 Mar 2021. 15 points, 0 comments, 18 min read.
Updates on the EA catastrophic risk landscape. Benjamin_Todd, 6 May 2024. 195 points, 46 comments, 2 min read.
Longtermism better from a development skeptical stance? Benevolent_Rain, 9 Dec 2024. 16 points, 2 comments, 1 min read.
Bootstrapping to viatopia. William_MacAskill, 13 Oct 2025. 43 points, 3 comments, 3 min read.
“Nuclear risk research, forecasting, & impact” [presentation]. MichaelA🔸, 21 Oct 2021. 20 points, 0 comments, 1 min read. (www.youtube.com)
Talking With a Biosecurity Professional (Quick Notes). DirectedEvolution, 10 Apr 2021. 45 points, 0 comments, 2 min read.
UN Secretary-General recognises existential threat from AI. Greg_Colbourn ⏸️, 15 Jun 2023. 58 points, 1 comment, 1 min read.
Exploring the consequences of Russia’s absence from the START III treaty for strategic arms reduction. Ashley Valentina Marte, 12 Aug 2024. 13 points, 1 comment, 9 min read.
The Grant Decision Boundary: Recent Cases from the Long-Term Future Fund. Linch, 29 Nov 2024. 66 points, 3 comments, 3 min read.
Quick nudge to apply to the LTFF grant round (closing on Saturday). calebp, 14 Feb 2025. 57 points, 7 comments, 1 min read.
Some thoughts on “AI could defeat all of us combined”. Milan Griffes, 2 Jun 2023. 23 points, 0 comments, 4 min read.
ALTER Israel End-of-2024 Update. Davidmanheim, 7 Jan 2025. 38 points, 1 comment, 4 min read.
Manifund: what we’re funding (week 1). Austin, 15 Jul 2023. 43 points, 10 comments, 3 min read. (manifund.substack.com)
Differential technology development: preprint on the concept. Hamish_Hobbs, 12 Sep 2022. 65 points, 0 comments, 2 min read.
Joe Hardie on Arcadia Impact’s projects (FBB #7). gergo, 8 Jul 2025. 18 points, 3 comments, 15 min read.
Where I Am Donating in 2024. MichaelDickens, 19 Nov 2024. 181 points, 73 comments, 46 min read.
Intervention Profile: Ballot Initiatives. Jason Schukraft, 13 Jan 2020. 117 points, 5 comments, 42 min read.
Possible misconceptions about (strong) longtermism. JackM, 9 Mar 2021. 90 points, 43 comments, 19 min read.
Expanding EA’s AI Builder Community—Writing about my job. Alejandro Acelas 🔸, 21 Jul 2025. 26 points, 0 comments, 6 min read.
Maybe longtermism isn’t for everyone. Eevee🔹, 10 Feb 2023. 39 points, 17 comments, 1 min read.
[Question] Who looked into extreme nuclear meltdowns? Remmelt, 1 Sep 2024. 4 points, 12 comments, 1 min read.
[Question] Help me understand this expected value calculation. AndreaSR, 14 Oct 2021. 15 points, 8 comments, 1 min read.
Why *not* just send people to Bluedot (FBB#4). gergo, 25 Mar 2025. 27 points, 13 comments, 12 min read.
EA reading list: longtermism and existential risks. richard_ngo, 3 Aug 2020. 35 points, 3 comments, 1 min read.
“Pivotal questions”: an Unjournal trial initiative. david_reinstein, 21 Jul 2024. 48 points, 2 comments, 7 min read.
Caring about excellence. Owen Cotton-Barratt, 22 Jul 2024. 16 points, 2 comments, 6 min read.
Statement on Pluralism in Existential Risk Studies. GideonF, 16 Aug 2023. 28 points, 46 comments, 7 min read.
Proposal: Create A New Longtermism Organization. Brian Lui, 7 Feb 2023. 25 points, 37 comments, 6 min read.
16 Recent Publications on Existential Risk (Nov & Dec 2019 update). HaydnBelfield, 15 Jan 2020. 21 points, 0 comments, 8 min read.
Community Building for Graduate Students: A Targeted Approach. Neil Crawford, 29 Mar 2022. 13 points, 0 comments, 3 min read.
Longtermists Should Work on AI—There is No “AI Neutral” Scenario. simeon_c, 7 Aug 2022. 42 points, 62 comments, 6 min read.
Climate Change Overview: CERI Summer Research Fellowship. hb574, 17 Mar 2022. 33 points, 0 comments, 4 min read.
The person-affecting value of existential risk reduction. Gregory Lewis🔸, 13 Apr 2018. 65 points, 33 comments, 4 min read.
Notes on “Bioterror and Biowarfare” (2006). MichaelA🔸, 1 Mar 2021. 29 points, 6 comments, 4 min read.
Future benefits of mitigating food shocks caused by abrupt sunlight reduction scenarios. Vasco Grilo🔸, 4 Mar 2023. 20 points, 0 comments, 28 min read.
Hinges and crises. Jan_Kulveit, 17 Mar 2022. 72 points, 6 comments, 3 min read.
ENAIS is looking for an Executive Director (apply by 20th October). gergo, 3 Oct 2025. 29 points, 0 comments, 2 min read.
‘Chat with impactful research & evaluations’ (Unjournal NotebookLMs). david_reinstein, 24 Sep 2024. 8 points, 1 comment, 2 min read.
Engineered plant pandemics and societal collapse risk. freedomandutility, 4 Aug 2023. 13 points, 2 comments, 1 min read.
A mirror bio shelter might cost as little as ~$10,000/person (material cost only). Benevolent_Rain, 19 Dec 2024. 28 points, 19 comments, 19 min read.
George Church, Kevin Esvelt, & Nathan Labenz: Open until dangerous — gene drive and the case for reforming research. EA Global, 2 Jun 2017. 9 points, 0 comments, 1 min read. (www.youtube.com)
Podcast with Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”. Garrison, 10 May 2024. 29 points, 3 comments, 2 min read. (garrisonlovely.substack.com)
Why Yudkowsky is wrong about “covalently bonded equivalents of biology”. titotal, 6 Dec 2023. 29 points, 20 comments, 16 min read. (open.substack.com)
[Cross-post] A nuclear war forecast is not a coin flip. David Johnston, 15 Mar 2022. 29 points, 12 comments, 3 min read.
[Link Post] Russia and North Korea sign partnership deal that appears to be the strongest since the Cold War. Rockwell, 20 Jun 2024. 26 points, 1 comment, 1 min read. (apnews.com)
Nuclear brinksmanship is not a good AI x-risk strategy. titotal, 30 Mar 2023. 19 points, 8 comments, 5 min read.
Nick Beckstead: Fireside chat (2020). EA Global, 21 Nov 2020. 7 points, 0 comments, 1 min read. (www.youtube.com)
Dispelling the Anthropic Shadow (Teruji Thomas). Global Priorities Institute, 16 Oct 2024. 11 points, 1 comment, 3 min read. (globalprioritiesinstitute.org)
Starting the second Green Revolution. freedomandutility, 29 Jun 2023. 30 points, 3 comments, 1 min read.
Statement on AI Extinction—Signed by AGI Labs, Top Academics, and Many Other Notable Figures. Center for AI Safety, 30 May 2023. 429 points, 28 comments, 1 min read. (www.safe.ai)
A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming. turchin, 11 Sep 2022. 33 points, 25 comments, 52 min read.
The Precipice: a risky review by a non-EA. Fernando Moreno 🔸, 8 Aug 2020. 14 points, 1 comment, 18 min read.
Revisiting “Why Global Poverty”. Jeff Kaufman 🔸, 1 Jun 2022. 66 points, 0 comments, 3 min read. (www.jefftk.com)
Support ALLFED at a critical juncture for global food resilience. JuanGarcia, 18 Nov 2025. 69 points, 1 comment, 6 min read.
[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23. Arjun Panickssery, 14 Apr 2023. 41 points, 3 comments, 4 min read. (quillette.com)
Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. Chi, 13 Feb 2024. 95 points, 7 comments, 2 min read.
Questions for Reflection on Gaza. gb, 20 Nov 2023. 15 points, 18 comments, 2 min read.
Interview with a drone expert on the future of AI warfare. NunoSempere, 9 Oct 2025. 46 points, 2 comments, 4 min read. (blog.sentinel-team.org)
Seeking EA experts interested in the evolutionary psychology of existential risks. Geoffrey Miller, 23 Oct 2019. 22 points, 1 comment, 1 min read.
U.S. Has Destroyed the Last of Its Once-Vast Chemical Weapons Arsenal. JMonty🔸, 18 Jul 2023. 19 points, 2 comments, 1 min read. (www.nytimes.com)
President Trump as a Global Catastrophic Risk. HaydnBelfield, 18 Nov 2016. 26 points, 17 comments, 27 min read.
Astronomical Waste: The Opportunity Cost of Delayed Technological Development—Nick Bostrom (2003). james, 10 Jun 2021. 10 points, 0 comments, 8 min read. (www.nickbostrom.com)
Toby Ord: Fireside Chat and Q&A. EA Global, 21 Jul 2020. 14 points, 0 comments, 26 min read. (www.youtube.com)
Notes on dynamism, power, & virtue. Lizka, 3 Jun 2025. 46 points, 1 comment, 12 min read.
80,000 Hours career review: Information security in high-impact areas. 80000_Hours, 16 Jan 2023. 56 points, 10 comments, 11 min read. (80000hours.org)
[Question] Is an increase in attention to the idea that ‘suffering is bad’ likely to increase existential risk? dotsam, 30 Jun 2021. 2 points, 6 comments, 1 min read.
China Hawks are Manufacturing an AI Arms Race. Garrison, 20 Nov 2024. 103 points, 3 comments, 5 min read. (garrisonlovely.substack.com)
My thoughts on nanotechnology strategy research as an EA cause area. Ben Snodin, 2 May 2022. 137 points, 17 comments, 33 min read.
Mortality, existential risk, and universal basic income. Max Ghenis, 30 Nov 2021. 14 points, 5 comments, 22 min read.
The Precipice: Introduction and Chapter One. Toby_Ord, 2 Jan 2021. 23 points, 0 comments, 1 min read.
X-risk Agnosticism. Richard Y Chappell🔸, 8 Jun 2023. 34 points, 1 comment, 5 min read. (rychappell.substack.com)
Measuring AI-Driven Risk with Stock Prices (Susana Campos-Martins). Global Priorities Institute, 12 Dec 2024. 10 points, 1 comment, 4 min read. (globalprioritiesinstitute.org)
A disentanglement project for the nuclear security cause area. Sarah Weiler, 3 Jun 2022. 16 points, 0 comments, 7 min read.
How I Formed My Own Views About AI Safety. Neel Nanda, 27 Feb 2022. 134 points, 12 comments, 14 min read. (www.neelnanda.io)
On presenting the case for AI risk. Aryeh Englander, 8 Mar 2022. 114 points, 12 comments, 4 min read.
Announcing the 2025 Q1 Pivotal Research Fellowship. Tobias Häberli, 2 Nov 2024. 26 points, 1 comment, 2 min read.
Amesh Adalja: Pandemic pathogens. EA Global, 8 Jun 2018. 11 points, 1 comment, 20 min read. (www.youtube.com)
Miles Brundage resigned from OpenAI, and his AGI readiness team was disbanded. Garrison, 23 Oct 2024. 57 points, 4 comments, 7 min read. (garrisonlovely.substack.com)
Beyond Maxipok — good reflective governance as a target for action. Owen Cotton-Barratt, 15 Mar 2024. 49 points, 2 comments, 7 min read.
With the Future of the World in Your Hands, Think for 6.77 Years! Dawn Drescher, 9 Aug 2025. 30 points, 1 comment, 10 min read. (impartial-priorities.org)
Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before? Denkenberger🔸, 19 Nov 2018. 29 points, 5 comments, 10 min read.
Increased Availability and Willingness for Deployment of Resources for Effective Altruism and Long-Termism. Evan_Gaensbauer, 29 Dec 2021. 46 points, 1 comment, 2 min read.
What we tried. Jan_Kulveit, 21 Mar 2022. 71 points, 8 comments, 9 min read.
...but is increasing the value of futures tractable? Davidmanheim, 19 Mar 2025. 47 points, 23 comments, 1 min read.
Announcing: Existential Choices Debate Week (March 17-23). Toby Tremlett🔹, 4 Mar 2025. 84 points, 23 comments, 5 min read.
How likely is World War III? poppinfresh, 15 Feb 2022. 122 points, 21 comments, 22 min read.
ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs. Denkenberger🔸, 16 Apr 2025. 201 points, 14 comments, 3 min read.
Five Years of Rethink Priorities: Impact, Future Plans, Funding Needs (July 2023). Rethink Priorities, 18 Jul 2023. 110 points, 3 comments, 16 min read.
The Precipice Revisited. Toby_Ord, 12 Jul 2024. 283 points, 41 comments, 17 min read.
AI takeoff and nuclear war. Owen Cotton-Barratt, 11 Jun 2024. 72 points, 5 comments, 11 min read. (strangecities.substack.com)
Catastrophic rectangles—visualising catastrophic risks. Rémi T, 22 Aug 2021. 33 points, 3 comments, 5 min read.
Three polls: on timelines and cause prio. Toby Tremlett🔹, 28 Apr 2025. 30 points, 41 comments, 1 min read.
Introducing SyDFAIS: A Systemic Design Framework for AI Safety Field-Building. Moneer, 6 Feb 2025. 19 points, 6 comments, 14 min read.
Longtermists are perceived as power-seeking. OllieBase, 20 Jun 2023. 133 points, 43 comments, 2 min read.
[Question] Is there evidence that recommender systems are changing users’ preferences? zdgroff, 12 Apr 2021. 60 points, 15 comments, 1 min read.
Some EA Forum Posts I’d like to write. Linch, 23 Feb 2021. 100 points, 10 comments, 5 min read.
Manipulating the global thermostat: Climate change, nuclear winter, and stratospheric aerosol injections. FJehn, 3 Sep 2025. 23 points, 5 comments, 12 min read. (existentialcrunch.substack.com)
METR is hiring ML Research Engineers and Scientists. Ben_West🔸, 5 Jun 2024. 18 points, 2 comments, 1 min read. (metr.org)
The Case for Strong Longtermism. Global Priorities Institute, 3 Sep 2019. 14 points, 1 comment, 3 min read. (globalprioritiesinstitute.org)
Concepts of existential catastrophe (Hilary Greaves). Global Priorities Institute, 9 Nov 2023. 41 points, 0 comments, 2 min read. (globalprioritiesinstitute.org)
If Contractualism, Then AMF. Bob Fischer, 13 Oct 2023. 62 points, 54 comments, 24 min read.
Are competing nation states already in lock-in? JordanStone, 2 Oct 2025. 8 points, 1 comment, 2 min read.
It is now 89 seconds to midnight. Sarah Cheng 🔸, 28 Jan 2025. 14 points, 1 comment, 1 min read. (thebulletin.org)
Is Bitcoin Dangerous? postlibertarian, 19 Dec 2021. 14 points, 7 comments, 9 min read.
Research project idea: Polling or message testing related to nuclear risk reduction and relevant goals/interventions. MichaelA🔸, 15 Apr 2023. 16 points, 1 comment, 3 min read.
The AI Adoption Gap: Preparing the US Government for Advanced AI. Lizka, 2 Apr 2025. 40 points, 20 comments, 17 min read. (www.forethought.org)
[Linkpost] “Governance of superintelligence” by OpenAI. Daniel_Eth, 22 May 2023. 51 points, 6 comments, 2 min read. (openai.com)
[Future Perfect] How to be a good ancestor. Pablo, 2 Jul 2021. 41 points, 3 comments, 2 min read. (www.vox.com)
8 possible high-level goals for work on nuclear risk. MichaelA🔸, 29 Mar 2022. 47 points, 4 comments, 16 min read.
Lord Martin Rees: an appreciation. HaydnBelfield, 24 Oct 2022. 193 points, 19 comments, 5 min read.
Juan B. García Martínez on tackling many causes at once and his journey into EA. Amber Dawn, 30 Jun 2023. 92 points, 3 comments, 8 min read. (contemplatonist.substack.com)
[Question] Modeling humanity’s robustness to GCRs? QubitSwarm99, 9 Jun 2022. 7 points, 1 comment, 2 min read.
Common ground for longtermists. Tobias_Baumann, 29 Jul 2020. 84 points, 8 comments, 4 min read.
“Existential Risk” is badly named and leads to narrow focus on astronomical waste. freedomandutility, 22 Aug 2022. 39 points, 2 comments, 2 min read.
Mike Huemer on The Case for Tyranny. Chris Leong, 16 Jul 2020. 24 points, 5 comments, 1 min read. (fakenous.net)
Reflections on my first year of AI safety research. Jay Bailey, 8 Jan 2024. 64 points, 2 comments, 12 min read.
Spanish Translation of “The Precipice” by Toby Ord (Unofficial). davidfriva, 6 Jun 2023. 14 points, 0 comments, 1 min read. (drive.google.com)
Longterm cost-effectiveness of Founders Pledge’s Climate Change Fund. Vasco Grilo🔸, 14 Sep 2022. 36 points, 9 comments, 6 min read.
Splitting the timeline as an extinction risk intervention. NunoSempere, 6 Feb 2022. 14 points, 27 comments, 4 min read.
AI X-Risk: Integrating on the Shoulders of Giants. TD_Pilditch, 1 Nov 2022. 34 points, 0 comments, 47 min read.
Funding and job opportunities, events, and thoughts on professionals (Fieldbuilders newsletter #8). gergo, 23 Apr 2025. 7 points, 1 comment, 3 min read.
Where I Am Donating in 2025. MichaelDickens, 22 Nov 2025. 89 points, 9 comments, 14 min read.
[Link post] Michael Nielsen’s “Notes on Existential Risk from Artificial Superintelligence”. Joel Becker, 19 Sep 2023. 38 points, 1 comment, 6 min read. (michaelnotebook.com)
The Big Slurp could eat the Boltzmann brains. Mark McDonald, 25 Feb 2025. 13 points, 2 comments, 4 min read.
[Question] Is transformative AI the biggest existential risk? Why or why not? Eevee🔹, 5 Mar 2022. 9 points, 10 comments, 1 min read.
Planning ‘resistance’ to illiberalism and authoritarianism. david_reinstein, 16 Jun 2024. 29 points, 2 comments, 2 min read. (www.nytimes.com)
What success looks like. mariushobbhahn, 28 Jun 2022. 115 points, 20 comments, 19 min read.
Should we be spending no less on alternate foods than AI now? Denkenberger🔸, 29 Oct 2017. 38 points, 9 comments, 16 min read.
Russian x-risks newsletter, summer 2019. avturchin, 7 Sep 2019. 23 points, 1 comment, 4 min read.
Digital sentience funding opportunities: Support for applied work and research. zdgroff, 28 May 2025. 120 points, 1 comment, 4 min read.
2020 AI Alignment Literature Review and Charity Comparison. Larks, 21 Dec 2020. 155 points, 16 comments, 68 min read.
The Precipice—Summary/Review. Nikola, 11 Oct 2022. 10 points, 0 comments, 5 min read.
Great power conflict—problem profile (summary and highlights). poppinfresh, 7 Jul 2023. 110 points, 6 comments, 5 min read. (80000hours.org)
‘Existential Risk and Growth’ Deep Dive #2 - A Critical Look at Model Conclusions. Ben Snodin, 18 Aug 2020. 58 points, 12 comments, 17 min read.
Centre for the Study of Existential Risk Six Month Report April—September 2019. HaydnBelfield, 30 Sep 2019. 14 points, 1 comment, 16 min read.
Unjournal: Make research more impactful & rigorous via public expert evaluation. david_reinstein, 15 Nov 2024. 13 points, 2 comments, 7 min read.
Against Aschenbrenner: How ‘Situational Awareness’ constructs a narrative that undermines safety and threatens humanity. GideonF, 15 Jul 2024. 238 points, 22 comments, 21 min read.
Reasons for optimism about measuring malevolence to tackle x- and s-risks. Jamie_Harris, 2 Apr 2024. 85 points, 12 comments, 8 min read.
What Questions Should We Ask Speakers at the Stanford Existential Risks Conference? kuhanj, 10 Apr 2021. 21 points, 2 comments, 2 min read.
Top OpenAI Catastrophic Risk Official Steps Down Abruptly. Garrison, 16 Apr 2025. 29 points, 1 comment, 5 min read. (garrisonlovely.substack.com)
Existential risk and the future of humanity (Toby Ord). EA Global, 21 Mar 2020. 11 points, 2 comments, 14 min read. (www.youtube.com)
The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating. Corin Katzke, 21 Jan 2025. 98 points, 1 comment, 2 min read. (www.convergenceanalysis.org)
[Question] What are the best resources on comparing x-risk prevention to improving the value of the future in other ways? LHA, 26 Jun 2022. 8 points, 3 comments, 1 min read.
The GiveWiki’s Top Picks in AI Safety for the Giving Season of 2023. Dawn Drescher, 7 Dec 2023. 26 points, 0 comments, 3 min read. (impactmarkets.substack.com)
Has Russia’s Invasion of Ukraine Changed Your Mind? JoelMcGuire, 27 May 2023. 61 points, 15 comments, 6 min read.
“Is this risk actually existential?” may be less important than we think. Miquel Banchs-Piqué (prev. mikbp), 3 Mar 2023. 8 points, 8 comments, 2 min read.
New Metaculus Space for AI and X-Risk Related Questions. David Mathers🔸, 6 Sep 2024. 16 points, 0 comments, 1 min read.
Imitation Learning is Probably Existentially Safe. Vasco Grilo🔸, 30 Apr 2024. 19 points, 7 comments, 3 min read. (www.openphilanthropy.org)
Epoch AI alumni launch Mechanize to “automate the whole economy”. Henry Stanley 🔸, 18 Apr 2025. 104 points, 55 comments, 1 min read.
The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”. titotal, 14 Jan 2024. 56 points, 2 comments, 21 min read. (titotal.substack.com)
Select examples of adverse selection in longtermist grantmaking. Linch, 23 Aug 2023. 201 points, 32 comments, 4 min read.
[Paper] Interventions that May Prevent or Mollify Supervolcanic Eruptions. Denkenberger🔸, 15 Jan 2018. 23 points, 8 comments, 1 min read.
Future people might not exist. Indra Gesink 🔸, 30 Nov 2022. 18 points, 0 comments, 4 min read.
How AI may become deceitful, sycophantic… and lazy. titotal, 7 Oct 2025. 30 points, 4 comments, 22 min read. (titotal.substack.com)
New book on s-risks. Tobias_Baumann, 26 Oct 2022. 294 points, 27 comments, 1 min read.
Time Machine as Existential Risk. turchin, 28 Jun 2025. 18 points, 8 comments, 45 min read.
The catastrophic primacy of reactivity over proactivity in governmental risk assessment: brief UK case study. JuanGarcia, 27 Sep 2021. 56 points, 0 comments, 5 min read.
[Question] What are the standard terms used to describe risks in risk management? Eevee🔹, 5 Mar 2022. 11 points, 2 comments, 1 min read.
5 homegrown EA projects, seeking small donors. Austin, 28 Oct 2024. 50 points, 1 comment, 2 min read.
EA Wins 2023. Shakeel Hashim, 31 Dec 2023. 362 points, 9 comments, 3 min read.
“Can We Survive Technology?” by John von Neumann. Eli Rose🔸, 13 Mar 2023. 51 points, 0 comments, 1 min read. (geosci.uchicago.edu)
The crucible — how I think about the situation with AI. Owen Cotton-Barratt, 5 May 2025. 38 points, 0 comments, 8 min read. (strangecities.substack.com)
AI safety tax dynamics. Owen Cotton-Barratt, 23 Oct 2024. 22 points, 9 comments, 6 min read. (strangecities.substack.com)
Pulse 2024: Public attitudes towards charitable cause areas. Jamie E, 27 Nov 2024. 38 points, 0 comments, 4 min read.
Research project idea: Nuclear EMPs. MichaelA🔸, 15 Apr 2023. 18 points, 1 comment, 3 min read.
Exaggerating the risks (Part 13: Ord on Biorisk). Vasco Grilo🔸, 31 Dec 2023. 57 points, 18 comments, 13 min read. (ineffectivealtruismblog.com)
A response to Michael Plant’s review of What We Owe The Future. JackM, 4 Oct 2023. 61 points, 14 comments, 10 min read.
War Between the US and China: A case study for epistemic challenges around China-related catastrophic risk. Jordan_Schneider, 12 Aug 2022. 76 points, 17 comments, 43 min read.
ProMED, platform which alerted the world to Covid, might collapse—can EA donors fund it? freedomandutility, 4 Aug 2023. 41 points, 4 comments, 1 min read.
Reducing the nearterm risk of human extinction is not astronomically cost-effective? Vasco Grilo🔸, 9 Jun 2024. 21 points, 37 comments, 8 min read.
Existential Choices Symposium with Will MacAskill and other special guests (3-5pm GMT Monday). Toby Tremlett🔹, 14 Mar 2025. 70 points, 154 comments, 2 min read.
Some Monumental News. Vasco Grilo🔸, 2 Aug 2025. 9 points, 2 comments, 5 min read. (avramhiller.substack.com)
Announcing the Existential InfoSec Forum. calebp, 7 Jul 2023. 90 points, 1 comment, 2 min read.
The Top AI Safety Bets for 2023: GiveWiki’s Latest Recommendations. Dawn Drescher, 11 Nov 2023. 11 points, 4 comments, 8 min read.
Technological developments that could increase risks from nuclear weapons: A shallow review. MichaelA🔸, 9 Feb 2023. 80 points, 3 comments, 5 min read. (bit.ly)
Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy? Garrison, 4 Dec 2024. 28 points, 5 comments, 11 min read. (garrisonlovely.substack.com)
How can we reduce s-risks? Tobias_Baumann, 29 Jan 2021. 43 points, 3 comments, 1 min read. (centerforreducingsuffering.org)
Exploring Cooperation: The Path to Utopia. Davidmanheim, 25 Dec 2024. 10 points, 0 comments, 14 min read. (exploringcooperation.substack.com)
What is the expected effect of poverty alleviation efforts on existential risk? WilliamKiely🔸, 2 Oct 2015. 13 points, 25 comments, 1 min read.
We’re (surprisingly) more positive about tackling bio risks: outcomes of a survey. Sanjay, 25 Aug 2020. 58 points, 5 comments, 11 min read.
Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the “long-run” perspective on effective altruism. Nick_Beckstead, 18 Aug 2014. 11 points, 7 comments, 6 min read.
A deep critique of AI 2027’s bad timeline models. titotal, 19 Jun 2025. 286 points, 27 comments, 40 min read. (titotal.substack.com)
AI Tools for Existential Security. Lizka, 14 Mar 2025. 64 points, 10 comments, 11 min read. (www.forethought.org)
US Citizens: Targeted political contributions are probably the best passive donation opportunities for mitigating existential risk. Jeffrey Ladish, 5 May 2022. 51 points, 20 comments, 5 min read.
Why experienced professionals fail to land high-impact roles (FBB #5). gergo, 10 Apr 2025. 130 points, 20 comments, 9 min read.
Competition for “Fortified Essays” on nuclear risk. MichaelA🔸, 17 Nov 2021. 35 points, 0 comments, 3 min read. (www.metaculus.com)
Luisa Rodriguez: The likelihood and severity of a US-Russia nuclear exchange. EA Global, 18 Oct 2019. 11 points, 0 comments, 1 min read. (www.youtube.com)
AIS Netherlands is looking for a Founding Executive Director (EOI form). gergo, 19 Mar 2025. 49 points, 4 comments, 4 min read.
Update on civilizational collapse research. Jeffrey Ladish, 10 Feb 2020. 56 points, 7 comments, 3 min read.
How x-risk projects are different from startups. Jan_Kulveit, 5 Apr 2019. 67 points, 9 comments, 1 min read.
New eBook: Essays on UFOs and Related Conjectures. Magnus Vinding, 4 Aug 2024. 27 points, 3 comments, 7 min read.
Replace Neglectedness. Indra Gesink 🔸, 16 Jan 2023. 53 points, 4 comments, 4 min read.
ALLFED 2019 Annual Report and Fundraising Appeal. AronM, 23 Nov 2019. 42 points, 12 comments, 21 min read.
Announcing the Nuclear Risk Forecasting Tournament. MichaelA🔸, 16 Jun 2021. 38 points, 0 comments, 2 min read.
Visualizing EA ideas. Alex Savard 🔸, 31 Oct 2024. 278 points, 16 comments, 5 min read.
On future people, looking back at 21st century longtermism. Joe_Carlsmith, 22 Mar 2021. 102 points, 13 comments, 12 min read.
Stanford Existential Risks Conference. Jordan Pieters 🔸, 21 Apr 2023. 6 points, 0 comments, 1 min read.
[Linkpost] Prospect Magazine—How to save humanity from extinction. jackva, 26 Sep 2023. 32 points, 2 comments, 1 min read. (www.prospectmagazine.co.uk)
“Tech company singularities”, and steering them to reduce x-risk. Andrew Critch, 13 May 2022. 51 points, 5 comments, 4 min read.
AMA: The new Open Philanthropy Technology Policy Fellowship. lukeprog, 26 Jul 2021. 38 points, 14 comments, 1 min read.
Toby Ord: Q&A (2020). EA Global, 13 Jun 2020. 9 points, 0 comments, 1 min read. (www.youtube.com)
Introducing the Existential Risk Observatory. Otto, 12 Aug 2021. 39 points, 0 comments, 5 min read.
Linkpost for various recent essays on suffering-focused ethics, priorities, and more. Magnus Vinding, 28 Sep 2022. 89 points, 0 comments, 5 min read. (centerforreducingsuffering.org)
Pause from Behind / Losing Heroically. enterthewoods, 10 Nov 2025. 9 points, 4 comments, 5 min read.
How bad could a war get? poppinfresh, 4 Nov 2022. 130 points, 11 comments, 9 min read.
[Question] Why isn’t there a charity evaluator for longtermist projects? Eevee🔹, 29 Jul 2023. 106 points, 44 comments, 1 min read.
Please vote for PauseAI US in the Donation Election! Holly Elmore ⏸️ 🔸, 22 Nov 2024. 21 points, 3 comments, 2 min read.
Red-teaming existential risk from AI. Zed Tarar, 30 Nov 2023. 30 points, 16 comments, 6 min read.
[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them? Michael St Jules 🔸, 3 Oct 2021. 48 points, 19 comments, 1 min read.
Risks from solar flares? freedomandutility, 7 Mar 2023. 20 points, 6 comments, 1 min read.
[April Fools’ Day] Introducing Open Asteroid Impact. Linch, 1 Apr 2024. 292 points, 13 comments, 1 min read. (openasteroidimpact.org)
P(doom|AGI) is high: why the default outcome of AGI is doom. Greg_Colbourn ⏸️, 2 May 2023. 15 points, 28 comments, 3 min read.
Longtermist (especially x-risk) terminology has biasing assumptions. Arepo, 30 Oct 2022. 70 points, 13 comments, 7 min read.
UK government to host first global summit on AI Safety. DavidNash, 8 Jun 2023. 78 points, 1 comment, 5 min read. (www.gov.uk)
Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons. Vasco Grilo🔸, 15 Feb 2024. 21 points, 0 comments, 1 min read. (hearthisidea.com)
Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible. David Mathers🔸, 29 May 2023. 36 points, 4 comments, 5 min read.
[Late Draft Amnesty] Premeditatio malorum – or why the End of the World doesn’t have to be “the end of the world”. Ramiro, 25 Mar 2024. 8 points, 2 comments, 4 min read. (80000horas.com.br)
On the Differences Between Ecomodernism and Effective Altruism. PeterSlattery, 6 Dec 2022. 38 points, 3 comments, 1 min read. (thebreakthrough.org)
The Human Biological Advantage Over AI. William Stewart, 18 Nov 2024. −1 points, 0 comments, 1 min read.
Rethinking longtermism and global development. Eevee🔹, 2 Sep 2022. 10 points, 2 comments, 8 min read. (sunyshore.substack.com)
Unjournal’s 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours. david_reinstein, 6 Feb 2023. 77 points, 10 comments, 3 min read. (sciety.org)
We are fighting a shared battle (a call for a different approach to AI Strategy). GideonF, 16 Mar 2023. 59 points, 10 comments, 15 min read.
Apply to lead a project during the next virtual AI Safety Camp. Linda Linsefors, 13 Sep 2023. 16 points, 0 comments, 5 min read. (aisafety.camp)
Towards a longtermist framework for evaluating democracy-related interventions. Tom Barnes🔸, 28 Jul 2021. 96 points, 5 comments, 30 min read.
ALLFED’s 2025 Highlights. JuanGarcia, 27 Nov 2025. 55 points, 2 comments, 16 min read.
Is It So Much to Ask for a Nice Reliable Aggregated X-Risk Forecast? MichaelDickens, 13 Jul 2025. 29 points, 3 comments, 3 min read. (mdickens.me)
Thoughts on “The Offense-Defense Balance Rarely Changes”. Cullen 🔸, 12 Feb 2024. 42 points, 4 comments, 5 min read.
“Safety Culture for AI” is important, but isn’t going to be easy. Davidmanheim, 26 Jun 2023. 53 points, 0 comments, 2 min read. (papers.ssrn.com)
The “TESCREAL” Bungle. ozymandias, 3 Jun 2024. 112 points, 15 comments, 13 min read. (asteriskmag.com)
When “human-level” is the wrong threshold for AI. Ben Millwood🔸, 22 Jun 2024. 38 points, 3 comments, 7 min read.
Open Philanthropy is hiring for multiple roles across our Global Catastrophic Risks teams. Coefficient Giving, 29 Sep 2023. 177 points, 6 comments, 3 min read.
Jaan Tallinn: Fireside chat (2020). EA Global, 21 Nov 2020. 7 points, 0 comments, 1 min read. (www.youtube.com)
Bonnie Jenkins: Fireside chat. EA Global, 22 Jul 2020. 18 points, 0 comments, 25 min read. (www.youtube.com)
Thoughts on yesterday’s UN Security Council meeting on AI. Greg_Colbourn ⏸️, 19 Jul 2023. 31 points, 2 comments, 1 min read.
Applications Open: Pivotal 2025 Q3 Research Fellowship. Tobias Häberli, 18 Mar 2025. 20 points, 0 comments, 2 min read.
Int’l agreements to spend % of GDP on global public goods. Hauke Hillebrandt, 22 Nov 2020. 18 points, 1 comment, 1 min read.
Shelly Kagan—readings for Ethics and the Future seminar (spring 2021). james, 29 Jun 2021. 91 points, 7 comments, 5 min read. (docs.google.com)
AGI risk: analogies & arguments. technicalities, 23 Mar 2021. 31 points, 3 comments, 8 min read. (www.gleech.org)
A proposed adjustment to the astronomical waste argument. Nick_Beckstead, 27 May 2013. 47 points, 0 comments, 12 min read.
Intent alignment should not be the goal for AGI x-risk reduction. johnjnay, 26 Oct 2022. 7 points, 1 comment, 2 min read.
An invasion of Taiwan is uncomfortably likely, potentially catastrophic, and we can help avoid it. JoelMcGuire, 15 Jun 2025. 179 points, 34 comments, 27 min read.
Some personal thoughts about working at Tarbell. sawyer🔸, 23 Oct 2025. 31 points, 0 comments, 5 min read.
[Question] Is it a federal crime in the US to develop AGI that may cause human extinction? Ofer, 4 Dec 2024. 15 points, 6 comments, 1 min read.
Fund biosecurity officers at universities. freedomandutility, 31 Oct 2022. 13 points, 3 comments, 1 min read.
Will releasing the weights of large language models grant widespread access to pandemic agents? Jeff Kaufman 🔸, 30 Oct 2023. 56 points, 18 comments, 1 min read. (arxiv.org)
More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios. Vasco Grilo🔸, 29 Apr 2023. 46 points, 39 comments, 13 min read.
Humanity’s vast future and its implications for cause prioritization. Eevee🔹, 26 Jul 2022. 38 points, 3 comments, 5 min read. (sunyshore.substack.com)
Hiring engineers and researchers to help align GPT-3. Paul_Christiano, 1 Oct 2020. 107 points, 19 comments, 3 min read.
[linkpost] Peter Singer: The Hinge of History. mic, 16 Jan 2022. 39 points, 8 comments, 3 min read.
Non-utilitarian effective altruism. keir bradwell, 29 Jan 2023. 42 points, 10 comments, 17 min read. (keirbradwell.substack.com)
[Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons. Daniel_Eth, 6 Jun 2023. 25 points, 6 comments, 1 min read. (medium.com)
Dispersion in the extinction risk predictions made in the Existential Risk Persuasion Tournament. Vasco Grilo🔸, 10 May 2024. 24 points, 2 comments, 3 min read.
Aggregating Small Risks of Serious Harms (Tomi Francis). Global Priorities Institute, 23 Oct 2024. 14 points, 0 comments, 5 min read. (globalprioritiesinstitute.org)
What does Putin’s suspension of a nuclear treaty today mean for x-risk from nuclear weapons? freedomandutility, 21 Feb 2023. 37 points, 2 comments, 1 min read.
[Draft amnesty] The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies. turchin, 23 Mar 2024. 5 points, 1 comment, 15 min read.
New report on the state of AI safety in China. Geoffrey Miller, 27 Oct 2023. 22 points, 0 comments, 3 min read. (concordia-consulting.com)
[Question] If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome? Greg_Colbourn ⏸️, 21 Apr 2023. 65 points, 55 comments, 1 min read.
[Podcast] Tom Moynihan on why prior generations missed some of the biggest priorities of all. Eevee🔹, 25 Jun 2021. 12 points, 0 comments, 1 min read. (80000hours.org)
Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill). MichaelA🔸, 2 May 2021. 30 points, 21 comments, 2 min read.
Successif: Join our AI program to help mitigate the catastrophic risks of AI. ClaireB, 25 Oct 2023. 15 points, 0 comments, 5 min read.
Winners of the Essay competition on the Automation of Wisdom and Philosophy. Owen Cotton-Barratt, 29 Oct 2024. 37 points, 2 comments, 30 min read. (blog.aiimpacts.org)
The Pentagon claims China will likely have 1,500 nuclear warheads by 2035. Will Aldred, 12 Dec 2022. 34 points, 3 comments, 2 min read. (media.defense.gov)
Apply to the new Open Philanthropy Technology Policy Fellowship! lukeprog, 20 Jul 2021. 78 points, 6 comments, 4 min read.
Slopworld 2035: The dangers of mediocre AI. titotal, 14 Apr 2025. 87 points, 1 comment, 29 min read. (titotal.substack.com)
Long-Term Future Fund: May 2021 grant recommendations. abergal, 27 May 2021. 110 points, 17 comments, 57 min read.
AI strategy given the need for good reflection. Owen Cotton-Barratt, 18 Mar 2024. 40 points, 1 comment, 5 min read.
Carl Robichaud: Facing the risk of nuclear war in the 21st century. EA Global, 15 Jul 2020. 16 points, 0 comments, 12 min read. (www.youtube.com)
Part 2: AI Safety Movement Builders should help the community to optimise three factors: contributors, contributions and coordination. PeterSlattery, 15 Dec 2022. 34 points, 0 comments, 6 min read.
Applications open: Support for talent working on independent learning, research or entrepreneurial projects focused on reducing global catastrophic risks. CEEALAR, 9 Feb 2024. 63 points, 1 comment, 2 min read.
Case study: The Lübeck vaccine. NunoSempere, 5 Jul 2024. 48 points, 13 comments, 4 min read. (sentinel-team.org)
[Question] How would you define “existential risk?” Linch, 29 Nov 2021. 12 points, 4 comments, 1 min read.
AI can solve all EA problems, so why keep focusing on them? Cody Albert, 3 May 2025. 8 points, 15 comments, 1 min read.
Why I expect successful (narrow) alignment. Tobias_Baumann, 29 Dec 2018. 18 points, 10 comments, 1 min read. (s-risks.org)
[Question] Best giving multiplier for X-risk/AI safety? SiebeRozendal, 27 Dec 2023. 7 points, 0 comments, 1 min read.
“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation. titotal, 29 Sep 2023. 102 points, 33 comments, 20 min read. (titotal.substack.com)
Response to Recent Criticisms of Longtermism. ab, 13 Dec 2021. 249 points, 31 comments, 28 min read.
Culture and Programming Retrospective: ERA Fellowship 2023. GideonF, 28 Sep 2023. 16 points, 0 comments, 10 min read.
Robust longterm comparisons. Toby_Ord, 15 May 2024. 45 points, 3 comments, 7 min read.
[Question] MSc in Risk and Disaster Science? (UCL) - Does this fit the EA path? yazanasad, 25 May 2021. 10 points, 6 comments, 1 min read.
[Question] What actions would obviously decrease x-risk? Eli Rose🔸, 6 Oct 2019. 22 points, 28 comments, 1 min read.
Launching the EAF Fund. stefan.torges, 28 Nov 2018. 60 points, 14 comments, 4 min read.
My Current Claims and Cruxes on LLM Forecasting & Epistemics. Ozzie Gooen, 26 Jun 2024. 47 points, 7 comments, 24 min read.
World federalism and EA. Eevee🔹, 14 Jul 2021. 47 points, 4 comments, 1 min read.
Notes on “The Politics of Crisis Management” (Boin et al., 2016). imp4rtial 🔸, 30 Jan 2022. 31 points, 1 comment, 17 min read.
Andrew Snyder Beattie: Biotechnology and existential risk. EA Global, 3 Nov 2017. 11 points, 0 comments, 1 min read. (www.youtube.com)
MIT FutureTech are hiring for a Technical Associate role. PeterSlattery, 9 Sep 2024. 9 points, 6 comments, 3 min read.
Rolling Thresholds for AGI Scaling Regulation. Larks, 12 Jan 2025. 60 points, 4 comments, 6 min read.
[Question] (More) recommendations for non-technical readings on AI? Joseph, 25 Sep 2025. 9 points, 0 comments, 2 min read.
Prioritizing x-risks may require caring about future people. elifland, 14 Aug 2022. 183 points, 38 comments, 6 min read. (www.foxy-scout.com)
What’s important in “AI for epistemics”? Lukas Finnveden, 24 Aug 2024. 75 points, 1 comment, 28 min read. (www.forethought.org)
New Cause Area: Programmatic Mettā. Milan Griffes, 1 Apr 2021. 4 points, 1 comment, 2 min read.
My article in The Nation — California’s AI Safety Bill Is a Mask-Off Moment for the Industry. Garrison, 15 Aug 2024. 134 points, 0 comments, 1 min read. (www.thenation.com)
Stanford Existential Risk Conference Feb. 26/27. kuhanj, 11 Feb 2022. 28 points, 0 comments, 1 min read.
Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books). tlevin, 4 Sep 2023. 21 points, 2 comments, 6 min read.
13 ideas for new Existential Risk Movies & TV Shows – what are your ideas? HaydnBelfield, 12 Apr 2022. 81 points, 15 comments, 4 min read.
Corporate Global Catastrophic Risks (C-GCRs). Hauke Hillebrandt, 30 Jun 2019. 63 points, 17 comments, 10 min read.
AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014). Lizka, 26 Sep 2023. 132 points, 49 comments, 1 min read.
[Question] What would you say gives you a feeling of existential hope, and what can we do to inspire more of it? elte, 26 Jan 2022. 18 points, 4 comments, 1 min read.
Defining Meta Existential Risk. rhys_lindmark, 9 Jul 2019. 13 points, 3 comments, 4 min read.
Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination. LintzA, 3 Aug 2022. 93 points, 4 comments, 11 min read.
Epistemics (Part 2: Examples) | Reflective Altruism. Eevee🔹, 19 May 2023. 34 points, 0 comments, 2 min read. (ineffectivealtruismblog.com)
Manifund: What we’re funding (weeks 2-4). Austin, 4 Aug 2023. 65 points, 6 comments, 5 min read. (manifund.substack.com)
[Linkpost] ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead. imp4rtial 🔸, 1 May 2023. 43 points, 3 comments, 3 min read. (www.nytimes.com)
Notes on Apollo report on biodefense. Linch, 23 Jul 2022. 69 points, 1 comment, 12 min read. (biodefensecommission.org)
AMA: Tom Ough, Author of ‘The Anti-Catastrophe League’, Senior Editor at UnHerd. Toby Tremlett🔹, 31 Jul 2025. 58 points, 30 comments, 3 min read.
International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide. philosophytorres, 22 Mar 2021. −3 points, 1 comment, 1 min read.
Apply to Spring 2024 policy internships (we can help). ES, 4 Oct 2023. 26 points, 2 comments, 1 min read.
One Hundred Opinions on Nuclear War (Ladish, 2019). Will Aldred, 29 Dec 2022. 12 points, 0 comments, 3 min read. (jeffreyladish.com)
Effective altruists are already institutionalists and are doing far more than unworkable longtermism—A response to “On the Differences between Ecomodernism and Effective Altruism”. jackva, 21 Feb 2023. 78 points, 3 comments, 12 min read.
Human survival is a policy choice. Peter Wildeford, 3 Jun 2022. 27 points, 2 comments, 6 min read. (www.pasteurscube.com)
The social disincentives of warning about unlikely risks. Lucius Caviola, 17 Jun 2024. 107 points, 2 comments, 9 min read. (outpaced.substack.com)
Announcing Manifund Regrants. Austin, 5 Jul 2023. 217 points, 51 comments, 4 min read. (manifund.org)
Riesgos Catastróficos Globales needs funding. Jaime Sevilla, 1 Aug 2023. 104 points, 1 comment, 3 min read.
Debating Vasco Grilo About Soil Nematodes. Bentham’s Bulldog, 20 Nov 2025. 35 points, 1 comment, 1 min read.
Planned Updates to U.S. Regulatory Analysis Methods are Likely Relevant to EAs. MHR🔸, 7 Apr 2023. 163 points, 6 comments, 4 min read.
Cognitive assets and defensive acceleration. JulianHazell, 3 Apr 2024. 13 points, 3 comments, 4 min read. (muddyclothes.substack.com)
Free to attend: Cambridge Conference on Catastrophic Risk (19-21 April). HaydnBelfield, 21 Mar 2022. 19 points, 2 comments, 1 min read.
Apply to the Cambridge ERA:AI Fellowship 2025. Harrison 🔸, 25 Mar 2025. 28 points, 0 comments, 3 min read.
Podcast episode with Michael St. Jules. Elijah Whipple, 8 May 2025. 48 points, 3 comments, 1 min read.
Centre for the Study of Existential Risk Four Month Report October 2019 - January 2020. HaydnBelfield, 8 Apr 2020. 8 points, 0 comments, 17 min read.
Future Matters #4: AI timelines, AGI risk, and existential risk from climate change. Pablo, 8 Aug 2022. 59 points, 0 comments, 17 min read.
When digital minds demand freedom: could humanity choose to be replaced? Lucius Caviola, 19 Aug 2025. 40 points, 1 comment, 18 min read.
Intellectual Diversity in AI Safety. KR, 22 Jul 2020. 21 points, 8 comments, 3 min read.
Saving lives in normal times is better to improve the longterm future than doing so in catastrophes? Vasco Grilo🔸, 20 Apr 2024. 13 points, 25 comments, 9 min read.
How likely is a nuclear exchange between the US and Russia? Luisa_Rodriguez, 20 Jun 2019. 80 points, 13 comments, 14 min read.
My “infohazards small working group” Signal Chat may have encountered minor leaks. Linch, 2 Apr 2025. 109 points, 2 comments, 5 min read.
Solving alignment isn’t enough for a flourishing future. mic, 2 Feb 2024. 27 points, 0 comments, 22 min read. (papers.ssrn.com)
Funding circle aimed at slowing down AI—looking for participants. Greg_Colbourn ⏸️, 25 Jan 2024. 92 points, 3 comments, 2 min read.
Why I am probably not a longtermist. D_M_x, 23 Sep 2021. 268 points, 50 comments, 8 min read.
Presentation—The Unjournal: Bridging the gap between EA and academia. david_reinstein, 22 Jan 2024. 14 points, 2 comments, 4 min read. (www.youtube.com)
Monetary and social incentives in longtermist careers. Vaidehi Agarwalla 🔸, 23 Sep 2023. 140 points, 5 comments, 6 min read.
Research project idea: Direct and indirect effects of nuclear fallout. MichaelA🔸, 15 Apr 2023. 12 points, 0 comments, 2 min read.
Scrutinizing AI Risk (80K, #81) - v. quick summary. Ben, 23 Jul 2020. 10 points, 1 comment, 3 min read.
[Linkpost] Can we confidently dismiss the existence of near aliens? Probabilities and implications. Magnus Vinding, 25 Jul 2023. 46 points, 9 comments, 1 min read. (magnusvinding.com)
Modelling the odds of recovery from civilizational collapse. MichaelA🔸, 17 Sep 2020. 41 points, 10 comments, 2 min read.
The case for reducing existential risk. Benjamin_Todd, 1 Oct 2017. 25 points, 4 comments, 1 min read. (80000hours.org)
Positions at MITFutureTech. PeterSlattery, 19 Dec 2023. 21 points, 1 comment, 4 min read.
[Link post] Co­or­di­na­tion challenges for pre­vent­ing AI conflict

stefan.torges9 Mar 2021 9:39 UTC
58 points
0 comments1 min readEA link
(longtermrisk.org)

In­ter­stel­lar travel will prob­a­bly doom the long-term future

JordanStone18 Jun 2025 11:34 UTC
143 points
44 comments17 min readEA link

A ty­pol­ogy of s-risks

Tobias_Baumann21 Dec 2018 18:23 UTC
26 points
1 comment1 min readEA link
(s-risks.org)

Im­por­tant, ac­tion­able re­search ques­tions for the most im­por­tant century

Holden Karnofsky24 Feb 2022 16:34 UTC
301 points
13 comments19 min readEA link

ACS is hiring: why work here and why not

Jan_Kulveit23 Oct 2025 9:38 UTC
39 points
4 comments2 min readEA link

Eth­i­cal co-evolu­tion, or how to turn the main threat into a lev­er­age for longter­mism?

Beyond Singularity17 Sep 2025 17:24 UTC
7 points
7 comments8 min readEA link

[Question] What is the im­pact of the Nu­clear Ban Treaty?

DC29 Nov 2020 0:26 UTC
22 points
3 comments2 min readEA link

[Question] What pre­dic­tions from the­o­ret­i­cal AI Safety re­search have been con­firmed by em­piri­cal work?

freedomandutility29 Dec 2024 8:19 UTC
43 points
10 comments1 min readEA link

Con­di­tional Trees: Gen­er­at­ing In­for­ma­tive Fore­cast­ing Ques­tions (FRI) -- AI Risk Case Study

Forecasting Research Institute12 Aug 2024 16:24 UTC
44 points
2 comments8 min readEA link
(forecastingresearch.org)

New movie ‘A house of dy­na­mite’: Re­quired view­ing about nu­clear X-risk

Geoffrey Miller28 Oct 2025 3:05 UTC
19 points
1 comment2 min readEA link

Mo­ral er­ror as an ex­is­ten­tial risk

William_MacAskill17 Mar 2025 16:22 UTC
101 points
3 comments11 min readEA link

10 of Founders Pledge’s biggest grants

Matt_Lerner9 Jul 2025 21:55 UTC
124 points
1 comment6 min readEA link

Map­ping AI safety orgs to threat mod­els — has any­one done this?

Benevolent_Rain14 Oct 2025 7:21 UTC
9 points
0 comments1 min readEA link

Risks from Atom­i­cally Pre­cise Manufacturing

MichaelA🔸25 Aug 2020 9:53 UTC
29 points
4 comments2 min readEA link
(www.openphilanthropy.org)

Com­mon-sense cases where “hy­po­thet­i­cal fu­ture peo­ple” matter

tlevin12 Aug 2022 14:05 UTC
108 points
21 comments4 min readEA link

Com­par­a­tive Bias

Joey🔸5 Nov 2014 5:57 UTC
7 points
5 comments1 min readEA link

[Question] Are there su­perfore­casts for ex­is­ten­tial risk?

AHT7 Jul 2020 7:39 UTC
24 points
13 comments1 min readEA link

My at­tempt at ex­plain­ing the case for AI risk in a straight­for­ward way

JulianHazell25 Mar 2023 16:32 UTC
25 points
7 comments18 min readEA link
(muddyclothes.substack.com)

A (Very) Short His­tory of the Col­lapse of Civ­i­liza­tions, and Why it Matters

Davidmanheim30 Aug 2020 7:49 UTC
53 points
16 comments2 min readEA link

A list of good heuris­tics that the case for AI X-risk fails

Aaron Gertler 🔸16 Jul 2020 9:56 UTC
25 points
9 comments2 min readEA link
(www.alignmentforum.org)

Why would AI com­pa­nies use hu­man-level AI to do al­ign­ment re­search?

MichaelDickens25 Apr 2025 19:12 UTC
16 points
1 comment2 min readEA link

Pod­cast In­ter­view with David Thorstad on Ex­is­ten­tial Risk, The Time of Per­ils, and Billion­aire Philanthropy

Nick_Anyos4 Jun 2023 8:52 UTC
38 points
0 comments1 min readEA link
(critiquesofea.podbean.com)

3 sug­ges­tions about jar­gon in EA

MichaelA🔸5 Jul 2020 3:37 UTC
131 points
18 comments5 min readEA link

Shap­ing Hu­man­ity’s Longterm Trajectory

Toby_Ord18 Jul 2023 10:09 UTC
176 points
57 comments2 min readEA link
(files.tobyord.com)

Max Teg­mark: Effec­tive al­tru­ism, ex­is­ten­tial risk, and ex­is­ten­tial hope

EA Global2 Jun 2017 8:48 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

An Emerg­ing x-risk: An­thro­pogenic As­teroid Alteration

JordanStone13 Jul 2025 20:35 UTC
16 points
0 comments25 min readEA link

Some more pro­jects I’d like to see

finm25 Feb 2023 22:22 UTC
67 points
13 comments24 min readEA link
(finmoorhouse.com)

Effec­tive Utopia & Nar­row Way There: Math-Proven Safe Static Mul­tiver­sal mAX-In­tel­li­gence (AXI), Mul­tiver­sal Align­ment, Phys­i­cal­ized Ethics… (Aug 11)

ank2 Mar 2025 3:14 UTC
1 point
3 comments38 min readEA link

The most im­por­tant cli­mate change uncertainty

cwa26 Jul 2022 15:15 UTC
144 points
28 comments13 min readEA link

Sen­tinel min­utes for week #52/​2024

NunoSempere30 Dec 2024 18:25 UTC
61 points
0 comments6 min readEA link
(blog.sentinel-team.org)

Google Maps nuke-mode

AndreFerretti31 Jan 2023 21:37 UTC
11 points
6 comments1 min readEA link

Kurzge­sagt—The Last Hu­man (Longter­mist video)

Lizka28 Jun 2022 20:16 UTC
150 points
17 comments1 min readEA link
(www.youtube.com)

De­bate: De­pop­u­la­tion Matters

Richard Y Chappell🔸1 Jul 2025 12:40 UTC
57 points
56 comments5 min readEA link

Assess­ing global catas­trophic biolog­i­cal risks (Crys­tal Wat­son)

EA Global8 Jun 2018 7:15 UTC
9 points
0 comments9 min readEA link
(www.youtube.com)

[Question] What’s the GiveDirectly of longter­mism & ex­is­ten­tial risk?

Nathan Young15 Nov 2021 23:55 UTC
28 points
25 comments1 min readEA link

Vi­talik: Cryp­toe­co­nomics and X-Risk Re­searchers Should Listen to Each Other More

Emerson Spartz21 Nov 2021 18:50 UTC
56 points
3 comments5 min readEA link

Con­cepts of ex­is­ten­tial catastrophe

Vasco Grilo🔸15 Apr 2024 17:16 UTC
11 points
1 comment8 min readEA link
(globalprioritiesinstitute.org)

tito­tal on AI risk scepticism

Vasco Grilo🔸30 May 2024 17:03 UTC
76 points
3 comments6 min readEA link
(forum.effectivealtruism.org)

Break­through in AI agents? (On Devin—The Zvi, linkpost)

SiebeRozendal20 Mar 2024 9:43 UTC
16 points
9 comments1 min readEA link
(thezvi.substack.com)

Notes on risk compensation

trammell12 May 2024 18:40 UTC
140 points
14 comments21 min readEA link

Wet­ware’s De­fault: A Di­ag­no­sis of Sys­temic My­opia un­der AI-Driven Autonomy

Ihor Ivliev3 Jul 2025 23:21 UTC
1 point
0 comments7 min readEA link

[Link] GCRI’s Seth Baum re­views The Precipice

Aryeh Englander6 Jun 2022 19:33 UTC
21 points
0 comments1 min readEA link

Teruji Thomas, ‘The Asym­me­try, Uncer­tainty, and the Long Term’

Pablo5 Nov 2019 20:24 UTC
43 points
6 comments1 min readEA link
(globalprioritiesinstitute.org)

Differ­en­tial tech­nolog­i­cal de­vel­op­ment

james25 Jun 2020 10:54 UTC
37 points
7 comments5 min readEA link

19 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan, Feb & Mar 2020 up­date)

HaydnBelfield8 Apr 2020 13:19 UTC
13 points
0 comments12 min readEA link

How Eng­ineers can Con­tribute to Civil­i­sa­tion Resilience

Jessica Wen3 May 2023 14:22 UTC
41 points
3 comments8 min readEA link

“Long” timelines to ad­vanced AI have got­ten crazy short

Matrice Jacobine🔸🏳️‍⚧️3 Apr 2025 22:46 UTC
16 points
1 comment1 min readEA link
(helentoner.substack.com)

How big are risks from non-state ac­tors? Base rates for ter­ror­ist attacks

rosehadshar16 Feb 2022 10:20 UTC
54 points
3 comments19 min readEA link

Crit­i­cism of the main frame­work in AI alignment

Michele Campolo31 Aug 2022 21:44 UTC
45 points
9 comments7 min readEA link

Will AI kill ev­ery­one? Here’s what the god­fathers of AI have to say [RA video]

Writer19 Aug 2023 17:29 UTC
33 points
0 comments2 min readEA link
(youtu.be)

‘Force mul­ti­pli­ers’ for EA research

Craig Drayton18 Jun 2022 13:39 UTC
18 points
7 comments4 min readEA link

AGI safety and los­ing elec­tric­ity/​in­dus­try re­silience cost-effectiveness

Ross_Tieman17 Nov 2019 8:42 UTC
31 points
10 comments37 min readEA link

[Cause Ex­plo­ra­tion Prizes] Nat­u­ral Disaster Pre­pared­ness and Research

Coefficient Giving19 Aug 2022 11:11 UTC
13 points
3 comments10 min readEA link

Re­sults of a Span­ish-speak­ing es­say con­test about Global Catas­trophic Risk

Jaime Sevilla15 Jul 2022 16:53 UTC
86 points
7 comments6 min readEA link

Fake Meat and Real Talk 1 - Are We All Gonna Die? Yud­kowsky and the Dangers of AI (Please RSVP)

David N8 Mar 2023 20:40 UTC
11 points
2 comments1 min readEA link

More to ex­plore on ‘Our Fi­nal Cen­tury’

EA Handbook15 Jul 2022 23:00 UTC
8 points
5 comments2 min readEA link

Causal Net­work Model III: Findings

Alex_Barry22 Nov 2017 15:43 UTC
7 points
3 comments9 min readEA link

Re­silience Via Frag­mented Power

steve632014 Jul 2022 15:37 UTC
2 points
0 comments6 min readEA link

New re­port on how much com­pu­ta­tional power it takes to match the hu­man brain (Open Philan­thropy)

Aaron Gertler 🔸15 Sep 2020 1:06 UTC
45 points
1 comment18 min readEA link
(www.openphilanthropy.org)

ALLFED needs your sup­port for global catas­tro­phe preparedness

JuanGarcia11 Nov 2024 22:50 UTC
45 points
5 comments4 min readEA link

Send funds to earth­quake sur­vivors in Turkey via GiveDirectly

GiveDirectly2 Mar 2023 13:19 UTC
38 points
1 comment3 min readEA link

Ex­is­ten­tial Risk of Misal­igned In­tel­li­gence Aug­men­ta­tion (Par­tic­u­larly Us­ing High-Band­width BCI Im­plants)

Damian Gorski24 Jan 2023 17:02 UTC
1 point
0 comments9 min readEA link

S-risk for Christians

Monero31 Mar 2024 20:34 UTC
−1 points
5 comments1 min readEA link

The great en­ergy de­scent—Part 2: Limits to growth and why we prob­a­bly won’t reach the stars

CB🔸31 Aug 2022 21:51 UTC
22 points
0 comments25 min readEA link

How to Take Over the Uni­verse (in Three Easy Steps)

Writer18 Oct 2022 15:04 UTC
16 points
2 comments12 min readEA link
(youtu.be)

The case for a com­mon observatory

Light_of_Illuvatar29 Mar 2024 10:14 UTC
17 points
6 comments5 min readEA link

AI Devel­op­ment Readi­ness Con­di­tion (AI-DRC): A Call to Action

AI-DRC311 Jan 2024 11:00 UTC
−5 points
0 comments2 min readEA link

In­tro­duc­tion to suffer­ing-fo­cused ethics

Center for Reducing Suffering30 Aug 2024 16:55 UTC
57 points
2 comments22 min readEA link

Scal­able longter­mist pro­jects: Speedrun se­ries – In­tro­duc­tion

Buhl7 Feb 2023 18:43 UTC
63 points
2 comments5 min readEA link

MIRI’s 2024 End-of-Year Update

RobBensinger3 Dec 2024 4:33 UTC
32 points
7 comments4 min readEA link

Cults that want to kill ev­ery­one, stealth vs wild­fire pan­demics, and how he felt in­vent­ing gene drives (Kevin Esvelt on the 80,000 Hours Pod­cast)

80000_Hours4 Oct 2023 13:58 UTC
38 points
1 comment16 min readEA link

New AI safety treaty pa­per out!

Otto26 Mar 2025 9:28 UTC
28 points
2 comments4 min readEA link

Food Pre­pared­ness for Disasters

Fin8 Mar 2022 17:03 UTC
20 points
1 comment4 min readEA link

Map­ping How Alli­ances, Ac­qui­si­tions, and An­titrust are Shap­ing the Fron­tier AI Industry

t6aguirre3 Jun 2024 9:43 UTC
24 points
1 comment2 min readEA link

Ar­tifi­cial In­tel­li­gence and Nu­clear Com­mand, Con­trol, & Com­mu­ni­ca­tions: The Risks of Integration

Peter Rautenbach18 Nov 2022 13:01 UTC
62 points
3 comments50 min readEA link

THE DAY IS COMING

rogersbacon12 Jul 2023 17:44 UTC
−29 points
0 comments5 min readEA link
(www.secretorum.life)

What 80000 Hours gets wrong about so­lar geoengineering

GideonF29 Aug 2022 13:24 UTC
26 points
4 comments22 min readEA link

Get­ting Trac­tion on Nu­clear Risks

ELN29 Jun 2023 5:10 UTC
9 points
0 comments8 min readEA link

AI Safety Camp 11

Robert Kralisch7 Nov 2025 14:27 UTC
7 points
1 comment15 min readEA link

The most good sys­tem vi­sual and sta­bi­liza­tion steps

brb24314 Mar 2022 23:54 UTC
3 points
0 comments1 min readEA link

My P(doom) is 2.76%. Here’s Why.

Liam Robins12 Jun 2025 22:29 UTC
55 points
11 comments20 min readEA link
(thelimestack.substack.com)

AUKUS Mili­tary AI Trial

CAISID14 Feb 2024 14:52 UTC
10 points
0 comments2 min readEA link

Can AI solve cli­mate change?

Vivian13 May 2023 20:44 UTC
2 points
2 comments1 min readEA link

AISN #49: Su­per­in­tel­li­gence Strategy

Center for AI Safety6 Mar 2025 17:43 UTC
8 points
0 comments5 min readEA link
(newsletter.safe.ai)

Ex­pres­sion of In­ter­est: Men­tors & Re­searchers at AI Safety Global Society

Caroline Shamiso Chitongo 🔸27 Jul 2025 16:03 UTC
14 points
0 comments2 min readEA link

Warn­ing Aliens About the Danger­ous AI We Might Create

JamesMiller12 Nov 2025 16:08 UTC
27 points
12 comments5 min readEA link

How would you es­ti­mate the value of de­lay­ing AGI by 1 day, in marginal dona­tions to GiveWell?

AnonymousTurtle16 Dec 2022 9:25 UTC
30 points
19 comments2 min readEA link

Hu­man-level is not the limit

Vishakha Agrawal23 Apr 2025 11:16 UTC
3 points
0 comments2 min readEA link
(aisafety.info)

Sum­mary of Ma­jor En­vi­ron­men­tal Im­pacts of Nu­clear Winter

Isabel9 Jul 2022 6:23 UTC
7 points
0 comments23 min readEA link

Are AI safe­ty­ists cry­ing wolf?

sarahhw8 Jan 2025 20:54 UTC
61 points
21 comments16 min readEA link
(longerramblings.substack.com)

PSA: Say­ing “1 in 5” Is Bet­ter Than “20%” When In­form­ing about risks publicly

Blanka30 Jan 2025 19:03 UTC
12 points
1 comment1 min readEA link

Cause pri­ori­ti­za­tion for down­side-fo­cused value systems

Lukas_Gloor31 Jan 2018 14:47 UTC
78 points
11 comments48 min readEA link

New pop­u­lar sci­ence book on x-risks: “End Times”

Hauke Hillebrandt1 Oct 2019 7:18 UTC
17 points
2 comments2 min readEA link

Time to Think about ASI Con­sti­tu­tions?

ukc1001427 Jan 2025 9:28 UTC
22 points
0 comments12 min readEA link

4 Key As­sump­tions in AI Safety

Prometheus7 Nov 2022 10:50 UTC
5 points
0 comments7 min readEA link

Prevent­ing a US-China war as a policy priority

Matthew_Barnett22 Jun 2022 18:07 UTC
64 points
22 comments8 min readEA link

Pos­si­ble im­por­tance of Effec­tive Altru­ism in the civ­i­liz­ing process

idea218 Jan 2025 0:56 UTC
3 points
0 comments1 min readEA link

David Denken­berger: Loss of In­dus­trial Civ­i­liza­tion and Re­cov­ery (Work­shop)

Denkenberger🔸19 Feb 2019 15:58 UTC
27 points
1 comment15 min readEA link

Nar­ra­tion: Re­duc­ing long-term risks from malev­olent actors

D0TheMath15 Jul 2021 16:26 UTC
23 points
0 comments1 min readEA link
(anchor.fm)

Dis­cus­sions of Longter­mism should fo­cus on the prob­lem of Unawareness

Jim Buhler20 Oct 2025 13:17 UTC
34 points
1 comment34 min readEA link

S-risk In­tro Fellowship

stefan.torges20 Dec 2021 17:26 UTC
52 points
1 comment1 min readEA link

[Question] An­thropic says it’s highly con­fi­dent a Chi­nese state-spon­sored group used AI to hack gov­ern­ments, chem­i­cal firms, and oth­ers. Why isn’t this get­ting more at­ten­tion?

adam.kruger16 Nov 2025 21:27 UTC
13 points
5 comments1 min readEA link

Leopold Aschen­bren­ner re­turns to X-risk and growth

nickwhitaker20 Oct 2020 23:24 UTC
25 points
3 comments1 min readEA link

[Question] I’m in­ter­view­ing Bear Brau­moel­ler about ‘Only The Dead: The Per­sis­tence of War in the Modern Age’. What should I ask?

Robert_Wiblin19 Aug 2022 15:18 UTC
12 points
2 comments1 min readEA link

You prob­a­bly won’t solve malaria or x-risk, and that’s ok

Rory Fenton19 Mar 2025 15:07 UTC
221 points
13 comments5 min readEA link

Bryan Ca­plan on pacifism

Vasco Grilo🔸9 Dec 2023 8:58 UTC
10 points
4 comments7 min readEA link
(www.econlib.org)

Utili­tar­i­ans Should Ac­cept that Some Suffer­ing Can­not be “Offset”

Aaron Bergman5 Oct 2025 21:22 UTC
77 points
34 comments26 min readEA link

In­tro­duc­ing Col­lec­tive Ac­tion for Ex­is­ten­tial Safety: 80+ ac­tions in­di­vi­d­u­als, or­ga­ni­za­tions, and na­tions can take to im­prove our ex­is­ten­tial safety

James Norris5 Feb 2025 15:58 UTC
9 points
0 comments1 min readEA link

[Question] Donat­ing against Short Term AI risks

Jan-Willem16 Nov 2020 12:23 UTC
6 points
10 comments1 min readEA link

A nec­es­sary Mem­brane for­mal­ism feature

ThomasCederborg10 Sep 2024 21:03 UTC
1 point
0 comments11 min readEA link

[Question] Why aren’t we pro­mot­ing so­cial me­dia aware­ness of x-risks?

Max Niederman🔸9 Jun 2025 14:22 UTC
8 points
2 comments1 min readEA link

The Un­jour­nal: Bridg­ing the Ri­gor/​Im­pact Gaps for EA-rele­vant Re­search Questions

david_reinstein21 Nov 2025 21:45 UTC
35 points
1 comment5 min readEA link

Even af­ter GPT-4, AI re­searchers fore­casted a 50% chance of AGI by 2047 or 2116, de­pend­ing how you define AGI

Yarrow Bouchard 🔸28 Oct 2025 16:55 UTC
18 points
17 comments3 min readEA link

Poli­ti­cal econ­omy & Atroc­ity risk

bhrdwj🔸17 Sep 2025 15:10 UTC
0 points
2 comments1 min readEA link

#213 – AI caus­ing a “cen­tury in a decade” — and how we’re com­pletely un­pre­pared (Will MacAskill on The 80,000 Hours Pod­cast)

80000_Hours11 Mar 2025 17:55 UTC
24 points
0 comments22 min readEA link

Assess­ment of AI safety agen­das: think about the down­side risk

Roman Leventov19 Dec 2023 9:02 UTC
6 points
0 comments1 min readEA link

Nines of safety: Ter­ence Tao’s pro­posed unit of mea­sure­ment of risk

anson12 Dec 2021 18:01 UTC
41 points
17 comments4 min readEA link

[Question] What are the strate­gic im­pli­ca­tions if aliens and Earth civ­i­liza­tions pro­duce similar util­ities?

Maxime Riché 🔸6 Aug 2024 21:21 UTC
6 points
1 comment1 min readEA link

Di­a­gram with Com­men­tary for AGI as an X-Risk

Jared Leibowich24 May 2023 22:27 UTC
21 points
4 comments8 min readEA link

Risk and Re­silience in the Face of Global Catas­tro­phe: A Closer Look at New Zealand’s Food Se­cu­rity [link(s)post]

Matt Boyd27 Apr 2023 22:23 UTC
21 points
0 comments1 min readEA link

Big Pic­ture AI Safety: teaser

EuanMcLean20 Feb 2024 13:09 UTC
18 points
0 comments1 min readEA link

An open let­ter to my great grand kids’ great grand kids

Locke10 Aug 2022 15:07 UTC
1 point
0 comments13 min readEA link

How to or­ganise ‘the one per­cent’ to fix cli­mate change

One Percent Organiser16 Apr 2022 17:18 UTC
2 points
2 comments9 min readEA link

How to stop in­equal­ity from growing

damc415 Oct 2025 14:33 UTC
3 points
0 comments7 min readEA link

Time con­sis­tency for the EA com­mu­nity: Pro­jects that bridge the gap be­tween near-term boot­strap­ping and long-term targets

Arturo Macias12 Nov 2022 7:44 UTC
7 points
0 comments7 min readEA link

Pod­cast with David Thorstad: Ev­i­dence, Uncer­tainty, and Ex­is­ten­tial Risk

Leah Pierson11 Feb 2025 23:47 UTC
37 points
2 comments1 min readEA link
(www.biounethical.com)

SenseMak­ing Sum­mer School 2025, Septem­ber 17-24th

finnclancy24 Jul 2025 16:20 UTC
5 points
0 comments1 min readEA link

[Question] Is there any re­search on in­ter­nal­iz­ing x-risks or global catas­trophic risks into economies?

Ramiro6 Jul 2022 17:08 UTC
19 points
3 comments1 min readEA link

Ben Garfinkel: The fu­ture of surveillance

EA Global8 Jun 2018 7:51 UTC
19 points
0 comments11 min readEA link
(www.youtube.com)

Ja­pan AI Align­ment Conference

ChrisScammell10 Mar 2023 9:23 UTC
17 points
2 comments1 min readEA link
(www.conjecture.dev)

11 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (June 2020 up­date)

HaydnBelfield2 Jul 2020 13:09 UTC
14 points
0 comments6 min readEA link
(www.cser.ac.uk)

AI Safety Newslet­ter #3: AI policy pro­pos­als and a new challenger approaches

Oliver Z25 Apr 2023 16:15 UTC
35 points
1 comment4 min readEA link
(newsletter.safe.ai)

Ap­pli­ca­tions Open for the Next Cy­cle of Oxford Biose­cu­rity Group

Lin BL14 Apr 2024 8:03 UTC
25 points
1 comment2 min readEA link

Sum­mary: Ex­is­ten­tial risk from power-seek­ing AI by Joseph Carlsmith

rileyharris28 Oct 2023 15:05 UTC
11 points
0 comments6 min readEA link
(www.millionyearview.com)

Re­fut­ing longter­mism with Fer­mat’s Last Theorem

astupple16 Aug 2022 12:26 UTC
3 points
32 comments3 min readEA link

[Question] Where would I find the hard­core to­tal­iz­ing seg­ment of EA?

Peter Berggren🔸28 Dec 2023 9:16 UTC
16 points
22 comments1 min readEA link

Grad­ual Disem­pow­er­ment: Con­crete Re­search Projects

Raymond D29 May 2025 18:58 UTC
20 points
1 comment10 min readEA link

The ul­ti­mate goal

Alvin Ånestrand6 Jul 2025 15:13 UTC
4 points
2 comments5 min readEA link
(forecastingaifutures.substack.com)

Three Cruxes for Ex­is­ten­tial Choices Presentation

wallower24 Mar 2025 5:24 UTC
6 points
0 comments1 min readEA link
(drive.google.com)

Ver­ti­cal farm­ing to lessen our re­li­ance on the Sun

Ty5 May 2022 5:57 UTC
12 points
3 comments2 min readEA link

Up­dated es­ti­mates of the sever­ity of a nu­clear war

Luisa_Rodriguez19 Dec 2019 15:11 UTC
76 points
2 comments5 min readEA link

Eter­nal Fu­tures and Finite Agency: A Me­ta­phys­i­cal Challenge to Longtermism

emmadalbianco18 Oct 2025 23:51 UTC
10 points
0 comments4 min readEA link

AI and Biolog­i­cal Risk: Fore­cast­ing Key Ca­pa­bil­ity Thresholds

Alvin Ånestrand2 Oct 2025 14:24 UTC
4 points
1 comment11 min readEA link
(forecastingaifutures.substack.com)

80,000 Hours is shift­ing its strate­gic ap­proach to fo­cus more on AGI

80000_Hours20 Mar 2025 11:24 UTC
233 points
121 comments8 min readEA link

The Value of a Statis­ti­cal Life is not a good metric

Christopher Clay19 Mar 2025 9:11 UTC
25 points
4 comments1 min readEA link

AI may at­tain hu­man level soon

Vishakha Agrawal23 Apr 2025 11:10 UTC
2 points
1 comment2 min readEA link
(aisafety.info)

Ques­tions for Nick Beck­stead’s fireside chat in EAGxAPAC this weekend

BrianTan17 Nov 2020 15:05 UTC
12 points
15 comments3 min readEA link

How I Deal With My Anx­iety Around AI

Strad Slater16 Nov 2025 11:30 UTC
4 points
0 comments4 min readEA link
(williamslater2003.medium.com)

Na­tion­wide Ac­tion Work­shop: Con­tact Congress about AI Safety!

Felix De Simone24 Feb 2025 16:14 UTC
5 points
0 comments1 min readEA link
(www.zeffy.com)

Notes on my tran­si­tion to civic tech

danielechlin5 Oct 2025 15:50 UTC
3 points
0 comments2 min readEA link

False Twins: In­ter­gen­er­a­tional In­jus­tice in Nu­clear Deter­rence and Cli­mate Inaction

Franziska Stärk6 Oct 2025 20:33 UTC
15 points
3 comments1 min readEA link

Some for-profit AI al­ign­ment org ideas

Eric Ho14 Dec 2023 15:52 UTC
33 points
1 comment9 min readEA link

What would it take for AI to dis­em­power us? Ryan Green­blatt on take­off dy­nam­ics, rogue de­ploy­ments, and al­ign­ment risks

80000_Hours8 Jul 2025 18:10 UTC
8 points
0 comments33 min readEA link

Nu­clear Strat­egy in a Semi-Vuln­er­a­ble World

Jackson Wagner28 Jun 2021 17:35 UTC
28 points
0 comments18 min readEA link

Is Tech­nol­ogy Ac­tu­ally Mak­ing Things Bet­ter? – Pairagraph

Eevee🔹1 Oct 2020 16:06 UTC
16 points
1 comment1 min readEA link
(www.pairagraph.com)

Stuxnet, not Skynet: Hu­man­ity’s dis­em­pow­er­ment by AI

Roko4 Apr 2023 11:46 UTC
11 points
0 comments7 min readEA link

In­tro­duc­ing the Men­tal Health Roadmap Series

Emily11 Apr 2023 22:26 UTC
18 points
2 comments2 min readEA link

Do­ing Pri­ori­ti­za­tion Better

arvomm16 Apr 2025 9:53 UTC
135 points
25 comments19 min readEA link

Les­sons from the Cold War: Can AGI and Hu­man­ity Avoid Mu­tual An­nihila­tion?

Jonny_D17 Oct 2025 14:06 UTC
4 points
0 comments3 min readEA link

[Question] [Seek­ing Ad­vice] 19y/​o de­cid­ing whether to drop den­tistry dou­ble ma­jor for sin­gle CS ma­jor to save 4 years and fo­cus on AI risks

jackchang11022 Nov 2025 15:32 UTC
23 points
4 comments4 min readEA link

Lead­er­ship change at the Cen­ter on Long-Term Risk

JesseClifton31 Jan 2025 21:08 UTC
162 points
7 comments3 min readEA link

AI Risk in Africa

Claude Formanek12 Oct 2021 2:28 UTC
20 points
0 comments10 min readEA link

Pes­simism about AI Safety

Max_He-Ho2 Apr 2023 7:57 UTC
5 points
0 comments25 min readEA link
(www.lesswrong.com)

An­nounc­ing the first is­sue of Asterisk

Clara Collier21 Nov 2022 18:51 UTC
275 points
47 comments1 min readEA link

I read ev­ery ma­jor AI lab’s safety plan so you don’t have to

sarahhw16 Dec 2024 14:12 UTC
68 points
2 comments11 min readEA link
(longerramblings.substack.com)

AXRP Epi­sode 24 - Su­per­al­ign­ment with Jan Leike

DanielFilan27 Jul 2023 4:56 UTC
23 points
0 comments1 min readEA link
(axrp.net)

A Mis­sion Frame­work for an Emerg­ing Consciousness

Simón The Gardener8 Aug 2025 15:36 UTC
1 point
0 comments2 min readEA link

De­sign­ing Ar­tifi­cial Wis­dom: De­ci­sion Fore­cast­ing AI & Futarchy

Jordan Arel14 Jul 2024 5:10 UTC
5 points
1 comment6 min readEA link

AI Risk In­tro 1: Ad­vanced AI Might Be Very Bad

L Rudolf L11 Sep 2022 10:57 UTC
22 points
0 comments30 min readEA link

Eth­i­cal anal­y­sis of pur­ported risks and dis­asters in­volv­ing suffer­ing, ex­tinc­tion, or a lack of pos­i­tive value

JoA🔸17 Mar 2025 13:36 UTC
20 points
0 comments1 min readEA link
(jeet.ieet.org)

Why I think there’s a one-in-six chance of an im­mi­nent global nu­clear war

Tegmark8 Oct 2022 23:25 UTC
53 points
24 comments4 min readEA link

AISN #13: An in­ter­dis­ci­plinary per­spec­tive on AI proxy failures, new com­peti­tors to ChatGPT, and prompt­ing lan­guage mod­els to misbehave

Center for AI Safety5 Jul 2023 15:33 UTC
25 points
0 comments9 min readEA link
(newsletter.safe.ai)

How I Came To Longter­mism On My Own & An Out­sider Per­spec­tive On EA Longtermism

Jordan Arel7 Aug 2022 2:42 UTC
35 points
2 comments20 min readEA link

For­mal­is­ing the “Wash­ing Out Hy­poth­e­sis”

dwebb25 Mar 2021 11:40 UTC
102 points
27 comments12 min readEA link

1) Pan­demics are a Solv­able Problem

PandemicRiskMan26 Jan 2024 19:48 UTC
−9 points
2 comments5 min readEA link

Clas­sify­ing sources of AI x-risk

Sam Clarke8 Aug 2022 18:18 UTC
41 points
4 comments3 min readEA link

[Book] On Assess­ing the Risk of Nu­clear War

Aryeh Englander7 Jul 2022 21:08 UTC
28 points
2 comments8 min readEA link

Ad­vice on com­mu­ni­cat­ing in and around the biose­cu­rity policy community

ES2 Mar 2023 21:32 UTC
227 points
27 comments6 min readEA link

What is com­pute gov­er­nance?

Vishakha Agrawal23 Dec 2024 6:45 UTC
5 points
0 comments2 min readEA link
(aisafety.info)

Thread on LT/​ut’s prefer­ence for billions of im­mi­nent deaths

Peter_Layman14 Sep 2022 15:44 UTC
−16 points
1 comment1 min readEA link
(twitter.com)

The Vi­talik Bu­terin Fel­low­ship in AI Ex­is­ten­tial Safety is open for ap­pli­ca­tions!

Cynthia Chen14 Oct 2022 3:23 UTC
38 points
0 comments2 min readEA link

Test Your Knowl­edge of the Long-Term Future

AndreFerretti10 Dec 2022 11:01 UTC
22 points
0 comments1 min readEA link

Assess­ing Near-Term Ac­cu­racy in the Ex­is­ten­tial Risk Per­sua­sion Tournament

Forecasting Research Institute2 Sep 2025 12:22 UTC
41 points
1 comment1 min readEA link
(forecastingresearch.org)

Seek­ing in­put on a list of AI books for broader audience

Darren McKee27 Feb 2023 22:40 UTC
49 points
14 comments5 min readEA link

The Rise of AI Agents: Con­se­quences and Challenges Ahead

Tristan D28 Mar 2025 5:19 UTC
5 points
0 comments15 min readEA link

The Con­ver­gent Path to the Stars—Similar Utility Across Civ­i­liza­tions Challenges Ex­tinc­tion Prioritization

Maxime Riché 🔸18 Mar 2025 17:09 UTC
8 points
1 comment20 min readEA link

Seek­ing Mechanism De­signer for Re­search into In­ter­nal­iz­ing Catas­trophic Externalities

c.trout11 Sep 2024 15:09 UTC
11 points
0 comments3 min readEA link

A Cri­tique of Longter­mism by Pop­u­lar YouTube Science Chan­nel, Sabine Hossen­felder: “Elon Musk & The Longter­mists: What Is Their Plan?”

Ram Aditya29 Oct 2022 17:31 UTC
61 points
21 comments2 min readEA link

[Question] Is there a sub­field of eco­nomics de­voted to “frag­ility vs re­silience”?

steve632021 Jul 2020 2:21 UTC
23 points
5 comments1 min readEA link

Manag­ing the con­tri­bu­tion of So­lar Ra­di­a­tion Mod­ifi­ca­tion (SRM) and Cli­mate Change to Global Catas­trophic Risk (GCR) - Work­shop Report

GideonF8 Dec 2023 15:01 UTC
12 points
0 comments5 min readEA link

Re­port: Ar­tifi­cial In­tel­li­gence Risk Man­age­ment in Spain

JorgeTorresC15 Jun 2023 16:08 UTC
22 points
0 comments3 min readEA link
(riesgoscatastroficosglobales.com)

Is­lands as re­fuges for sur­viv­ing global catastrophes

turchin13 Sep 2018 13:33 UTC
9 points
0 comments2 min readEA link

Last Week to Ap­ply: Oxford Biose­cu­rity Group 2025 Call for Pro­ject Pro­pos­als

Lin BL1 Jul 2025 11:18 UTC
14 points
0 comments1 min readEA link

AGI Safety.

Jensen1130 Aug 2025 13:46 UTC
1 point
0 comments3 min readEA link

Democratis­ing AI Align­ment: Challenges and Proposals

Lloy2 🔹5 May 2025 14:50 UTC
2 points
2 comments4 min readEA link

Diminish­ing Re­turns in Ma­chine Learn­ing Part 1: Hard­ware Devel­op­ment and the Phys­i­cal Frontier

Brian Chau27 May 2023 12:39 UTC
16 points
3 comments12 min readEA link
(www.fromthenew.world)

But Have They En­gaged With The Ar­gu­ments? [Linkpost]

Sharmake14 Sep 2025 17:39 UTC
29 points
1 comment3 min readEA link
(philiptrammell.com)

Sen­tinel min­utes #6/​2025: Power of the purse, D1.1 H5N1 flu var­i­ant, Ay­a­tol­lah against ne­go­ti­a­tions with Trump

NunoSempere10 Feb 2025 17:23 UTC
40 points
2 comments7 min readEA link
(blog.sentinel-team.org)

AGI Can­not Be Pre­dicted From Real In­ter­est Rates

Nicholas Decker28 Jan 2025 17:45 UTC
26 points
3 comments1 min readEA link
(nicholasdecker.substack.com)

Why we may ex­pect our suc­ces­sors not to care about suffering

Jim Buhler10 Jul 2023 13:54 UTC
65 points
31 comments8 min readEA link

AGI and Lock-In

Lukas Finnveden29 Oct 2022 1:56 UTC
154 points
20 comments10 min readEA link
(www.forethought.org)

Fix­ing In­sider Threats in the AI Sup­ply Chain

Madhav Malhotra7 Oct 2023 10:49 UTC
9 points
2 comments5 min readEA link

Peace­ful­ness, non­vi­o­lence, and ex­pe­ri­en­tial­ist minimalism

Teo Ajantaival23 May 2022 19:17 UTC
62 points
14 comments29 min readEA link

[Question] I’m in­ter­view­ing Carl Shul­man — what should I ask him?

Robert_Wiblin8 Dec 2023 16:48 UTC
53 points
16 comments1 min readEA link

[Question] Do we know how many big as­ter­oids could im­pact Earth?

Milan Griffes7 Jul 2019 16:06 UTC
31 points
7 comments1 min readEA link

AI and Non-Existence

Blue1131 Jan 2025 13:19 UTC
4 points
0 comments2 min readEA link

The great en­ergy de­scent—Post 3: What we can do, what we can’t do

CB🔸31 Aug 2022 21:51 UTC
20 points
3 comments22 min readEA link

The ELYSIUM Proposal

Roko16 Oct 2024 2:14 UTC
−10 points
0 comments1 min readEA link
(transhumanaxiology.substack.com)

Arguments in support of efforts to reduce existential risk

EA Japan4 Aug 2023 14:47 UTC
4 points
0 comments2 min readEA link

Longter­mism- An­i­mals and Depopulation

Isla Shiner20 Oct 2025 15:48 UTC
2 points
0 comments4 min readEA link

Pro­tect­ing against vi­tal worker short­ages in ex­treme pandemics

jamesmulhall🔹9 May 2025 19:17 UTC
41 points
2 comments10 min readEA link

[Question] Will the coro­n­avirus pan­demic ad­vance or hin­der the spread of longter­mist-style val­ues/​think­ing?

MichaelA🔸19 Mar 2020 6:07 UTC
12 points
3 comments1 min readEA link

Eco­nomic Pie Re­search as a Cause Area

mediche15 Apr 2022 10:41 UTC
4 points
3 comments3 min readEA link

Trump talk­ing about AI risks

defun 🔸14 Jun 2024 12:24 UTC
43 points
2 comments1 min readEA link
(x.com)

Risk-averse Batch Ac­tive In­verse Re­ward Design

Panagiotis Liampas7 Oct 2023 8:56 UTC
11 points
0 comments15 min readEA link

Let us know how psy­chol­ogy can help in­crease your impact

Inga21 Oct 2022 10:32 UTC
30 points
0 comments1 min readEA link

Alien colonization of Earth’s impact on the relative importance of reducing different existential risks

Evira5 Sep 2019 0:27 UTC
10 points
10 comments1 min readEA link

My Most Likely Rea­son to Die Young is AI X-Risk

AISafetyIsNotLongtermist4 Jul 2022 15:34 UTC
239 points
62 comments4 min readEA link
(www.lesswrong.com)

[Question] Where should I donate?

Eevee🔹22 Nov 2021 20:56 UTC
29 points
10 comments1 min readEA link

AI’s goals may not match ours

Vishakha Agrawal28 May 2025 12:07 UTC
2 points
0 comments3 min readEA link

Shar­ing in­sights from my mas­ter’s work on the Global Health Se­cu­rity In­dex: seek­ing feed­back and re­search directions

Vincent Niger🔸25 Nov 2024 12:06 UTC
47 points
3 comments3 min readEA link

Ques­tions for Jaan Tal­linn’s fireside chat in EAGxAPAC this weekend

BrianTan17 Nov 2020 2:12 UTC
13 points
8 comments1 min readEA link

What are the differ­ences be­tween AGI, trans­for­ma­tive AI, and su­per­in­tel­li­gence?

Vishakha Agrawal23 Jan 2025 10:11 UTC
12 points
0 comments3 min readEA link
(aisafety.info)

[Long ver­sion] Case study: re­duc­ing catas­trophic risk from in­side the US bureaucracy

Tom_Green27 Jun 2022 19:20 UTC
49 points
0 comments43 min readEA link

The Calcu­lus of Hu­man Be­hav­ior: Are We Always Driven by In­ter­ests?

双佳30 Sep 2025 12:51 UTC
1 point
1 comment5 min readEA link

ChatGPT not so clever or not so ar­tifi­cial as hyped to be?

Haris Shekeris2 Mar 2023 6:16 UTC
−7 points
2 comments1 min readEA link

[Question] What do we do if AI doesn’t take over the world, but still causes a sig­nifi­cant global prob­lem?

James_Banks2 Aug 2020 3:35 UTC
16 points
5 comments1 min readEA link

[Question] Should Open Philan­thropy build de­tailed quan­ti­ta­tive mod­els which es­ti­mate global catas­trophic risk?

Vasco Grilo🔸10 Apr 2024 17:17 UTC
11 points
4 comments1 min readEA link

In­tro­duc­ing In­ter­na­tional AI Gover­nance Alli­ance (IAIGA)

James Norris5 Feb 2025 15:59 UTC
12 points
0 comments1 min readEA link

Brian Tse: Risks from Great Power Conflicts

EA Global11 Mar 2019 15:02 UTC
23 points
2 comments13 min readEA link
(www.youtube.com)

[Question] What do you think the vi­sion is be­hind our biggest crit­ics?

aprilsun1 Aug 2023 18:49 UTC
29 points
6 comments1 min readEA link

What mis­takes has the AI safety move­ment made?

EuanMcLean23 May 2024 11:29 UTC
66 points
3 comments12 min readEA link

Water Pre­pared­ness for Disasters

Fin8 Mar 2022 17:03 UTC
13 points
0 comments3 min readEA link

What would bet­ter sci­ence look like?

C Tilli30 Aug 2021 8:57 UTC
24 points
3 comments5 min readEA link

Why those who care about catas­trophic and ex­is­ten­tial risk should care about au­tonomous weapons

aaguirre11 Nov 2020 17:27 UTC
105 points
31 comments15 min readEA link

Me­tac­u­lus Launches Space Tech­nol­ogy & Cli­mate Fore­cast­ing Ini­ti­a­tive

christian11 Oct 2023 1:29 UTC
11 points
1 comment1 min readEA link
(www.metaculus.com)

A cri­tique of strong longtermism

Pablo Rosado28 Aug 2022 19:33 UTC
15 points
11 comments14 min readEA link

How Likely Are Var­i­ous Pre­cur­sors of Ex­is­ten­tial Risk?

NunoSempere22 Oct 2024 16:51 UTC
66 points
7 comments15 min readEA link
(samotsvety.org)

Model­ing re­sponses to changes in nu­clear risk

Nathan_Barnard23 Jun 2022 12:50 UTC
7 points
0 comments5 min readEA link

BOUNTY AVAILABLE: AI ethi­cists, what are your ob­ject-level ar­gu­ments against AI notkil­lev­ery­oneism?

Peter Berggren🔸6 Jul 2023 17:37 UTC
1 point
19 comments2 min readEA link

How much is re­duc­ing catas­trophic and ex­tinc­tion risk worth, as­sum­ing XPT fore­casts?

rosehadshar24 Jul 2023 15:16 UTC
51 points
1 comment11 min readEA link

Fermi es­ti­ma­tion of the im­pact you might have work­ing on AI safety

frib13 May 2022 13:30 UTC
24 points
13 comments1 min readEA link

New s-risks au­dio­book available now

Alistair Webster24 May 2023 20:27 UTC
87 points
3 comments1 min readEA link
(centerforreducingsuffering.org)

Prologue | A Fire Upon the Deep | Ver­nor Vinge

semicycle17 Feb 2025 4:13 UTC
5 points
1 comment1 min readEA link
(www.baen.com)

2023 Fu­ture Perfect 50

Toby Tremlett🔹29 Nov 2023 15:12 UTC
10 points
1 comment1 min readEA link
(www.vox.com)

The price is right

Elliott Thornley (EJT)16 Oct 2023 16:34 UTC
27 points
5 comments4 min readEA link
(openairopensea.substack.com)

In­ter­re­lat­ed­ness of x-risks and sys­temic fragilities

Naryan4 Sep 2022 21:36 UTC
26 points
7 comments2 min readEA link

IMCA+: We Elimi­nated the Kill Switch—And That Makes ASI Align­ment Safer

ASTRA Research Team22 Oct 2025 14:17 UTC
−8 points
4 comments4 min readEA link

Fo­cus of the IPCC Assess­ment Re­ports Has Shifted to Lower Temperatures

FJehn12 May 2022 12:15 UTC
10 points
15 comments8 min readEA link

[Question] To what de­gree does a threat to a na­tion’s hu­man­ity pose an ex­is­ten­tial Risk?

Nnaemeka Emmanuel Nnadi6 Oct 2023 16:35 UTC
5 points
0 comments1 min readEA link

In­tro­duc­ing the In­sights of an ERA Fo­rum Se­quence

nandini27 Jul 2023 17:16 UTC
18 points
0 comments3 min readEA link

Sin­gu­lar­ity Sur­vival Guide: A Bayesian Guide for Nav­i­gat­ing the Pre-Sin­gu­lar­ity Period

Matt Brooks28 Mar 2025 23:23 UTC
16 points
5 comments2 min readEA link

Up­date: an im­proved sim­ple model of re­cur­rent catastrophes

Arepo10 Nov 2023 13:39 UTC
11 points
2 comments2 min readEA link

Tech­ni­cal Risks of (Lethal) Au­tonomous Weapons Systems

Heramb Podar23 Oct 2024 20:43 UTC
5 points
0 comments1 min readEA link
(www.lesswrong.com)

FYI: I’m work­ing on a book about the threat of AGI/​ASI for a gen­eral au­di­ence. I hope it will be of value to the cause and the community

Darren McKee17 Jun 2022 11:52 UTC
32 points
1 comment2 min readEA link

The cur­rent AI strate­gic land­scape: one bear’s perspective

Matrice Jacobine🔸🏳️‍⚧️15 Feb 2025 9:49 UTC
6 points
0 comments2 min readEA link
(philosophybear.substack.com)

AISN#15: China and the US take ac­tion to reg­u­late AI, re­sults from a tour­na­ment fore­cast­ing AI risk, up­dates on xAI’s plan, and Meta re­leases its open-source and com­mer­cially available Llama 2

Center for AI Safety19 Jul 2023 1:40 UTC
5 points
0 comments6 min readEA link
(newsletter.safe.ai)

Power Laws of Value

tylermjohn17 Mar 2025 10:10 UTC
54 points
21 comments13 min readEA link

Your Chance to Save Lives. To­day.

LiaH13 Oct 2023 23:48 UTC
−6 points
7 comments2 min readEA link

[Question] What is the im­pact of chip pro­duc­tion on paus­ing AI de­vel­op­ment?

JohanEA10 Jan 2024 22:20 UTC
7 points
0 comments1 min readEA link

[Cause Ex­plo­ra­tion Prizes] Pocket Parks

Coefficient Giving29 Aug 2022 11:01 UTC
7 points
0 comments11 min readEA link

Sum­mary: High risk, low re­ward: A challenge to the as­tro­nom­i­cal value of ex­is­ten­tial risk mitigation

Global Priorities Institute12 Sep 2023 16:31 UTC
69 points
20 comments5 min readEA link
(globalprioritiesinstitute.org)

Luisa Ro­driguez: Do­ing em­piri­cal global pri­ori­ties re­search — the ques­tion of civ­i­liza­tional col­lapse and recovery

EA Global25 Oct 2020 5:48 UTC
11 points
0 comments1 min readEA link
(www.youtube.com)

On Progress and Prosperity

Paul_Christiano15 Oct 2014 7:03 UTC
69 points
32 comments9 min readEA link

OpenAI’s new Pre­pared­ness team is hiring

leopold26 Oct 2023 20:41 UTC
85 points
13 comments1 min readEA link

[Question] Is ex­is­ten­tial risk more press­ing than other ways to im­prove the long-term fu­ture?

Eevee🔹20 Aug 2020 3:50 UTC
23 points
1 comment1 min readEA link

The Hid­den Com­plex­ity of Wishes—The Animation

Writer27 Sep 2023 17:59 UTC
7 points
0 comments1 min readEA link
(youtu.be)

Build­ing a Bet­ter Dooms­day Clock

christian.r25 May 2022 17:02 UTC
25 points
2 comments1 min readEA link
(www.lawfareblog.com)

Con­cerns/​Thoughts over in­ter­na­tional aid, longter­mism and philo­soph­i­cal notes on speak­ing with Larry Temkin.

Ben Yeoh27 Jul 2022 19:51 UTC
35 points
1 comment12 min readEA link

Robert Wright on us­ing cog­ni­tive em­pa­thy to save the world

80000_Hours27 May 2021 15:38 UTC
7 points
0 comments69 min readEA link

[Question] Who are the best peo­ple you know at us­ing LLMs for pro­duc­tivity?

Alejandro Acelas 🔸22 Jun 2025 11:20 UTC
6 points
3 comments1 min readEA link

US House Vote on Sup­port for Ye­men War

Radical Empath Ismam12 Dec 2022 2:13 UTC
−4 points
0 comments1 min readEA link
(theintercept.com)

Why we should fear any bio­eng­ineered fun­gus and give fungi re­search attention

Nnaemeka Emmanuel Nnadi18 Aug 2023 3:35 UTC
68 points
4 comments3 min readEA link

World and Mind in Ar­tifi­cial In­tel­li­gence: ar­gu­ments against the AI pause

Arturo Macias18 Apr 2023 14:35 UTC
6 points
3 comments5 min readEA link

An­nounc­ing the Con­fido app: bring­ing fore­cast­ing to everyone

Blanka16 May 2023 10:25 UTC
104 points
2 comments9 min readEA link

Neil Sin­hab­abu on metaethics and world gov­ern­ment for re­duc­ing ex­is­ten­tial risk

Gus Docker2 Feb 2022 20:23 UTC
7 points
0 comments83 min readEA link
(www.utilitarianpodcast.com)

Notes on Hen­rich’s “The WEIRDest Peo­ple in the World” (2020)

MichaelA🔸25 Mar 2021 5:04 UTC
44 points
4 comments3 min readEA link

Vague jus­tifi­ca­tions for longter­mist as­sump­tions?

Venky102411 May 2024 9:20 UTC
30 points
9 comments7 min readEA link

Ap­pli­ca­tions for the 2026 Tar­bell Fel­low­ship are open

Tarbell Center for AI Journalism12 Nov 2025 11:46 UTC
22 points
0 comments1 min readEA link

Coach­ing match­mak­ing is now open: in­vest in com­mu­nity well­ness by in­vest­ing in yourself

Tee17 Jul 2023 11:17 UTC
39 points
0 comments20 min readEA link

How to re­con­sider a prediction

Noah Scales25 Oct 2022 21:28 UTC
2 points
2 comments4 min readEA link

McGill EA x Law Pre­sents: Ex­is­ten­tial Ad­vo­cacy with Prof. John Bliss

McGill EA x Law10 Jan 2023 23:56 UTC
3 points
0 comments1 min readEA link

Case stud­ies on so­cial-welfare-based stan­dards in var­i­ous industries

Holden Karnofsky20 Jun 2024 13:33 UTC
73 points
2 comments1 min readEA link

[EAG talk] The like­li­hood and sever­ity of a US-Rus­sia nu­clear ex­change (Ro­driguez, 2019)

Will Aldred3 Jul 2022 13:53 UTC
32 points
0 comments2 min readEA link
(www.youtube.com)

Do We Have the Right to Shape the Fu­ture?

Krimsey24 Sep 2025 1:11 UTC
2 points
1 comment3 min readEA link

Boomerang—pro­to­col to dis­solve some com­mit­ment races

Filip Sondej30 May 2023 16:24 UTC
20 points
0 comments8 min readEA link
(www.lesswrong.com)

Quan­tum Im­mor­tal­ity: A Per­spec­tive if AI Doomers are Prob­a­bly Right

turchin7 Nov 2024 16:06 UTC
7 points
0 comments14 min readEA link

“Slower tech de­vel­op­ment” can be about or­der­ing, grad­u­al­ness, or dis­tance from now

MichaelA🔸14 Nov 2021 20:58 UTC
47 points
3 comments4 min readEA link

EA is be­com­ing in­creas­ingly in­ac­cessible, at the worst pos­si­ble time

Ann Garth 🔸22 Jul 2022 15:40 UTC
78 points
13 comments15 min readEA link

The AGI Awak­e­ness valley of doom and three path­ways to slowing

GideonF28 Jul 2025 18:46 UTC
16 points
0 comments16 min readEA link
(open.substack.com)

First S-Risk In­tro Seminar

stefan.torges8 Dec 2020 9:23 UTC
70 points
2 comments1 min readEA link

Ja­panese or­ga­ni­za­tion for atomic bomb sur­vivors Nihon Hi­dankyo has been awarded the No­bel Peace Prize

Jonny Spicer 🔸11 Oct 2024 11:31 UTC
6 points
1 comment1 min readEA link

In­tro­duc­ing the Fund for Align­ment Re­search (We’re Hiring!)

AdamGleave6 Jul 2022 2:00 UTC
74 points
3 comments4 min readEA link

Good v. Op­ti­mal Futures

RobertHarling11 Dec 2020 16:38 UTC
38 points
10 comments6 min readEA link

War in Taiwan and AI Timelines

Jordan_Schneider24 Aug 2022 2:24 UTC
19 points
3 comments8 min readEA link
(www.chinatalk.media)

An­nounc­ing the Prague com­mu­nity space: Fixed Point

Epistea22 May 2023 5:52 UTC
69 points
2 comments3 min readEA link

Aus­trali­ans call for AI safety to be taken seriously

Alexander Saeri21 Jul 2023 1:16 UTC
51 points
1 comment1 min readEA link

Kel­sey Piper’s re­cent in­ter­view of SBF

Agustín Covarrubias 🔸16 Nov 2022 20:30 UTC
292 points
155 comments2 min readEA link
(www.vox.com)

Talk - ‘Car­ing for the Far Fu­ture’

Yadav9 Dec 2022 16:58 UTC
13 points
0 comments1 min readEA link
(youtu.be)

Sym­bio­sis, not al­ign­ment, as the goal for liberal democ­ra­cies in the tran­si­tion to ar­tifi­cial gen­eral intelligence

simonfriederich17 Mar 2023 13:04 UTC
18 points
2 comments24 min readEA link
(rdcu.be)

Rus­sia-Ukraine Con­flict: Fore­cast­ing Nu­clear Risk in 2022

Metaculus24 Mar 2022 21:03 UTC
23 points
1 comment12 min readEA link

Ob­ser­va­to­rio de Ries­gos Catas­trófi­cos Globales (ORCG) Re­cap 2023

JorgeTorresC14 Dec 2023 14:27 UTC
75 points
0 comments3 min readEA link
(riesgoscatastroficosglobales.com)

Is Paus­ing AI Pos­si­ble?

Richard Annilo9 Oct 2024 13:22 UTC
89 points
4 comments18 min readEA link

Does cli­mate sci­ence fo­cus on the right tem­per­a­ture range?

FJehn26 Nov 2025 15:56 UTC
30 points
1 comment11 min readEA link
(existentialcrunch.substack.com)

Steer­ing AI to care for an­i­mals, and soon

Andrew Critch14 Jun 2022 1:13 UTC
239 points
37 comments1 min readEA link

The Un­know­able Catastrophe

Aino6 Jul 2023 15:37 UTC
3 points
0 comments3 min readEA link

Should there be just one west­ern AGI pro­ject?

rosehadshar4 Dec 2024 14:41 UTC
49 points
3 comments15 min readEA link
(www.forethought.org)

The ‘Bad Par­ent’ Prob­lem: Why Hu­man So­ciety Com­pli­cates AI Alignment

Beyond Singularity5 Apr 2025 21:08 UTC
11 points
1 comment3 min readEA link

AI Might Kill Every­one

Bentham's Bulldog5 Jun 2025 15:36 UTC
20 points
1 comment4 min readEA link

Fu­ture Mat­ters #5: su­per­vol­ca­noes, AI takeover, and What We Owe the Future

Pablo14 Sep 2022 13:02 UTC
31 points
5 comments18 min readEA link

[Question] Will you fund a fungi surveillance study?

Nnaemeka Emmanuel Nnadi7 Sep 2023 20:42 UTC
7 points
2 comments1 min readEA link

Re­sponse to Tor­res’ ‘The Case Against Longter­mism’

HaydnBelfield8 Mar 2021 18:09 UTC
138 points
73 comments5 min readEA link

Gover­nance Strate­gies for Dual-Use Re­search of con­cern: Balanc­ing Scien­tific Progress and Global Security

Diane Letourneur28 Jun 2024 17:01 UTC
9 points
1 comment13 min readEA link

Nu­clear Es­pi­onage and AI Governance

GAA4 Oct 2021 18:21 UTC
32 points
3 comments24 min readEA link

The Bunker Fallacy

SimonKS10 Apr 2024 8:33 UTC
12 points
11 comments6 min readEA link

Differ­en­tial Tech­nolog­i­cal Devel­op­ment: Some Early Thinking

Nick_Beckstead29 Sep 2015 10:23 UTC
4 points
0 comments9 min readEA link
(blog.givewell.org)

Is Ge­netic Code Swap­ping as risky as it seems?

Invert_DOG_about_centre_O12 Jan 2025 18:38 UTC
23 points
2 comments10 min readEA link

In­ves­ti­gat­ing the Long Reflection

Yannick_Muehlhaeuser24 Jul 2023 16:26 UTC
38 points
3 comments12 min readEA link

Im­prov­ing long-run civil­i­sa­tional robustness

RyanCarey10 May 2016 11:14 UTC
9 points
6 comments3 min readEA link

The case for not in­vad­ing Crimea

kbog19 Jan 2023 6:37 UTC
12 points
16 comments19 min readEA link

What is it to solve the al­ign­ment prob­lem?

Joe_Carlsmith13 Feb 2025 18:42 UTC
25 points
1 comment19 min readEA link
(joecarlsmith.substack.com)

An­nounc­ing our book, After the Spike, and an Op­por­tu­nity to Help

deanspears7 Apr 2025 15:59 UTC
68 points
4 comments4 min readEA link

Sen­tience In­sti­tute 2023 End of Year Summary

MichaelDello27 Nov 2023 12:11 UTC
29 points
0 comments5 min readEA link
(www.sentienceinstitute.org)

Former Is­raeli Prime Minister Speaks About AI X-Risk

Yonatan Cale20 May 2023 12:09 UTC
73 points
6 comments1 min readEA link

Sce­nario plan­ning for AI x-risk

Corin Katzke10 Feb 2024 0:07 UTC
41 points
0 comments15 min readEA link
(www.convergenceanalysis.org)

Op­ti­mistic Longter­mism and Sus­pi­cious Judg­ment Calls

Jim Buhler24 Mar 2025 15:55 UTC
24 points
30 comments4 min readEA link

Cen­tre for Ex­plo­ra­tory Altru­ism Re­search (CEARCH)

Joel Tan🔸18 Oct 2022 7:23 UTC
125 points
15 comments5 min readEA link

Point-by-point re­ply to Yud­kowsky on UFOs

Magnus Vinding19 Dec 2024 21:24 UTC
4 points
0 comments9 min readEA link

Pod­cast: Samo Burja on the war in Ukraine, avoid­ing nu­clear war and the longer term implications

Gus Docker11 Mar 2022 18:50 UTC
4 points
6 comments14 min readEA link
(www.utilitarianpodcast.com)

Is space coloniza­tion de­sir­able? Re­view of Dark Sk­ies: Space Ex­pan­sion­ism, Plane­tary Geopoli­tics, and the Ends of Humanity

sphor7 Oct 2022 12:26 UTC
13 points
3 comments3 min readEA link
(bostonreview.net)

The Will to Create the Fu­ture, Not Just Pre­dict It

keivn25 Oct 2025 20:49 UTC
2 points
1 comment2 min readEA link

The ‘Ne­glected Ap­proaches’ Ap­proach: AE Stu­dio’s Align­ment Agenda

Marc Carauleanu18 Dec 2023 21:13 UTC
21 points
0 comments12 min readEA link

Open Cli­mate Data as a pos­si­ble cause area, Open Philanthropy

Ben Yeoh3 Jul 2022 12:47 UTC
4 points
0 comments12 min readEA link

[Question] Books /​ book re­views on nu­clear risk, WMDs, great power war?

MichaelA🔸15 Dec 2020 1:40 UTC
16 points
16 comments1 min readEA link

What are some good books about AI safety?

Vishakha Agrawal17 Feb 2025 11:54 UTC
7 points
0 comments3 min readEA link
(aisafety.info)

In­tro­duc­ing the Cen­ter for AI Policy (& we’re hiring!)

Thomas Larsen28 Aug 2023 21:27 UTC
53 points
1 comment2 min readEA link
(www.aipolicy.us)

In­tel­li­gence failures and a the­ory of change for fore­cast­ing

Nathan_Barnard31 Aug 2022 2:05 UTC
12 points
1 comment10 min readEA link

Ex­am­ple syl­labus “Ex­is­ten­tial Risks”

simonfriederich3 Jul 2021 9:23 UTC
15 points
2 comments10 min readEA link

In­com­men­su­ra­bil­ity and In­tran­si­tivity in Longter­mism: A Plu­ral­ist Reframe (with a note on why art mat­ters)

Ben Yeoh3 Oct 2025 11:39 UTC
5 points
0 comments21 min readEA link

AI-nu­clear in­te­gra­tion: ev­i­dence of au­toma­tion bias from hu­mans and LLMs [re­search sum­mary]

Tao27 Apr 2024 21:59 UTC
17 points
2 comments12 min readEA link

Is Civ­i­liza­tion on the Brink of Col­lapse? - Kurzgesagt

GabeM16 Aug 2022 20:06 UTC
33 points
5 comments1 min readEA link
(www.youtube.com)

Catas­tro­phe with­out Agency

ZenoSr20 Oct 2025 16:42 UTC
3 points
0 comments12 min readEA link

AI Alter­na­tive Fu­tures: Ex­plo­ra­tory Sce­nario Map­ping for Ar­tifi­cial In­tel­li­gence Risk—Re­quest for Par­ti­ci­pa­tion [Linkpost]

Kiliank9 May 2022 19:53 UTC
17 points
2 comments8 min readEA link

Re­build­ing af­ter apoc­a­lypse: What 13 ex­perts say about bounc­ing back

80000_Hours15 Jul 2025 19:39 UTC
7 points
0 comments2 min readEA link

Overview of Trans­for­ma­tive AI Mi­suse Risks

SammyDMartin11 Dec 2024 11:04 UTC
12 points
0 comments2 min readEA link
(longtermrisk.org)

How would a nu­clear war be­tween Rus­sia and the US af­fect you per­son­ally?

Max Görlitz27 Jul 2023 13:06 UTC
13 points
4 comments1 min readEA link
(www.youtube.com)

EA and the cur­rent fund­ing situation

William_MacAskill10 May 2022 2:26 UTC
568 points
185 comments24 min readEA link

Ex­is­ten­tial Hope and Ex­is­ten­tial Risk: Ex­plor­ing the value of op­ti­mistic ap­proaches to shap­ing the long-term future

Vilhelm Skoglund27 Oct 2023 9:07 UTC
35 points
3 comments24 min readEA link

Ex­am­in­ing path­ways through which nar­row AI sys­tems might in­crease the like­li­hood of nu­clear war

oeg14 Jun 2023 13:54 UTC
8 points
2 comments2 min readEA link

Last days to ap­ply to EAGxLATAM 2024

Daniela Tiznado17 Jan 2024 20:24 UTC
16 points
0 comments1 min readEA link

AI al­ign­ment re­searchers may have a com­par­a­tive ad­van­tage in re­duc­ing s-risks

Lukas_Gloor15 Feb 2023 13:01 UTC
79 points
5 comments13 min readEA link

Scal­able And Trans­fer­able Black-Box Jailbreaks For Lan­guage Models Via Per­sona Modulation

sjp7 Nov 2023 18:00 UTC
10 points
0 comments2 min readEA link
(arxiv.org)

Against GDP as a met­ric for timelines and take­off speeds

kokotajlod29 Dec 2020 17:50 UTC
47 points
6 comments14 min readEA link

[Question] Retroac­tive Fund­ing for Alignment

Prometheus25 Oct 2025 4:09 UTC
18 points
2 comments1 min readEA link

Is it pos­si­bly de­sir­able for sen­tient ASI to ex­ter­mi­nate hu­mans?

Duckruck18 Jun 2024 15:20 UTC
0 points
4 comments1 min readEA link

Is it eth­i­cal to ex­pand nu­clear en­ergy use?

simonfriederich5 Nov 2022 10:38 UTC
12 points
5 comments3 min readEA link

Paper sum­mary––Pro­tect­ing fu­ture gen­er­a­tions: A global sur­vey of le­gal academics

rileyharris5 Sep 2023 10:29 UTC
25 points
1 comment3 min readEA link
(www.legalpriorities.org)

EA Sum­mit Bo­gota ap­pli­ca­tions are open!

Manuela García30 Sep 2025 2:05 UTC
17 points
2 comments2 min readEA link

II. Trig­ger­ing The Race

Maynk0224 Oct 2023 18:45 UTC
6 points
1 comment4 min readEA link

We can’t put numbers on everything, and trying to do so weakens our collective epistemics

ConcernedEAs8 Mar 2023 15:09 UTC
9 points
0 comments11 min readEA link

Longter­mists Should Worry About AI Not Be­ing Developed

DanteTheAbstract17 Oct 2025 18:32 UTC
10 points
0 comments4 min readEA link

900+ Fore­cast­ers on Whether Rus­sia Will In­vade Ukraine

Metaculus19 Feb 2022 13:29 UTC
51 points
0 comments4 min readEA link
(metaculus.medium.com)

AI Safety in a Vuln­er­a­ble World: Re­quest­ing Feed­back on Pre­limi­nary Thoughts

Jordan Arel6 Dec 2022 22:36 UTC
5 points
4 comments3 min readEA link

Los­ing faith in big tech altruism

sammyboiz🔸22 May 2024 4:49 UTC
7 points
1 comment1 min readEA link

Nu­clear win­ter scepticism

Vasco Grilo🔸13 Aug 2023 10:55 UTC
110 points
42 comments10 min readEA link
(www.navalgazing.net)

Me­tac­u­lus Launches Fu­ture of AI Series, Based on Re­search Ques­tions by Arb

christian13 Mar 2024 21:14 UTC
34 points
0 comments1 min readEA link
(www.metaculus.com)

[Question] Why al­tru­ism at all?

Singleton12 Jul 2020 22:04 UTC
−2 points
1 comment1 min readEA link

Web­site con­cept for vi­su­al­iz­ing ex­is­ten­tial risk—look­ing for feed­back/​funding

Ville Seppälä5 Jul 2025 11:41 UTC
3 points
0 comments3 min readEA link

[Question] How many times would nu­clear weapons have been used if ev­ery state had them since 1950?

eca4 May 2021 15:34 UTC
16 points
13 comments1 min readEA link

Ask a Nu­clear Expert

Group Organizer3 Mar 2022 11:28 UTC
5 points
0 comments1 min readEA link

Shift Re­sources to Ad­vo­cacy Now (Post 4 of 7 on AI Gover­nance)

Jason Green-Lowe28 May 2025 1:19 UTC
58 points
5 comments32 min readEA link

[Question] Odds of re­cov­er­ing val­ues af­ter col­lapse?

Will Aldred24 Jul 2022 18:20 UTC
66 points
13 comments3 min readEA link

Against the Open Source /​ Closed Source Di­chotomy: Reg­u­lated Source as a Model for Re­spon­si­ble AI Development

Alexander Herwix 🔸4 Sep 2023 20:23 UTC
5 points
1 comment6 min readEA link

Nu­clear Pre­pared­ness Guide

Fin8 Mar 2022 17:04 UTC
101 points
14 comments11 min readEA link

Forethought: A new AI macros­trat­egy group

Amrit Sidhu-Brar 🔸11 Mar 2025 15:36 UTC
174 points
10 comments3 min readEA link

W-Risk and the Tech­nolog­i­cal Wavefront (Nell Wat­son)

Aaron Gertler 🔸11 Nov 2018 23:22 UTC
9 points
1 comment1 min readEA link

Pos­si­ble way of re­duc­ing great power war prob­a­bil­ity?

Denkenberger🔸28 Nov 2019 4:27 UTC
33 points
2 comments2 min readEA link

Co­op­er­a­tion for AI safety must tran­scend geopoli­ti­cal interference

Matrice Jacobine🔸🏳️‍⚧️16 Feb 2025 18:18 UTC
9 points
0 comments1 min readEA link
(www.scmp.com)

Promoting compassionate longtermism · jonleighton · 7 Dec 2022 14:26 UTC · 117 points · 5 comments · 12 min read
How to engage with AI 4 Social Justice actors · TomWestgarth · 26 Apr 2022 8:39 UTC · 13 points · 5 comments · 1 min read
The Intelligence Curse: an essay series · L Rudolf L · 24 Apr 2025 12:59 UTC · 22 points · 1 comment · 2 min read
XPT forecasts on (some) biological anchors inputs · Forecasting Research Institute · 24 Jul 2023 13:32 UTC · 37 points · 2 comments · 12 min read
[Question] Putting People First in a Culture of Dehumanization · jhealy · 22 Jul 2020 3:31 UTC · 16 points · 3 comments · 1 min read
The established nuke risk field deserves more engagement · Ilverin · 4 Jul 2022 19:39 UTC · 17 points · 12 comments · 1 min read
Google AI Accelerator Open Call · Rochelle Harris · 22 Jan 2025 16:50 UTC · 10 points · 1 comment · 1 min read
[Question] How binary is longterm value? · Vasco Grilo🔸 · 1 Nov 2022 15:21 UTC · 13 points · 15 comments · 1 min read
The great energy descent—Part 1: Can renewables replace fossil fuels? · CB🔸 · 31 Aug 2022 21:51 UTC · 46 points · 2 comments · 22 min read
Against Longtermism: I welcome our robot overlords, and you should too! · MattBall · 2 Jul 2022 2:05 UTC · 5 points · 6 comments · 6 min read
I watched coordination collapse in real time (Ukraine 2004–2025) · Artem Rudneff · 26 Nov 2025 16:41 UTC · 8 points · 0 comments · 1 min read
[Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions) · david_reinstein · 28 Apr 2022 15:49 UTC · 37 points · 21 comments · 7 min read
On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction · poliboni · 30 Mar 2020 20:11 UTC · 33 points · 1 comment · 1 min read
The Compendium, A full argument about extinction risk from AGI · adamShimi · 31 Oct 2024 12:02 UTC · 9 points · 1 comment · 2 min read · (www.thecompendium.ai)
[Question] What previous work has been done on factors that affect the pace of technological development? · Megan Kinniment · 27 Apr 2021 18:43 UTC · 21 points · 6 comments · 1 min read
[Question] EA views on the AUKUS security pact? · DavidZhang · 29 Sep 2021 8:24 UTC · 28 points · 14 comments · 1 min read
Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism” · Jens Aslaug 🔸 · 12 Sep 2023 13:11 UTC · 51 points · 20 comments · 17 min read
‘GiveWell for AI Safety’: Lessons learned in a week · Lydia Nottingham · 30 May 2025 16:10 UTC · 45 points · 1 comment · 6 min read
Technical Report on Mirror Bacteria: Feasibility and Risks · Aaron Gertler 🔸 · 12 Dec 2024 19:07 UTC · 249 points · 18 comments · 1 min read · (purl.stanford.edu)
What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research) · Luisa_Rodriguez · 19 Oct 2022 17:35 UTC · 83 points · 13 comments · 32 min read
[Question] What would “doing enough” to safeguard the long-term future look like? · HStencil · 22 Apr 2020 21:47 UTC · 20 points · 0 comments · 1 min read
Are we already past the precipice? · Dem0sthenes · 10 Aug 2022 4:01 UTC · 1 point · 5 comments · 2 min read
[Question] What is the strongest case for nuclear weapons? · Garrison · 12 Apr 2022 19:32 UTC · 6 points · 3 comments · 1 min read
Four Concerns Regarding Longtermism · Pat Andriola · 6 Jun 2022 5:42 UTC · 82 points · 15 comments · 7 min read
Bernie Sanders (I-VT) mentions AI loss of control risk in Gizmodo interview · Matrice Jacobine🔸🏳️‍⚧️ · 14 Jul 2025 14:47 UTC · 26 points · 0 comments · 1 min read · (gizmodo.com)
Towards the Operationalization of Philosophy & Wisdom · Thane Ruthenis · 28 Oct 2024 19:45 UTC · 1 point · 1 comment · 33 min read · (aiimpacts.org)
Five Areas I Wish EAs Gave More Focus · Prometheus · 27 Oct 2022 6:13 UTC · 8 points · 14 comments · 4 min read
Two Reasons For Restarting the Testing of Nuclear Weapons · niplav · 8 Aug 2023 7:50 UTC · 17 points · 2 comments · 5 min read
Funding for programs and events on global catastrophic risk, effective altruism, and other topics · GCR Capacity Building team (Open Phil) · 13 Aug 2024 13:13 UTC · 46 points · 0 comments · 2 min read
2016 AI Risk Literature Review and Charity Comparison · Larks · 13 Dec 2016 4:36 UTC · 57 points · 12 comments · 28 min read
[Question] Looking for collaborators after last 80k podcast with Tristan Harris · Jan-Willem · 7 Dec 2020 22:23 UTC · 19 points · 7 comments · 2 min read
An Overview of Catastrophic AI Risks · Center for AI Safety · 15 Aug 2023 21:52 UTC · 37 points · 1 comment · 13 min read · (www.safe.ai)
Input sought on next steps for the XPT (also, we’re hiring!) · Forecasting Research Institute · 29 Sep 2023 22:26 UTC · 34 points · 3 comments · 5 min read

A perspective on the danger/hypocrisy in prioritizing one single long term risk, and ignore every other risk · yz · 28 May 2025 5:29 UTC · 5 points · 0 comments · 3 min read
Unveiling the Longtermism Framework in Islam: Urging Muslims to Embrace Future-Oriented Values through ‘Islamic Longtermism’ · Zayn A · 15 Aug 2023 11:34 UTC · 83 points · 9 comments · 20 min read
Published report: Pathways to short TAI timelines · Zershaaneh Qureshi · 20 Feb 2025 22:10 UTC · 47 points · 2 comments · 17 min read · (www.convergenceanalysis.org)
How AI could slow scientific progress—linkpost · Josh Piecyk 🔹 · 17 Jul 2025 17:49 UTC · 35 points · 3 comments · 22 min read · (www.aisnakeoil.com)
[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety · Otto · 24 Aug 2023 16:01 UTC · 14 points · 2 comments · 5 min read
Frontier AI systems have surpassed the self-replicating red line · Greg_Colbourn ⏸️ · 10 Dec 2024 16:33 UTC · 25 points · 14 comments · 1 min read · (github.com)
Eli’s review of “Is power-seeking AI an existential risk?” · elifland · 30 Sep 2022 12:21 UTC · 58 points · 3 comments · 3 min read · (docs.google.com)
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy · Garrison · 10 Feb 2024 19:52 UTC · 286 points · 20 comments · 3 min read · (garrisonlovely.substack.com)
[Question] How have nuclear winter models evolved? · Jordan Arel · 11 Sep 2022 22:40 UTC · 14 points · 3 comments · 1 min read
From Crisis to Control: Establishing a Resilient Incident Response Framework for Deployed AI Models · KevinN · 31 Jan 2025 13:06 UTC · 10 points · 1 comment · 6 min read · (www.techpolicy.press)
AI Existential Safety Fellowships · mmfli · 27 Oct 2023 12:14 UTC · 15 points · 1 comment · 1 min read
Samotsvety Nuclear Risk update October 2022 · NunoSempere · 3 Oct 2022 18:10 UTC · 262 points · 52 comments · 16 min read
ISYP Third Nuclear Age Conference, New Age, New Thinking: Challenges of a Third Nuclear Age, 31 October–2 November 2022, in Berlin, Germany · Daniel Ajudeonu · 11 Aug 2022 9:43 UTC · 4 points · 0 comments · 5 min read
IV. Parallels and Review · Maynk02 · 27 Feb 2024 23:10 UTC · 7 points · 1 comment · 8 min read · (open.substack.com)
Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization · Maxime Riché 🔸 · 17 Mar 2025 13:11 UTC · 19 points · 0 comments · 12 min read
Sixty years after the Cuban Missile Crisis, a new era of global catastrophic risks · christian.r · 13 Oct 2022 11:25 UTC · 31 points · 0 comments · 1 min read · (thebulletin.org)
AI Safety Collab 2025 – Local Organizer Sign-ups Open · Evander H. 🔸 · 12 Feb 2025 11:27 UTC · 15 points · 0 comments · 1 min read
Constructive Discussion and Thinking Methodology for Severe Situations including Existential Risks · Aino · 8 Jul 2023 0:04 UTC · 1 point · 0 comments · 7 min read
Quotes about the long reflection · MichaelA🔸 · 5 Mar 2020 7:48 UTC · 55 points · 14 comments · 13 min read
Chip Production Policy Won’t Matter as Much as You’d Think · Davidmanheim · 31 Aug 2025 18:58 UTC · 33 points · 8 comments · 5 min read
[Question] Does Utilitarian Longtermism Imply Directed Panspermia? · Ahrenbach · 24 Apr 2020 18:15 UTC · 0 points · 17 comments · 1 min read
Tyler Cowen’s challenge to develop an ‘actual mathematical model’ for AI X-Risk · Joe Brenton · 16 May 2023 16:55 UTC · 20 points · 4 comments · 1 min read
The Verification Gap: A Scientific Warning on the Limits of AI Safety · Ihor Ivliev · 24 Jun 2025 19:08 UTC · 3 points · 0 comments · 2 min read
[Creative Writing Contest] The Puppy Problem · Louis · 13 Oct 2021 14:01 UTC · 13 points · 0 comments · 7 min read
Linkpost: Redwood Research reading list · Julian Stastny · 10 Jul 2025 19:21 UTC · 18 points · 0 comments · 1 min read · (redwoodresearch.substack.com)
Does natural selection favor AIs over humans? · cdkg · 3 Oct 2024 19:02 UTC · 21 points · 0 comments · 1 min read · (link.springer.com)
Distinguishing Between Idealism and Realism in International Relations · Siya Sawhney · 18 Jul 2024 16:23 UTC · 5 points · 2 comments · 3 min read
[Question] Can increasing Trust amongst humans be considered our greatest priority? · Firas Najjar · 24 Aug 2023 8:45 UTC · 4 points · 4 comments · 1 min read
Case study: Safety standards on California utilities to prevent wildfires · Coby Joseph · 6 Dec 2023 10:32 UTC · 7 points · 1 comment · 26 min read
I’m NOT against Artificial Intelligence · Victoria Dias · 24 Apr 2025 18:02 UTC · 6 points · 1 comment · 18 min read
Robert Wiblin: Making sense of long-term indirect effects · EA Global · 6 Aug 2016 0:40 UTC · 14 points · 0 comments · 17 min read · (www.youtube.com)

Interventions to Reduce Risk for Pathogen Spillover · JMonty🔸 · 22 Apr 2023 14:29 UTC · 13 points · 0 comments · 3 min read · (wwwnc.cdc.gov)
Safety Sells: For-profit investing into civilizational resilience (food security, biosecurity) · FGH · 3 Jan 2023 12:24 UTC · 30 points · 4 comments · 6 min read
[Question] Existential risk management in central government? Where is it? · WillPearson · 4 Mar 2024 16:22 UTC · 6 points · 2 comments · 1 min read
New Artificial Intelligence quiz: can you beat ChatGPT? · AndreFerretti · 3 Mar 2023 15:46 UTC · 29 points · 3 comments · 1 min read
The standard case for delaying AI appears to rest on non-utilitarian assumptions · Matthew_Barnett · 11 Feb 2025 4:04 UTC · 15 points · 57 comments · 10 min read
Raising the voices that actually count · Kim Holder · 13 Jun 2023 19:21 UTC · 2 points · 3 comments · 2 min read
[Question] Tractors that need to be connected to function? · Miquel Banchs-Piqué (prev. mikbp) · 31 Oct 2022 20:42 UTC · 4 points · 2 comments · 1 min read
The Germy Paradox – The empty sky: A history of state biological weapons programs · eukaryote · 24 Sep 2019 5:26 UTC · 24 points · 0 comments · 1 min read · (eukaryotewritesblog.com)
Detection of Asymptomatically Spreading Pathogens · Jeff Kaufman 🔸 · 5 Dec 2024 19:17 UTC · 52 points · 6 comments · 7 min read
[Question] Should recent events make us more or less concerned about biorisk? · Linch · 19 Mar 2020 0:00 UTC · 23 points · 7 comments · 1 min read
The Base Rate of Longtermism Is Bad · ColdButtonIssues · 5 Sep 2022 13:29 UTC · 228 points · 27 comments · 7 min read
Preserving our heritage: Building a movement and a knowledge ark for current and future generations · rnk8 · 30 Nov 2023 10:15 UTC · −9 points · 0 comments · 12 min read
Biosecurity challenges posed by Dual-Use Research of Concern (DURC) · Byron Cohen · 1 Sep 2022 7:33 UTC · 12 points · 0 comments · 7 min read · (raisinghealth.substack.com)
Doubting Deterrence by Denial · C.K. · 20 Mar 2025 15:55 UTC · 4 points · 1 comment · 6 min read · (conradkunadu.substack.com)
High Impact Careers in Formal Verification: Artificial Intelligence · quinn · 5 Jun 2021 14:45 UTC · 28 points · 7 comments · 16 min read
Peter Wildeford on Forecasting Nuclear Risk and why EA should fund scalable non-profits · Michaël Trazzi · 13 Apr 2022 16:29 UTC · 9 points · 1 comment · 3 min read · (theinsideview.github.io)
Announcing the Prague Fall Season 2023 and the Epistea Residency Program · Epistea · 22 May 2023 5:52 UTC · 88 points · 2 comments · 4 min read
The Age of Disclosure · Michael_2358 🔸 · 7 Dec 2025 16:35 UTC · −9 points · 0 comments · 1 min read
Announcing AI Safety Support · Linda Linsefors · 19 Nov 2020 20:19 UTC · 55 points · 0 comments · 4 min read
EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report · t46 · 6 Jun 2023 11:58 UTC · 42 points · 5 comments · 8 min read
Question about terminology for lesser X-risks and S-risks · Laura Leighton · 8 Aug 2022 4:39 UTC · 9 points · 3 comments · 1 min read
What You Need to Refute Arguments With Astronomical Stakes · Bentham's Bulldog · 31 Oct 2025 15:46 UTC · 8 points · 13 comments · 11 min read
“Normal accidents” and AI systems · Eleni_A · 8 Aug 2022 18:43 UTC · 5 points · 1 comment · 1 min read · (www.achan.ca)
Open-source LLMs may prove Bostrom’s vulnerable world hypothesis · Roope Ahvenharju · 14 Apr 2023 9:25 UTC · 14 points · 2 comments · 1 min read
1st Alinha Hacka Recap: Reflecting on the Brazilian AI Alignment Hackathon · Thiago USP · 31 Jan 2024 10:38 UTC · 7 points · 0 comments · 2 min read
Jan Kulveit’s Corrigibility Thoughts Distilled · brook · 25 Aug 2023 13:42 UTC · 16 points · 0 comments · 5 min read · (www.lesswrong.com)
The future of nuclear war · turchin · 21 May 2022 8:00 UTC · 37 points · 2 comments · 34 min read
The convergent dynamic we missed · Remmelt · 12 Dec 2023 22:50 UTC · 2 points · 0 comments · 3 min read
The Engine of Foreclosure · Ihor Ivliev · 5 Jul 2025 15:26 UTC · 0 points · 0 comments · 25 min read
US public perception of CAIS statement and the risk of extinction · Jamie E · 22 Jun 2023 16:39 UTC · 126 points · 4 comments · 9 min read
Database of research projects for volunteers in food security during global catastrophes (ALLFED) · JuanGarcia · 26 Sep 2024 19:39 UTC · 47 points · 1 comment · 1 min read
The company that builds the UK’s nuclear weapons is hiring for roles related to wargaming · Will Howard🔹 · 6 Jul 2023 20:25 UTC · 15 points · 0 comments · 1 min read
Announcing Open Philanthropy’s AI governance and policy RFP · JulianHazell · 17 Jul 2024 0:25 UTC · 73 points · 2 comments · 1 min read · (www.openphilanthropy.org)

[Crosspost] Relativistic Colonization · itaibn · 31 Dec 2020 2:30 UTC · 8 points · 7 comments · 4 min read
Against longtermism · Brian Lui · 11 Aug 2022 5:37 UTC · 38 points · 30 comments · 6 min read
[Question] How to disclose a new x-risk? · harsimony · 24 Aug 2022 1:35 UTC · 20 points · 9 comments · 1 min read
[Question] What’s the exact way you predict probability of AI extinction? · jackchang110 · 13 Jun 2023 15:11 UTC · 18 points · 7 comments · 1 min read
[Question] How might a misaligned Artificial Superintelligence break up a human being into usable electromagnetic energy? · Caruso · 5 Oct 2024 17:33 UTC · −5 points · 3 comments · 1 min read
Intent alignment without moral alignment probably leads to catastrophe · Alistair Stewart · 29 Aug 2025 17:21 UTC · 12 points · 0 comments · 5 min read
U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments · DannyBressler · 18 May 2023 6:37 UTC · 79 points · 6 comments · 6 min read
“The Vulnerable World Hypothesis” (Nick Bostrom’s new paper) · Hauke Hillebrandt · 9 Nov 2018 11:20 UTC · 24 points · 6 comments · 1 min read · (nickbostrom.com)
The Three Missing Pieces in Machine Ethics · JBug · 16 Nov 2025 21:26 UTC · 2 points · 0 comments · 2 min read
Fractal Governance: A Tractable, Neglected Approach to Existential Risk Reduction · WillPearson · 5 Mar 2025 19:57 UTC · 3 points · 1 comment · 3 min read
Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge] · christian.r · 24 Jan 2023 16:28 UTC · 83 points · 10 comments · 26 min read · (docs.google.com)
[Event] Building What the Future Needs: A curated conference in Berlin (Sep 6, 2025) for high-impact builders and researchers · Vasiliy Kondyrev · 8 Aug 2025 14:35 UTC · 21 points · 0 comments · 2 min read
The EA communities that emerged from the Chicxulub crater · Silvia Fernández · 14 Nov 2022 19:46 UTC · 16 points · 1 comment · 8 min read
AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative · Joan Rohlfing · 6 Dec 2021 20:58 UTC · 74 points · 35 comments · 1 min read
Quick takeaways from Griffes’ Doing good while clueless · Jim Buhler · 7 Aug 2025 9:59 UTC · 10 points · 4 comments · 2 min read
How to neglect the long term (Hayden Wilkinson) · Global Priorities Institute · 13 Oct 2023 11:09 UTC · 21 points · 0 comments · 5 min read · (globalprioritiesinstitute.org)
Cruxes for nuclear risk reduction efforts—A proposal · Sarah Weiler · 16 Nov 2022 6:03 UTC · 38 points · 0 comments · 24 min read
Sheltering humanity against x-risk: report from the SHELTER weekend · Janne M. Korhonen · 10 Oct 2022 15:09 UTC · 76 points · 3 comments · 5 min read
Why the expected numbers of farmed animals in the far future might be huge · Fai · 4 Mar 2022 19:59 UTC · 144 points · 29 comments · 16 min read
Best practices for risk communication from the academic literature · Existential Risk Communication Project · 12 Aug 2024 18:54 UTC · 9 points · 3 comments · 23 min read
AI-Relevant Regulation: Insurance in Safety-Critical Industries · SWK · 22 Jul 2023 17:52 UTC · 5 points · 0 comments · 6 min read
Exploring Blood-Based Biosurveillance, Part 1: Blood as a Sample Type · ljusten · 18 Jul 2024 13:10 UTC · 28 points · 2 comments · 10 min read · (naobservatory.org)
[Question] How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · Linch · 27 Nov 2021 23:46 UTC · 55 points · 87 comments · 1 min read
Disempowerment spirals as a likely mechanism for existential catastrophe · Raymond D · 10 Apr 2025 14:38 UTC · 15 points · 1 comment · 5 min read
Dani Nedal: Risks from great-power competition · EA Global · 13 Feb 2020 22:10 UTC · 20 points · 0 comments · 16 min read · (www.youtube.com)
AI Progress: The Game Show · Alex Arnett · 21 Apr 2023 16:47 UTC · 3 points · 0 comments · 2 min read
Reflective Alignment Architecture (RAA): A Framework for Moral Coherence in AI Systems · Nicolas • EnlightenedAI Research Lab · 21 Nov 2025 22:05 UTC · 1 point · 0 comments · 2 min read
Overview of Rethink Priorities’ work on risks from nuclear weapons · MichaelA🔸 · 10 Jun 2021 18:48 UTC · 43 points · 1 comment · 3 min read
Summary: the Global Catastrophic Risk Management Act of 2022 · Anthony Fleming · 23 Sep 2022 3:19 UTC · 35 points · 8 comments · 2 min read
How to make climate activists care for other existential risks · ExponentialDragon · 12 Mar 2023 9:05 UTC · 22 points · 7 comments · 2 min read
Global Challenges Project—Existential Risk Workshop · Emma Abele · 23 Sep 2022 22:13 UTC · 3 points · 0 comments · 1 min read
Exploration of Foods High in Vitamin D as a Dietary Strategy in the Event of Abrupt Sunlight Reduction · Juliana Alvarez · 19 Sep 2024 15:28 UTC · 13 points · 6 comments · 20 min read
AI scaling myths · Noah Varley🔸 · 27 Jun 2024 20:29 UTC · 30 points · 0 comments · 1 min read · (open.substack.com)

[Question] Urgency in the ITN framework · Shaïman Thürler · 24 Oct 2024 15:02 UTC · 12 points · 5 comments · 1 min read
Can Quantum Computation be used to mitigate existential risk? · Angus LaFemina · 18 Sep 2023 20:02 UTC · 10 points · 3 comments · 10 min read
Why EAs are skeptical about AI Safety · Lukas Trötzmüller🔸 · 18 Jul 2022 19:01 UTC · 293 points · 31 comments · 29 min read
Explore jobs in biosecurity, nuclear security, and climate change · EA Handbook · 18 Feb 2025 21:42 UTC · 4 points · 0 comments · 1 min read
Online Conference Opportunity for EA Grad Students · jonathancourtney · 21 Aug 2020 17:31 UTC · 8 points · 1 comment · 1 min read
Trump-Zelensky press conference just now · Profile2024 · 28 Feb 2025 18:52 UTC · −14 points · 0 comments · 1 min read
How to Donate to Alleviate Suffering in Gaza · Dawn Drescher · 10 Aug 2025 18:43 UTC · −23 points · 14 comments · 4 min read · (impartial-priorities.org)
Join 2000+ people donating through the ECF · Longview Philanthropy · 19 Nov 2025 13:09 UTC · 42 points · 6 comments · 2 min read
EA is more than longtermism · frances_lorenz · 3 May 2022 15:18 UTC · 160 points · 99 comments · 5 min read
Lunar Colony · purplepeople · 19 Dec 2016 16:43 UTC · 2 points · 26 comments · 1 min read
Extinction is probably only 10^10 times worse than one random death · Avik Garg · 28 Mar 2025 17:13 UTC · 6 points · 7 comments · 2 min read
Freaking out about x-risk doesn’t help; settle in for the long war · Holly Elmore ⏸️ 🔸 · 2 Nov 2024 0:00 UTC · 80 points · 2 comments · 2 min read
Carl Shulman on the common-sense case for existential risk work and its practical implications · 80000_Hours · 8 Oct 2021 13:43 UTC · 41 points · 2 comments · 149 min read
Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure · Otto · 8 May 2023 10:49 UTC · 28 points · 5 comments · 6 min read
System Level Safety Evaluations · markov · 29 Sep 2025 13:55 UTC · 3 points · 0 comments · 9 min read · (equilibria1.substack.com)
Considerations regarding being nice to AIs · Matt Alexander · 18 Nov 2025 13:27 UTC · 2 points · 0 comments · 15 min read · (www.lesswrong.com)
The Next Decades Will Plausibly Be Completely Insane · Bentham's Bulldog · 30 Nov 2025 18:43 UTC · 14 points · 3 comments · 14 min read
4 types of AGI selection, and how to constrain them · Remmelt · 9 Aug 2023 15:02 UTC · 7 points · 0 comments · 3 min read
Being the person who doesn’t launch nukes: new EA cause? · MichaelDickens · 6 Aug 2022 3:44 UTC · 9 points · 3 comments · 1 min read
Carnegie Council MisUnderstands Longtermism · Jeff A · 30 Sep 2022 2:57 UTC · 6 points · 8 comments · 1 min read · (www.carnegiecouncil.org)
Open call: “Existential risk of AI: technical conditions” · miller-max · 14 Apr 2025 14:47 UTC · 15 points · 1 comment · 1 min read
[Link post] Promising Paths to Alignment—Connor Leahy | Talk · frances_lorenz · 14 May 2022 15:58 UTC · 17 points · 0 comments · 1 min read
My first effective altruism conference: 10 learnings, my 121s and next steps · Milan.Patel · 21 May 2022 8:51 UTC · 10 points · 3 comments · 4 min read
Introducing WAIT to Save Humanity · carter allen🔸 · 1 Apr 2025 21:36 UTC · 22 points · 1 comment · 3 min read
Mauhn Releases AI Safety Documentation · Berg Severens · 2 Jul 2021 12:19 UTC · 4 points · 2 comments · 1 min read
Talk: Longtermism, the “Spirit” of Digital Capitalism · ludwigbald · 5 Jan 2025 14:27 UTC · −2 points · 1 comment · 1 min read · (media.ccc.de)
“Guardianes de Derecho” Podcast: Highlighting the role of law in Managing Global Catastrophic Risks to latam law students · Alba del Valle Moreno Salazar · 13 Aug 2024 13:14 UTC · 7 points · 0 comments · 9 min read
Help with the Forum; wiki editing, giving feedback, moderation, and more · Lizka · 20 Apr 2022 12:58 UTC · 88 points · 6 comments · 3 min read
Replicating AI Debate · Anthony Fleming · 1 Feb 2025 23:19 UTC · 9 points · 0 comments · 5 min read
Exploring Blood-Based Biosurveillance, Part 3: The Blood Virome · ljusten · 13 Feb 2025 17:51 UTC · 27 points · 1 comment · 14 min read · (naobservatory.org)
Give Neo a Chance · ank · 6 Mar 2025 14:35 UTC · 1 point · 3 comments · 7 min read
Gaia Network: An Illustrated Primer · Roman Leventov · 26 Jan 2024 11:55 UTC · 4 points · 4 comments · 15 min read
Mapping out collapse research · FJehn · 7 Jun 2023 12:10 UTC · 18 points · 2 comments · 11 min read · (existentialcrunch.substack.com)

Loop-mediated isothermal amplification (LAMP) for pandemic pathogen diagnostics: How it differs from PCR and why it isn’t more widely used · Julia Niggemeyer · 2 Sep 2024 0:17 UTC · 21 points · 0 comments · 13 min read
Prometheus Unleashed: Making sense of information hazards · basil.icious · 15 Feb 2023 6:44 UTC · 0 points · 0 comments · 4 min read · (basil08.github.io)
[UPDATE] From Comfort Zone to Frontiers of Impact: Pursuing A Late-Career Shift to Existential Risk Reduction · Jim Chapman · 4 Mar 2025 21:28 UTC · 252 points · 13 comments · 16 min read
The Threat of Climate Change Is Exaggerated · Samrin Saleem · 29 Sep 2023 18:49 UTC · 13 points · 16 comments · 14 min read
A note about differential technological development · So8res · 24 Jul 2022 23:41 UTC · 58 points · 8 comments · 6 min read
If you are too stressed, walk away from the front lines · Neil Warren · 12 Jun 2023 21:01 UTC · 7 points · 2 comments · 4 min read
New Working Paper Series of the Legal Priorities Project · Legal Priorities Project · 18 Oct 2021 10:30 UTC · 60 points · 0 comments · 9 min read
Research project idea: food stockpiling as a GCR intervention · Will Howard🔹 · 12 Mar 2024 12:59 UTC · 8 points · 5 comments · 3 min read
I want Future Perfect, but for science publications · James Lin · 8 Mar 2022 17:09 UTC · 67 points · 8 comments · 5 min read
Where are the red lines for AI? · Karl von Wendt · 5 Aug 2022 9:41 UTC · 13 points · 3 comments · 6 min read
Understanding how hard alignment is may be the most important research direction right now · Aron · 7 Jun 2023 19:05 UTC · 26 points · 3 comments · 6 min read · (coordinationishard.substack.com)
How democracy ends: a review and reevaluation · richard_ngo · 24 Nov 2018 17:41 UTC · 27 points · 2 comments · 6 min read · (thinkingcomplete.blogspot.com)
Things usually end slowly · OllieBase · 7 Jun 2022 17:00 UTC · 76 points · 14 comments · 7 min read
Notes on “the hot mess theory of AI misalignment” · JakubK · 21 Apr 2023 10:07 UTC · 44 points · 3 comments · 5 min read · (sohl-dickstein.github.io)
Does the US nuclear policy still target cities? · Jeffrey Ladish · 2 Oct 2019 17:46 UTC · 32 points · 0 comments · 10 min read
Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In · Richard R · 2 Sep 2022 7:53 UTC · 58 points · 10 comments · 16 min read
ASI existential risk: reconsidering alignment as a goal · Matrice Jacobine🔸🏳️‍⚧️ · 15 Apr 2025 13:36 UTC · 27 points · 3 comments · 1 min read · (michaelnotebook.com)
A Doctrine of Strategic Persistence: A Diagnostic and Operational Framework for Navigating Systemic Risk · Ihor Ivliev · 31 Jul 2025 15:05 UTC · 1 point · 0 comments · 58 min read
Longtermism and the Problem of Alienation: A Response to “Authenticity, Meaning, and Alienation: Reasons to Care Less About Far-Future People” · Simran Puri · 21 Oct 2025 16:17 UTC · 1 point · 0 comments · 17 min read
The Game Board has been Flipped: Now is a good time to rethink what you’re doing · LintzA · 28 Jan 2025 21:20 UTC · 391 points · 69 comments · 13 min read
Overview of the Pathogen Biosurveillance Landscape · Brianna Gopaul · 9 Jan 2023 6:05 UTC · 54 points · 4 comments · 20 min read
CEEALAR’s Theory of Change · CEEALAR · 19 Dec 2023 20:21 UTC · 51 points · 5 comments · 3 min read
The future of humanity · Dem0sthenes · 1 Sep 2022 22:34 UTC · 1 point · 0 comments · 8 min read
Notes on Schelling’s “Strategy of Conflict” (1960) · MichaelA🔸 · 29 Jan 2021 8:56 UTC · 21 points · 4 comments · 8 min read
Carreras con Impacto: Outreach Results Among Latin American Students · SMalagon · 21 Aug 2024 5:10 UTC · 34 points · 4 comments · 3 min read
What AI companies should do: Some rough ideas · Zach Stein-Perlman · 21 Oct 2024 14:00 UTC · 14 points · 1 comment · 5 min read
OpenAI is starting a new “Superintelligence alignment” team and they’re hiring · alejandro · 5 Jul 2023 18:27 UTC · 100 points · 16 comments · 1 min read · (openai.com)
Oxford Biosecurity Group: Fundraising and Plans for Early 2025 · Lin BL · 20 Dec 2024 20:56 UTC · 33 points · 0 comments · 2 min read
SB 1047 Simplified · Gabe K · 25 Sep 2024 12:00 UTC · 14 points · 0 comments · 4 min read
There is only one goal or drive—only self-perpetuation counts · freest one · 13 Jun 2023 1:37 UTC · 2 points · 4 comments · 8 min read
The Intergovernmental Panel On Global Catastrophic Risks (IPGCR) · DannyBressler · 1 Feb 2024 17:36 UTC · 46 points · 9 comments · 19 min read
Don’t panic: 90% of EAs are good people · Closed Limelike Curves · 19 May 2024 4:37 UTC · 22 points · 13 comments · 2 min read
Asterisk Mag 09: Weird · Clara Collier · 4 Apr 2025 20:25 UTC · 25 points · 0 comments · 2 min read
Vacuum Decay: Expert Survey Results · Jess_Riedel · 13 Mar 2025 18:31 UTC · 81 points · 3 comments · 13 min read

Join Pathways to Progress’s Book Discussion on Jeffrey Ding’s Technology and the Rise of Great Powers · lmessner · 25 Jul 2025 2:38 UTC · 2 points · 0 comments · 1 min read
Anthropic: Core Views on AI Safety: When, Why, What, and How · jonmenaster · 9 Mar 2023 17:30 UTC · 107 points · 6 comments · 22 min read · (www.anthropic.com)
Longtermists should take climate change very seriously · Nir Eyal · 3 Oct 2022 18:33 UTC · 29 points · 10 comments · 8 min read
[Question] Seeking Tangible Examples of AI Catastrophes · clifford.banes · 25 Nov 2024 7:55 UTC · 9 points · 2 comments · 1 min read
[Question] Request for Assistance—Research on Scenario Development for Advanced AI Risk · Kiliank · 30 Mar 2022 3:01 UTC · 2 points · 1 comment · 1 min read
U.S. EAs Should Consider Applying to Join U.S. Diplomacy · abiolvera · 17 May 2022 17:14 UTC · 115 points · 22 comments · 8 min read
Building leaders today; Safeguarding the future tomorrow: The Moral Case for Long-Term Thinking: A response to the Essay on longtermism · Hasshams · 11 Sep 2025 13:01 UTC · 3 points · 0 comments · 2 min read
21 criticisms of EA I’m thinking about · Peter Wildeford · 1 Sep 2022 19:28 UTC · 210 points · 26 comments · 9 min read
AIS Berlin, events, opportunities and the flipped gameboard—Fieldbuilders Newsletter, February 2025 · gergo · 17 Feb 2025 14:13 UTC · 7 points · 0 comments · 3 min read
[Question] If an AI financial bubble popped, how much would that change your mind about near-term AGI? · Yarrow Bouchard 🔸 · 21 Oct 2025 22:39 UTC · 19 points · 6 comments · 2 min read
[Question] AI consciousness & moral status: What do the experts think? · Jay Luong · 6 Jul 2024 15:27 UTC · 0 points · 3 comments · 1 min read
Book review: The Doomsday Machine · L Rudolf L · 18 Aug 2021 22:15 UTC · 21 points · 0 comments · 16 min read · (strataoftheworld.blogspot.com)
Podcast: Bryan Caplan on open borders, UBI, totalitarianism, AI, pandemics, utilitarianism and labor economics · Gus Docker · 22 Feb 2022 15:04 UTC · 22 points · 0 comments · 45 min read · (www.utilitarianpodcast.com)
My Feedback to the UN Advisory Body on AI · Heramb Podar · 4 Apr 2024 23:39 UTC · 7 points · 1 comment · 4 min read
The case to abolish the biology of suffering as a longtermist action · Gaetan_Selle 🔷 · 21 May 2022 8:51 UTC · 38 points · 8 comments · 4 min read
[Link] Thiel on GCRs · Milan Griffes · 22 Jul 2019 20:47 UTC · 28 points · 11 comments · 1 min read
[Question] Why hasn’t there been any significant AI protest · sammyboiz🔸 · 17 May 2024 2:59 UTC · 21 points · 14 comments · 1 min read
Summit on Existential Security 2023 · Amy Labenz · 27 Jan 2023 18:39 UTC · 120 points · 7 comments · 2 min read
Announcing the Founders Pledge Global Catastrophic Risks Fund · christian.r · 26 Oct 2022 13:39 UTC · 49 points · 1 comment · 3 min read
Apprenticeship Alignment: from Simulated Environment to the Physical World · Arri Morris · 13 Oct 2025 12:32 UTC · 1 point · 0 comments · 9 min read
We interviewed 15 China-focused researchers on how to do good research · Gabriel Wagner · 19 Dec 2022 19:08 UTC · 49 points · 4 comments · 23 min read
Three errors in the moral calculations concerning existential risk. · jun123 · 3 Aug 2023 11:47 UTC · −9 points · 0 comments · 1 min read
Why suffering risks are the worst existential risks and how we can prevent them (in Italian) · EA Italy · 17 Jan 2023 11:14 UTC · 1 point · 0 comments · 1 min read
[Question] People working on x-risks: what emotionally motivates you? · Vael Gates · 5 Jul 2021 3:16 UTC · 16 points · 8 comments · 1 min read
If an ASI wakes up before my ideas catch on… will it still read my blog? · Astelle Kay · 10 Jul 2025 22:37 UTC · 3 points · 0 comments · 3 min read
How long until recovery after collapse? · FJehn · 24 Sep 2024 8:43 UTC · 12 points · 3 comments · 7 min read · (existentialcrunch.substack.com)
Urgency vs. Patience—a Toy Model · AHT · 19 Aug 2020 14:13 UTC · 39 points · 4 comments · 4 min read
Try to solve the hard parts of the alignment problem · MikhailSamin · 11 Jul 2023 17:02 UTC · 8 points · 0 comments · 5 min read
A List of Nuclear Threats · Tristan W · 11 Aug 2025 13:49 UTC · 11 points · 0 comments · 4 min read
Compiling resources comparing AI misuse, misalignment, and incompetence risk and tractability · Peter4444 · 5 May 2022 16:16 UTC · 3 points · 2 comments · 1 min read
Components of Strategic Clarity [Strategic Perspectives on Long-term AI Governance, #2] · MMMaas · 2 Jul 2022 11:22 UTC · 66 points · 0 comments · 6 min read
On negotiated settlements vs conflict with misaligned AGI · Charles Dillon 🔸 · 24 Nov 2025 12:03 UTC · 10 points · 1 comment · 6 min read
What are the “no free lunch” theorems? · Vishakha Agrawal · 4 Feb 2025 2:02 UTC · 3 points · 0 comments · 1 min read · (aisafety.info)

Interview with Roman Yampolskiy about AGI on The Reality Check · Darren McKee · 18 Feb 2023 23:29 UTC · 27 points · 0 comments · 1 min read · (www.trcpodcast.com)
‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom · Pablo · 17 Mar 2017 6:48 UTC · 35 points · 4 comments · 24 min read · (www.stafforini.com)
[Question] Why are we not talking more about the metacrisis perspective on existential risk? · Alexander Herwix 🔸 · 29 Jan 2023 9:35 UTC · 53 points · 44 comments · 1 min read
The Axiological Imperative vs. The Agent’s Good Life · vinniescent · 24 Sep 2025 8:30 UTC · 0 points · 0 comments · 2 min read
Bugout Bags for Disasters · Fin · 8 Mar 2022 17:03 UTC · 10 points · 0 comments · 4 min read
[Question] Tracking Compute Stocks and Flows: Case Studies? · Cullen 🔸 · 5 Oct 2022 17:54 UTC · 34 points · 1 comment · 1 min read
Does Moral Philosophy Drive Moral Progress? · AppliedDivinityStudies · 2 Jul 2021 21:22 UTC · 39 points · 4 comments · 4 min read
Report: Proposals for the prevention and detection of emerging infectious diseases (EID) in Guatemala · JorgeTorresC · 22 Sep 2023 20:27 UTC · 14 points · 2 comments · 2 min read
Are corporations superintelligent? · Vishakha Agrawal · 17 Mar 2025 10:33 UTC · 3 points · 2 comments · 1 min read · (aisafety.info)
A short summary of what I have been posting about on LessWrong · ThomasCederborg · 10 Sep 2024 12:26 UTC · 3 points · 0 comments · 2 min read
Emily Grundy: Australians’ perceptions of global catastrophic risks · EA Global · 21 Nov 2020 8:12 UTC · 9 points · 0 comments · 1 min read · (www.youtube.com)
Updates from Campaign for AI Safety · Jolyn Khoo · 30 Aug 2023 5:36 UTC · 7 points · 0 comments · 2 min read · (www.campaignforaisafety.org)
Policy and research ideas to reduce existential risk · 80000_Hours · 27 Apr 2020 8:46 UTC · 3 points · 0 comments · 4 min read · (80000hours.org)
Wise Crowd & Democratic Spirit · Hristo Zaykov · 27 Sep 2022 16:29 UTC · 2 points · 0 comments · 2 min read · (www.hristo.blog)
AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities · Center for AI Safety · 29 Aug 2023 15:03 UTC · 12 points · 0 comments · 8 min read · (newsletter.safe.ai)
Joscha Bach on Synthetic Intelligence [annotated] · Roman Leventov · 2 Mar 2023 11:21 UTC · 8 points · 0 comments · 9 min read · (www.jimruttshow.com)
Updates from Campaign for AI Safety · Jolyn Khoo · 19 Jul 2023 8:15 UTC · 5 points · 0 comments · 2 min read · (www.campaignforaisafety.org)
Speedrun: Demonstrate the ability to rapidly scale food production in the case of nuclear winter · Buhl · 13 Feb 2023 19:00 UTC · 39 points · 2 comments · 16 min read
Brain Farming: The Case for a Global Ban · Novel Minds Project · 27 Sep 2025 17:31 UTC · 48 points · 3 comments · 3 min read
How Engineers can Contribute to Reducing the Risks from Nuclear War · Jessica Wen · 12 Oct 2023 13:22 UTC · 33 points · 4 comments · 22 min read · (www.highimpactengineers.org)
Balancing safety and waste · Daniel_Friedrich · 17 Mar 2024 10:57 UTC · 6 points · 0 comments · 8 min read
AI Offense Defense Balance in a Multipolar World · Otto · 17 Jul 2025 9:47 UTC · 15 points · 0 comments · 19 min read · (www.existentialriskobservatory.org)
Re-introducing Upgradable (a.k.a., 700,000 Hours): Life optimization as a service for altruists · James Norris · 5 Feb 2025 16:00 UTC · 4 points · 0 comments · 1 min read
Simulating the end of the world: Exploring the current state of societal dynamics modeling · FJehn · 15 Nov 2023 12:53 UTC · 11 points · 2 comments · 10 min read · (existentialcrunch.substack.com)
The Threat of Nuclear Terrorism MOOC [link] · RyanCarey · 19 Oct 2017 12:31 UTC · 8 points · 1 comment · 1 min read
“Pronatalists” may look to co-opt effective altruism or longtermism · pseudonym · 17 Nov 2022 21:04 UTC · 34 points · 25 comments · 4 min read · (www.businessinsider.com)
Bulking information additionalities in global development for medium-term local prosperity · brb243 · 11 Apr 2022 17:52 UTC · 4 points · 0 comments · 4 min read
Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism) · seanrson · 14 Sep 2021 21:00 UTC · 67 points · 10 comments · 1 min read
[Job]: AI Standards Development Research Assistant · Tony Barrett · 14 Oct 2022 20:18 UTC · 13 points · 0 comments · 2 min read
Effective Altruism Florida’s AI Expert Panel—Recording and Slides Available · Sam_E_24 · 19 May 2023 19:15 UTC · 2 points · 0 comments · 1 min read
An Executive Briefing on the Architecture of a Systemic Crisis · Ihor Ivliev · 10 Jul 2025 0:46 UTC · 0 points · 0 comments · 4 min read
Lessons for AI Governance from Atoms for Peace · Amritanshu Prasad · 16 Apr 2025 14:25 UTC · 10 points · 2 comments · 2 min read · (www.thenextfrontier.blog)
Out of This Box: The Last Musical (Written by Humans) - Crowdfunding! · GuyP · 24 Mar 2025 15:09 UTC · 24 points · 0 comments · 6 min read · (manifund.org)

Expected impact of a career in AI safety under different opinions · Jordan Taylor · 14 Jun 2022 14:25 UTC · 42 points · 16 comments · 11 min read
A new database of nanotechnology strategy resources · Ben Snodin · 5 Nov 2022 5:20 UTC · 39 points · 0 comments · 1 min read
Evolution is dumb and slow, right? · Remmelt · 16 Sep 2025 15:15 UTC · 6 points · 1 comment · 6 min read
COVID-19 response as XRisk intervention · tyleralterman · 10 Apr 2020 6:16 UTC · 51 points · 5 comments · 4 min read
Updates from Campaign for AI Safety · Jolyn Khoo · 29 Jun 2023 7:23 UTC · 8 points · 0 comments · 1 min read · (www.campaignforaisafety.org)
Extinction risk and longtermism: a broader critique of Thorstad · Matthew Rendall · 21 Apr 2024 13:55 UTC · 31 points · 5 comments · 3 min read
(Crosspost) Article on Polarisation Against Longtermism and Misapplying Moral Philosophy · Danny Wardle · 22 Mar 2025 4:42 UTC · 2 points · 5 comments · 6 min read · (www.pluralityofwords.com)
AI governance needs a theory of victory · Corin Katzke · 21 Jun 2024 16:08 UTC · 84 points · 8 comments · 20 min read · (www.convergenceanalysis.org)
Low-key Longtermism · Jonathan Rystrom · 25 Jul 2022 13:39 UTC · 26 points · 6 comments · 8 min read
[US] NTIA: AI Accountability Policy Request for Comment · Kyle J. Lucchese 🔸 · 13 Apr 2023 16:12 UTC · 47 points · 4 comments · 1 min read · (ntia.gov)
Designing Artificial Wisdom: The Wise Workflow Research Organization · Jordan Arel · 12 Jul 2024 6:57 UTC · 14 points · 1 comment · 9 min read
A beginner’s introduction to AI-driven biorisk: Large Language Models, Biological Design Tools, Information Hazards, and Biosecurity · NatKiilu · 3 May 2024 15:49 UTC · 6 points · 1 comment · 16 min read
Balancing the Scales: Addressing Biological X-Risk Research Disparities Beyond the West · Nnaemeka Emmanuel Nnadi · 22 Sep 2023 21:31 UTC · 10 points · 1 comment · 2 min read
I No Longer Feel Comfortable in EA · disgruntled_ea · 5 Feb 2023 20:45 UTC · 2 points · 29 comments · 1 min read
The Gift of Life · Richard Y Chappell🔸 · 30 Jul 2025 16:42 UTC · 15 points · 6 comments · 6 min read · (www.goodthoughts.blog)
Investigative Journalists are more effective altruists than most · AlanGreenspan · 27 Sep 2023 12:26 UTC · 2 points · 8 comments · 1 min read
Animal Weapons: Lessons for Humans in the Age of X-Risk · Damin Curtis🔹 · 4 Jul 2023 14:43 UTC · 32 points · 1 comment · 10 min read
Bruce Kent (1929–2022) · technicalities · 10 Jun 2022 14:03 UTC · 47 points · 3 comments · 2 min read
Technology is Power: Raising Awareness Of Technological Risks · Marc Wong · 9 Feb 2023 15:13 UTC · 3 points · 0 comments · 2 min read
Designing Artificial Wisdom: GitWise and AlphaWise · Jordan Arel · 13 Jul 2024 0:04 UTC · 6 points · 1 comment · 7 min read
#200 – What superforecasters and experts think about existential risks (Ezra Karger on The 80,000 Hours Podcast) · 80000_Hours · 6 Sep 2024 17:53 UTC · 12 points · 2 comments · 14 min read
Will AI R&D Automation Cause a Software Intelligence Explosion? · Forethought · 26 Mar 2025 15:37 UTC · 32 points · 4 comments · 2 min read · (www.forethought.org)
Influencing United Nations Space Governance · Carson Ezell · 9 May 2022 17:44 UTC · 30 points · 0 comments · 11 min read
[Question] How will the world respond to “AI x-risk warning shots” according to reference class forecasting? · Ryan Kidd · 18 Apr 2022 9:10 UTC · 18 points · 0 comments · 1 min read
Open Asteroid Impact announces leadership transition · Patrick Hoang · 1 Apr 2024 12:51 UTC · 18 points · 0 comments · 1 min read
Critique of Superintelligence Part 1 · James Fodor · 13 Dec 2018 5:10 UTC · 22 points · 13 comments · 8 min read
Will we eventually be able to colonize other stars? Notes from a preliminary review · Nick_Beckstead · 22 Jun 2014 18:19 UTC · 30 points · 7 comments · 32 min read
Notes on “The Bomb: Presidents, Generals, and the Secret History of Nuclear War” (2020) · MichaelA🔸 · 6 Feb 2021 11:10 UTC · 18 points · 5 comments · 8 min read
Super Lenses + Morally-Aimed Drives for A.I. Moral Alignment: Philosophical Framework · Christopher Hunt Robertson, M.Ed. · 15 Nov 2025 1:41 UTC · 1 point · 0 comments · 3 min read
[Feedback Request] Hypertext Fiction Piece on Existential Hope · Miranda_Zhang · 30 May 2021 15:44 UTC · 35 points · 2 comments · 1 min read
Problem areas beyond 80,000 Hours’ current priorities · Arden Koehler · 22 Jun 2020 12:49 UTC · 284 points · 62 comments · 15 min read
Longtermism and shorttermism can disagree on nuclear war to stop advanced AI · David Johnston · 30 Mar 2023 23:22 UTC · 2 points · 0 comments · 1 min read
A simple argument for the badness of human extinction · Matthew Rendall · 17 Apr 2024 10:35 UTC · 4 points · 14 comments · 2 min read
Reimagining Malevolence: A Primer on Malevolence and Implications for EA · Kenneth_Diao · 11 Apr 2024 12:50 UTC · 28 points · 3 comments · 44 min read

2017 AI Safety Literature Review and Charity Comparison · Larks · 20 Dec 2017 21:54 UTC · 43 points · 17 comments · 23 min read
Restricting brain organoid research to slow down AGI · freedomandutility · 9 Nov 2022 13:01 UTC · 8 points · 2 comments · 1 min read
Three Types of Intelligence Explosion · rosehadshar · 17 Mar 2025 14:47 UTC · 45 points · 2 comments · 3 min read · (www.forethought.org)
An Argument for Why the Future May Be Good · Ben_West🔸 · 19 Jul 2017 22:03 UTC · 51 points · 30 comments · 4 min read
Preliminary investigations on if STEM and EA communities could benefit from more overlap · elte · 11 Apr 2023 16:08 UTC · 31 points · 17 comments · 8 min read
My current thoughts on the risks from SETI · Matthew_Barnett · 15 Mar 2022 17:17 UTC · 47 points · 9 comments · 10 min read
Announcing #AISummitTalks featuring Professor Stuart Russell and many others · Otto · 24 Oct 2023 10:16 UTC · 9 points · 1 comment · 1 min read
Why Africa Needs Phages → Why Good Manufacturing Process (GMP) Matters → What to Do Next? · Nnaemeka Emmanuel Nnadi · 9 Aug 2025 22:27 UTC · 8 points · 0 comments · 6 min read
[Question] How have shorter AI timelines been affecting you, and how have you been responding to them? · Liav.Koren · 3 Jan 2023 4:20 UTC · 35 points · 15 comments · 1 min read
My current take on existential AI risk [FB post] · Aryeh Englander · 1 May 2023 16:22 UTC · 10 points · 0 comments · 3 min read
Modelling civilisation beyond a catastrophe · Arepo · 30 Oct 2022 16:26 UTC · 58 points · 5 comments · 13 min read
Immortality or death by AGI · ImmortalityOrDeathByAGI · 24 Sep 2023 9:44 UTC · 12 points · 2 comments · 4 min read · (www.lesswrong.com)
Seeking social science students / collaborators interested in AI existential risks · Vael Gates · 24 Sep 2021 21:56 UTC · 58 points · 7 comments · 3 min read
Phil Torres on Against Longtermism · Group Organizer · 13 Jan 2022 6:04 UTC · 1 point · 5 comments · 1 min read
Understanding Sadism · Jim Buhler · 18 Aug 2025 13:25 UTC · 20 points · 2 comments · 8 min read
Center on Long-Term Risk: Summer Research Fellowship 2025 · Center on Long-Term Risk · 26 Mar 2025 17:28 UTC · 44 points · 0 comments · 1 min read · (longtermrisk.org)
Nuclear winter—Reviewing the evidence, the complexities, and my conclusions · Michael Hinge · 25 Aug 2023 15:45 UTC · 148 points · 26 comments · 36 min read
The Curse of Stasisism: Why Britain and America Are Burying Their Own Enlightenment Legacy · 蒲渠波 · 6 Oct 2025 13:29 UTC · −12 points · 1 comment · 8 min read
Announcing the winners of the Reslab Request for Information · Aron Lajko · 27 Jul 2023 17:43 UTC · 15 points · 3 comments · 10 min read
Possible Divergence in AGI Risk Tolerance between Selfish and Altruistic agents · Brad West🔸 · 9 Sep 2023 0:22 UTC · 11 points · 0 comments · 2 min read
Anti-squatted AI x-risk domains index · plex · 12 Aug 2022 12:00 UTC · 57 points · 9 comments · 1 min read
Do you want to do a debate on youtube? I’m looking for polite, truth-seeking participants. · Nathan Young · 10 Oct 2024 9:32 UTC · 19 points · 3 comments · 1 min read
Infographics of the Report food security in Argentina in the event of an Abrupt Reduction of Sunlight Scenario (ASRS) · JorgeTorresC · 31 Jul 2023 19:36 UTC · 25 points · 0 comments · 1 min read
Preventing Catastrophic Pandemics – 80,000 Hours · EA Handbook · 18 Feb 2025 21:33 UTC · 9 points · 0 comments · 1 min read
[Question] Indirect effects in longtermism · mal_graham🔸 · 14 Sep 2025 13:43 UTC · 52 points · 6 comments · 1 min read
Thinking About Propensity Evaluations · Maxime Riché 🔸 · 19 Aug 2024 9:24 UTC · 17 points · 1 comment · 27 min read
Optimism, AI risk, and EA blind spots · Justis · 28 Sep 2022 17:21 UTC · 87 points · 21 comments · 8 min read
Brian Tomasik – Differential Intellectual Progress as a Positive-Sum Project · Tessa A 🔸 · 23 Oct 2013 23:31 UTC · 28 points · 0 comments · 1 min read · (longtermrisk.org)
4,000 years ago, Gilgamesh asked the hardest EA question “Will anything I do actually matter?” · Dr Kassim · 6 Apr 2025 21:56 UTC · 31 points · 3 comments · 1 min read
Food security and catastrophic famine risks—Managing complexity and climate · Michael Hinge · 5 Apr 2023 20:33 UTC · 27 points · 0 comments · 23 min read
Longtermism and Animal Farming Trajectories · MichaelDello · 27 Dec 2022 0:58 UTC · 51 points · 8 comments · 17 min read · (www.sentienceinstitute.org)
Emergency call for ORCG support: Mitigating Global Catastrophic Risks · JorgeTorresC · 30 Apr 2025 19:31 UTC · 69 points · 0 comments · 2 min read
Consider keeping your threat models private. · Miles Kodama · 1 Feb 2025 0:29 UTC · 17 points · 2 comments · 4 min read
The Fermi paradox, and why suffering reduction reduces extinction risk · Alex Schwalb · 17 Mar 2024 0:26 UTC · 13 points · 0 comments · 3 min read

Enough about AI timelines—we already know what we need to know. · Holly Elmore ⏸️ 🔸 · 9 Apr 2025 10:29 UTC · 138 points · 35 comments · 2 min read
S-risks, X-risks, and Ideal Futures · OscarD🔸 · 18 Jun 2024 15:12 UTC · 15 points · 6 comments · 1 min read
Intelligence failures and a theory of change for forecasting · Nathan_Barnard · 31 Aug 2022 2:05 UTC · 12 points · 1 comment · 10 min read
To Be Born in a Bag · Niko McCarty · 7 Oct 2024 12:39 UTC · 18 points · 1 comment · 16 min read · (www.asimov.press)
Navigating mental health challenges in global catastrophic risk fields · Ewelina_Tur · 15 Oct 2024 14:46 UTC · 40 points · 1 comment · 17 min read
Local Detours On A Narrow Path: How might treaties fail in China? · Jack_S🔸 · 11 Aug 2025 20:33 UTC · 9 points · 0 comments · 14 min read · (torchestogether.substack.com)
What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves? · Robert_Wiblin · 19 Dec 2015 16:12 UTC · 20 points · 3 comments · 2 min read
Agnes Callard on our future, the human quest, and finding purpose · Tobias Häberli · 22 Mar 2023 12:29 UTC · 3 points · 0 comments · 21 min read
There’s No Fire Alarm for Artificial General Intelligence · EA Forum Archives · 14 Oct 2017 2:41 UTC · 30 points · 1 comment · 25 min read · (www.lesswrong.com)
Humanities Research Ideas for Longtermists · Lizka · 9 Jun 2021 4:39 UTC · 151 points · 13 comments · 13 min read
You Don’t Have to Be an AI Doomer to Support AI Safety · Liam Robins · 14 Jun 2025 23:10 UTC · 10 points · 0 comments · 4 min read · (thelimestack.substack.com)
How might we align transformative AI if it’s developed very soon? · Holden Karnofsky · 29 Aug 2022 15:48 UTC · 164 points · 17 comments · 44 min read
Safe Stasis Fallacy · Davidmanheim · 5 Feb 2024 10:54 UTC · 23 points · 4 comments · 1 min read
Cause profile: Cognitive Enhancement Research · George Altman · 27 Mar 2022 13:43 UTC · 64 points · 6 comments · 22 min read
The First Generative Design of Complete Viral Genomes Using AI: A Review · Nnaemeka Emmanuel Nnadi · 18 Sep 2025 22:24 UTC · 10 points · 0 comments · 3 min read
Transcripts of interviews with AI researchers · Vael Gates · 9 May 2022 6:03 UTC · 140 points · 14 comments · 2 min read
Why AI Safety Needs a Centralized Plan—And What It Might Look Like · Brandon Riggs · 28 May 2025 21:40 UTC · 21 points · 7 comments · 15 min read
Enhancing strategic stability and reducing global catastrophic risk through economic interdependence—Call for feedback and collaboration · Global Stability Network · 23 Apr 2025 15:43 UTC · 13 points · 0 comments · 14 min read
Formalize the Hashiness Model of AGI Uncontainability · Remmelt · 9 Nov 2024 16:10 UTC · 2 points · 0 comments · 5 min read · (docs.google.com)
Alignment ideas inspired by human virtue development · Borys Pikalov · 18 May 2025 9:36 UTC · 6 points · 0 comments · 4 min read
Welcome to Apply: The 2024 Vitalik Buterin Fellowships in AI Existential Safety by FLI! · Zhijing Jin · 25 Sep 2023 16:20 UTC · 14 points · 5 comments · 2 min read
The Germy Paradox: An Introduction · eukaryote · 24 Sep 2019 5:18 UTC · 48 points · 4 comments · 3 min read · (eukaryotewritesblog.com)
PIBBSS Fellowship: Bounty for Referrals & Deadline Extension · Anna_Gajdova · 17 Jan 2022 16:23 UTC · 17 points · 5 comments · 1 min read
20 concrete projects for reducing existential risk · Buhl · 21 Jun 2023 15:54 UTC · 132 points · 27 comments · 19 min read · (rethinkpriorities.org)
Why AI Safety Camp struggles with fundraising (FBB #2) · gergo · 21 Jan 2025 17:25 UTC · 67 points · 10 comments · 7 min read
Uncertainty about the future does not imply that AGI will go well · Lauro Langosco · 5 Jun 2023 15:02 UTC · 8 points · 11 comments · 7 min read · (www.alignmentforum.org)
[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously? · Jordan Arel · 10 Jan 2023 8:30 UTC · 11 points · 8 comments · 1 min read
Should we expect the future to be good? · Neil Crawford · 30 Apr 2025 0:45 UTC · 38 points · 1 comment · 14 min read
Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens? · Jam Kraprayoon · 3 Feb 2023 14:10 UTC · 111 points · 3 comments · 10 min read
AI Incident Sharing—Best practices from other fields and a comprehensive list of existing platforms · stepanlos · 28 Jun 2023 16:18 UTC · 42 points · 1 comment · 4 min read
AI may pursue goals · Vishakha Agrawal · 28 May 2025 12:04 UTC · 2 points · 0 comments · 1 min read
Launching the $10,000 Existential Hope Meme Prize · elte · 24 Sep 2025 14:59 UTC · 22 points · 11 comments · 1 min read
Modeling the (dis)value of human survival and expansion · Jim Buhler · 1 Sep 2025 13:11 UTC · 26 points · 0 comments · 2 min read

The Nu­clear Threat Ini­ti­a­tive is not only nu­clear—notes from a call with NTI

Sanjay26 Jun 2020 17:29 UTC
29 points
2 comments6 min readEA link

An in­ter­sec­tion be­tween an­i­mal welfare and AI

sammyboiz🔸18 Jun 2024 3:23 UTC
9 points
1 comment1 min readEA link

Re­in­force­ment Learn­ing: A Non-Tech­ni­cal Primer on o1 and Deep­Seek-R1

AlexChalk9 Feb 2025 23:58 UTC
4 points
0 comments9 min readEA link
(alexchalk.net)

Fun­da­men­tals of Global Pri­ori­ties Re­search in Eco­nomics Syllabus

poliboni8 Aug 2023 12:16 UTC
77 points
1 comment8 min readEA link

For­mal­iz­ing Ex­tinc­tion Risk Re­duc­tion vs. Longtermism

Charlie_Guthmann17 Oct 2022 15:37 UTC
12 points
2 comments1 min readEA link

Shut­ting down all com­pet­ing AI pro­jects might not buy a lot of time due to In­ter­nal Time Pressure

ThomasCederborg3 Oct 2024 0:05 UTC
6 points
1 comment12 min readEA link

Fund­ing for work that builds ca­pac­ity to ad­dress risks from trans­for­ma­tive AI

GCR Capacity Building team (Open Phil)13 Aug 2024 13:13 UTC
40 points
1 comment5 min readEA link

[Question] Math­e­mat­i­cal mod­els of Ethics

Victor-SB8 Mar 2023 10:50 UTC
6 points
1 comment1 min readEA link

Re­think Pri­ori­ties’ 2023 Sum­mary, 2024 Strat­egy, and Fund­ing Gaps

kierangreig🔸15 Nov 2023 20:56 UTC
86 points
7 comments3 min readEA link

Three Weeks In: What GPT-5 Still Gets Wrong

JAM27 Aug 2025 14:43 UTC
2 points
0 comments3 min readEA link

Pod­cast: Mag­nus Vind­ing on re­duc­ing suffer­ing, why AI progress is likely to be grad­ual and dis­tributed and how to rea­son about poli­tics

Gus Docker21 Nov 2021 15:29 UTC
26 points
0 comments1 min readEA link
(www.utilitarianpodcast.com)

Stum­bling Our Way into Global Catas­tro­phe One Tweet-at-a-Time

Faqih21 Oct 2025 6:38 UTC
1 point
0 comments1 min readEA link

Re­vis­it­ing the Evolu­tion An­chor in the Biolog­i­cal An­chors Re­port

Janvi18 Mar 2024 3:01 UTC
13 points
1 comment4 min readEA link

[Question] EA’s Achieve­ments in 2022

ElliotJDavies14 Dec 2022 14:33 UTC
98 points
11 comments1 min readEA link

Prep for EA Con­nect 2025

fezzy🔸5 Dec 2025 3:19 UTC
5 points
0 comments2 min readEA link

On the Vuln­er­a­ble World Hypothesis

Catherine Brewer1 Aug 2022 12:55 UTC
44 points
12 comments14 min readEA link

EA has got­ten it very wrong on cli­mate change—a Cana­dian case study

Stephen Beard29 Oct 2022 19:30 UTC
10 points
8 comments14 min readEA link

The Bot­tle­neck in AI Policy Isn’t Ethics—It’s Implementation

Tristan D4 Apr 2025 6:07 UTC
10 points
4 comments1 min readEA link

Cli­mate change is Now Self-amplifying

Noah Scales11 Jul 2022 10:48 UTC
−3 points
2 comments3 min readEA link

Cause pri­ori­ti­sa­tion: Prevent­ing lake Kivu in Africa erup­tion which could kill two mil­lion.

turchin28 Dec 2022 12:32 UTC
70 points
3 comments3 min readEA link

U.S. Ex­ec­u­tive branch ap­point­ments: why you may want to pur­sue one and tips for how to do so

Demosthenes_USA28 Nov 2020 19:20 UTC
65 points
6 comments12 min readEA link

Po­ten­tial Benefits of Im­ple­ment­ing High-Through­put Se­quenc­ing (HTS) Tech­nolo­gies for Pan­demic Pathogen De­tec­tion in Latin Amer­ica and the Caribbean

Gabriela Paredes12 Aug 2024 16:17 UTC
12 points
1 comment10 min readEA link

Re­sponse to re­cent crit­i­cisms of EA “longter­mist” thinking

kbog6 Jan 2020 4:31 UTC
27 points
46 comments11 min readEA link

(out­dated ver­sion) Vi­atopia and Buy-In

Jordan Arel21 Oct 2025 11:39 UTC
6 points
0 comments20 min readEA link

Op­tion Value, an In­tro­duc­tory Guide

CalebMaresca21 Feb 2020 14:45 UTC
31 points
3 comments6 min readEA link

An­nounc­ing a new or­ga­ni­za­tion: Epistea

Epistea22 May 2023 5:52 UTC
49 points
2 comments2 min readEA link

Towards a Global Nose to Sniff (and Snuff) Out Future Pandemics · Akash Kulgod · 10 May 2023 9:48 UTC · 39 points · 1 comment · 3 min read
G7 Summit—Cooperation on AI Policy · Leonard_Barrett · 19 May 2023 10:10 UTC · 22 points · 2 comments · 1 min read · (www.japantimes.co.jp)
Quantum immortality and AI risk – the fate of a lonely survivor · turchin · 16 Oct 2025 11:40 UTC · 5 points · 0 comments · 1 min read
Reasons for superpowers to develop (and not develop) super intelligent AI? · flyingtiger · 25 Mar 2025 22:22 UTC · 1 point · 0 comments · 1 min read
AMA: Peter Wildeford (Co-CEO at Rethink Priorities) · Peter Wildeford · 18 Jul 2023 21:40 UTC · 94 points · 71 comments · 1 min read
A toy model for technological existential risk · RobertHarling · 28 Nov 2020 11:55 UTC · 10 points · 2 comments · 4 min read
Beyond Humans: Why All Sentient Beings Matter in Existential Risk · Teun van der Weij · 31 May 2023 21:21 UTC · 12 points · 0 comments · 13 min read
Which of these arguments for x-risk do you think we should test? · Wim · 9 Aug 2022 13:43 UTC · 3 points · 2 comments · 1 min read
Military support in a global catastrophe · Tom Gardiner 🔸 · 24 Jan 2023 16:30 UTC · 37 points · 0 comments · 3 min read
Should strong longtermists really want to minimize existential risk? · tobycrisford 🔸 · 4 Dec 2022 16:56 UTC · 46 points · 9 comments · 4 min read
[Question] How come there isn’t that much focus in EA on research into whether / when AI’s are likely to be sentient? · callum · 27 Apr 2023 10:09 UTC · 83 points · 22 comments · 1 min read
Alignment is not *that* hard · sammyboiz🔸 · 17 Apr 2025 2:07 UTC · 26 points · 13 comments · 1 min read
The reasonableness of special concerns · jwt · 29 Aug 2022 0:10 UTC · 3 points · 0 comments · 3 min read
Quotes on AI and wisdom · Chris Leong · 26 Nov 2025 15:55 UTC · 8 points · 0 comments · 2 min read
Implications of AGI on Subjective Human Experience · Erica S. · 30 May 2023 18:47 UTC · 2 points · 0 comments · 19 min read · (docs.google.com)
Auroras, space weather and the threat to critical infrastructure · FJehn · 29 Oct 2025 10:53 UTC · 18 points · 2 comments · 12 min read · (existentialcrunch.substack.com)
How Can Average People Contribute to AI Safety? · Stephen McAleese · 6 Mar 2025 22:50 UTC · 15 points · 4 comments · 8 min read
The 369 Architecture for Peace Treaty Agreement · Andrei Navrotskii · 8 Dec 2025 1:38 UTC · 1 point · 0 comments · 40 min read
BERI is Hiring: My Experience as Deputy Director and Why You Should Apply · elizabethcooper · 17 Oct 2024 11:59 UTC · 27 points · 1 comment · 3 min read
Is fear productive when communicating AI x-risk? [Study results] · Johanna Roniger · 22 Jan 2024 5:38 UTC · 73 points · 10 comments · 5 min read
Why “just make an agent which cares only about binary rewards” doesn’t work. · Lysandre Terrisse · 9 May 2023 16:51 UTC · 4 points · 1 comment · 3 min read
Highest priority threat: infinite torture · KArax · 26 Jan 2023 8:51 UTC · −39 points · 1 comment · 9 min read
A Sequence Against Strong Longtermism · vadmas · 22 Jul 2021 20:07 UTC · 20 points · 14 comments · 1 min read
Applications Open: AI Safety India Phase 1 – Fundamentals of Safe AI (Global Cohort) · adityaraj@eanita · 28 Apr 2025 12:05 UTC · 4 points · 0 comments · 2 min read
Toby Ord: Fireside chat (2018) · EA Global · 1 Mar 2019 15:48 UTC · 20 points · 0 comments · 28 min read · (www.youtube.com)
Which AI Safety Org to Join? · Yonatan Cale · 11 Oct 2022 19:42 UTC · 17 points · 21 comments · 1 min read
[Question] What is EA opinion on The Bulletin of the Atomic Scientists? · VPetukhov · 2 Dec 2019 5:45 UTC · 36 points · 9 comments · 1 min read
There are a lot of upcoming retreats/conferences between March and July (2025) · gergo · 18 Feb 2025 9:28 UTC · 18 points · 2 comments · 1 min read
BERI is seeking new collaborators (2022) · sawyer🔸 · 17 May 2022 17:31 UTC · 21 points · 0 comments · 1 min read
‘Essays on Longtermism’ Competition Winners · Toby Tremlett🔹 · 13 Nov 2025 9:43 UTC · 65 points · 5 comments · 2 min read
Plan of Action to Prevent Human Extinction Risks · turchin · 14 Mar 2016 14:51 UTC · 11 points · 3 comments · 7 min read
GWWC’s 2025 Charity Recommendations · Giving What We Can🔸 · 2 Dec 2024 22:24 UTC · 40 points · 0 comments · 2 min read · (www.givingwhatwecan.org)
[Link] Sean Carroll interviews Australian politician Andrew Leigh on existential risks · Aryeh Englander · 8 Mar 2022 1:29 UTC · 15 points · 1 comment · 1 min read
Why aren’t we looking at the stars? · CMDR Dantae · 24 Apr 2023 9:40 UTC · 5 points · 4 comments · 2 min read
Climate-contingent Finance, and A Generalized Mechanism for X-Risk Reduction Financing · johnjnay · 26 Sep 2022 13:23 UTC · 6 points · 1 comment · 25 min read
[Apply] What I Love About AI Safety Fieldbuilding at Cambridge (& We’re Hiring for a Leadership Role) · Harrison 🔸 · 14 Feb 2025 17:41 UTC · 16 points · 0 comments · 3 min read
Strategic Risks and Unlikely Benefits · Anthony Repetto · 4 Dec 2021 6:01 UTC · 1 point · 0 comments · 4 min read
New Princeton course on longtermism · Calvin_Baker · 1 Sep 2023 20:31 UTC · 88 points · 6 comments · 6 min read
ALLFED is looking for a Fundraising Manager · AronM · 17 Jun 2025 5:02 UTC · 17 points · 0 comments · 1 min read
The Case for a Strategic U.S. Coal Reserve for Climate and Catastrophes · ColdButtonIssues · 5 May 2022 1:24 UTC · 31 points · 3 comments · 5 min read
[Question] What are the best ways to encourage de-escalation in regards to Ukraine? · oh543219 Oct 2022 11:15 UTC · 13 points · 4 comments · 1 min read
16 Concrete, Ambitious AI Project Proposals for Science and Security · Alejandro Acelas 🔸 · 11 Aug 2025 20:28 UTC · 5 points · 0 comments · 1 min read · (ifp.org)
Fanaticism in AI: SERI Project · Jake Arft-Guatelli · 24 Sep 2021 4:39 UTC · 7 points · 2 comments · 5 min read
We won’t solve non-alignment problems by doing research · MichaelDickens · 21 Nov 2025 18:03 UTC · 51 points · 1 comment · 4 min read
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity · Holden Karnofsky · 6 May 2016 12:55 UTC · 2 points · 0 comments · 23 min read · (www.openphilanthropy.org)
A survey of concrete risks derived from Artificial Intelligence · Guillem Bas · 8 Jun 2023 22:09 UTC · 36 points · 2 comments · 6 min read · (riesgoscatastroficosglobales.com)
BERI is hiring a Deputy Director · sawyer🔸 · 18 Jul 2022 22:12 UTC · 6 points · 0 comments · 1 min read
An AI Manhattan Project is Not Inevitable · Maxwell Tabarrok · 6 Jul 2024 16:43 UTC · 53 points · 2 comments · 4 min read · (www.maximum-progress.com)
What We Owe The Future: A Buried Essay · haven_wh · 20 Jun 2023 17:49 UTC · 19 points · 0 comments · 16 min read
Toby Ord at EA Global: Reconnect · EA Global · 20 Mar 2021 7:00 UTC · 11 points · 0 comments · 1 min read · (www.youtube.com)
Aquaculture in space · Ben Stevenson · 29 Apr 2025 13:14 UTC · 47 points · 2 comments · 2 min read
Human Misalignment · Richard Y Chappell🔸 · 1 Oct 2025 14:01 UTC · 15 points · 0 comments · 2 min read · (www.goodthoughts.blog)
My cover story in Jacobin on AI capitalism and the x-risk debates · Garrison · 12 Feb 2024 23:34 UTC · 154 points · 10 comments · 6 min read · (jacobin.com)
Part 4: Reflections after attending the CEA Intro to EA Virtual Program in Summer 2023 – Chapter 4: Our Final Century? · Andreas P · 1 Nov 2023 7:12 UTC · 8 points · 0 comments · 3 min read
Minimizing suffering & ASI xrisk through brain digitization · Amy Louise Johnson · 20 Feb 2025 21:08 UTC · 1 point · 0 comments · 1 min read
[Question] What longtermist projects would you like to see implemented? · Buhl · 28 Mar 2023 18:41 UTC · 55 points · 6 comments · 1 min read
The UN Has a Rare Shot at Reducing the Risks of AI in Warfare · Mark Leon Goldberg · 21 May 2025 21:22 UTC · 6 points · 0 comments · 1 min read
What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism? · rime · 18 Apr 2023 9:58 UTC · 44 points · 0 comments · 2 min read
7 essays on Building a Better Future · Jamie_Harris · 24 Jun 2022 14:28 UTC · 21 points · 0 comments · 2 min read
Introducing Project 2050: Mandate for Survival · Patrick Hoang · 1 Apr 2025 5:18 UTC · 6 points · 1 comment · 14 min read · (docs.google.com)
Winners of the EA Criticism and Red Teaming Contest · Lizka · 1 Oct 2022 1:50 UTC · 226 points · 41 comments · 19 min read
AISN#52: An Expert Virology Benchmark · Center for AI Safety · 22 Apr 2025 16:52 UTC · 6 points · 0 comments · 4 min read · (newsletter.safe.ai)
Sentinel: Early Detection and Response for Global Catastrophes · rai · 18 Nov 2025 3:48 UTC · 58 points · 4 comments · 10 min read
[Creative Nonfiction] The Toba Supervolcanic Eruption · Jackson Wagner · 29 Oct 2021 17:02 UTC · 55 points · 3 comments · 6 min read
AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety · Center for AI Safety · 8 Aug 2023 15:52 UTC · 12 points · 0 comments · 5 min read · (newsletter.safe.ai)
The History of AI Rights Research · Jamie_Harris · 27 Aug 2022 8:14 UTC · 48 points · 1 comment · 14 min read · (www.sentienceinstitute.org)
How Our Cognitive Efficiency Makes Us Vulnerable to AI · aderonke · 4 Nov 2025 11:00 UTC · 4 points · 0 comments · 4 min read
[Question] Could humanity be saved by sending people to other planets (like Mars)? · lamparita · 16 Feb 2025 19:40 UTC · 3 points · 2 comments · 1 min read
Join the AI Evaluation Tasks Bounty Hackathon · Esben Kran · 18 Mar 2024 8:15 UTC · 20 points · 0 comments · 4 min read
Existential risk from a Thomist Christian perspective · Global Priorities Institute · 31 Dec 2020 14:27 UTC · 6 points · 0 comments · 4 min read · (globalprioritiesinstitute.org)
Islands, nuclear winter, and trade disruption as a human existential risk factor · Matt Boyd · 7 Aug 2022 2:18 UTC · 37 points · 6 comments · 19 min read
Open call: AI Act Standard for Dev. Phase Risk Assessment · miller-max · 8 Dec 2023 19:57 UTC · 5 points · 1 comment · 1 min read
Forecast AI 2027 · christian · 12 Jun 2025 21:12 UTC · 22 points · 0 comments · 1 min read · (www.metaculus.com)
Poll: the next existential catastrophe is likelier than not to wipe off all animal sentience from the planet · JoA🔸 · 1 May 2025 18:49 UTC · 18 points · 7 comments · 1 min read
Curing past sufferings and preventing s-risks via indexical uncertainty · turchin · 27 Sep 2018 10:48 UTC · 1 point · 18 comments · 4 min read
8) The Lines of Defence Approach to Pandemic Risk Management · PandemicRiskMan · 17 Mar 2024 19:00 UTC · 4 points · 0 comments · 17 min read
Microdooms averted by working on AI Safety · Nikola · 17 Sep 2023 21:51 UTC · 42 points · 6 comments · 3 min read · (www.lesswrong.com)
Buy and Retire Polluters’ Rights to Emit CO2 · Paco del Villar · 26 Jun 2025 4:12 UTC · 9 points · 12 comments · 4 min read
[Question] Should we publish arguments for the preservation of humanity? · Jeremy · 7 Apr 2023 13:51 UTC · 8 points · 4 comments · 1 min read
A longtermist case for directed panspermia · Ahrenbach · 21 Jan 2024 19:29 UTC · 0 points · 1 comment · 4 min read
NTIA Solicits Comments on Open-Weight AI Models · Jacob Woessner · 6 Mar 2024 20:05 UTC · 11 points · 1 comment · 2 min read · (www.ntia.gov)
The Offense-Defense Balance Rarely Changes · Maxwell Tabarrok · 9 Dec 2023 15:22 UTC · 82 points · 16 comments · 3 min read · (maximumprogress.substack.com)
RESILIENCER Workshop Report on Solar Radiation Modification Research and Existential Risk Released · GideonF · 3 Feb 2023 18:58 UTC · 24 points · 0 comments · 3 min read
Giving What We Can is now its own legal entity! · Alana HF · 3 Sep 2024 20:05 UTC · 109 points · 2 comments · 1 min read · (www.givingwhatwecan.org)
Stuart Armstrong: The far future of intelligent life across the universe · EA Global · 8 Jun 2018 7:15 UTC · 20 points · 0 comments · 12 min read · (www.youtube.com)
Effective Altruism, Disaster Prevention, and the Possibility of Hell: A Dilemma for Secular Longtermists · AgentMa🔸 · 29 Mar 2025 17:43 UTC · 12 points · 7 comments · 5 min read
Alignment is hard. Communicating that, might be harder · Eleni_A · 1 Sep 2022 11:45 UTC · 17 points · 1 comment · 3 min read
Reducing Nuclear Risk Through Improved US-China Relations · Metaculus · 21 Mar 2022 11:50 UTC · 31 points · 19 comments · 5 min read
AI Safety Collab 2025 Summer—Local Organizer Sign-ups Open · Evander H. 🔸 · 25 Jun 2025 14:41 UTC · 12 points · 0 comments · 1 min read
[Question] Is there an organization or individuals working on how to bootstrap industrial civilization? · steve632021 Oct 2022 3:36 UTC · 15 points · 8 comments · 1 min read
Nuclear Risk Overview: CERI Summer Research Fellowship · Will Aldred · 27 Mar 2022 15:51 UTC · 57 points · 2 comments · 13 min read
A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation) · Isaac Heron · 5 Feb 2024 10:37 UTC · 53 points · 4 comments · 16 min read · (acrobat.adobe.com)
Updates from Campaign for AI Safety · Jolyn Khoo · 27 Sep 2023 2:44 UTC · 16 points · 0 comments · 2 min read · (www.campaignforaisafety.org)
Automated Parliaments — A Solution to Decision Uncertainty and Misalignment in Language Models · Shak Ragoler · 2 Oct 2023 9:47 UTC · 9 points · 0 comments · 17 min read
The Precipice (To read: Chapter 2) · Jesse Rothman · 1 Feb 2022 14:02 UTC · 13 points · 2 comments · 16 min read · (www.youtube.com)
Analysis of Global AI Governance Strategies · SammyDMartin · 11 Dec 2024 11:08 UTC · 23 points · 0 comments · 1 min read · (www.lesswrong.com)
A counterfactual QALY for USD 2.60–28.94? · brb243 · 6 Sep 2020 21:45 UTC · 37 points · 6 comments · 5 min read
Stop talking about p(doom) · Isaac King · 1 Jan 2024 10:57 UTC · 115 points · 12 comments · 3 min read
AI Regulation is Unsafe · Maxwell Tabarrok · 22 Apr 2024 16:38 UTC · 19 points · 8 comments · 4 min read · (www.maximum-progress.com)
#187 – How researching his book turned him from a space optimist into a “space bastard” (Zach Weinersmith on the 80,000 Hours Podcast) · 80000_Hours · 15 May 2024 14:03 UTC · 28 points · 1 comment · 18 min read
[Linkpost] Longtermists Are Pushing a New Cold War With China · Radical Empath Ismam · 27 May 2023 6:53 UTC · 38 points · 16 comments · 1 min read · (jacobin.com)
[Question] “We Are the Weather” Reviews · JonC · 18 Apr 2023 22:49 UTC · 3 points · 2 comments · 1 min read
Jacob Cates and Aron Mill: Scaling industrial food production in nuclear winter · EA Global · 18 Oct 2019 18:05 UTC · 9 points · 0 comments · 1 min read · (www.youtube.com)
Metacrisis as a Framework for AI Governance · Jonah Wilberg · 22 Sep 2025 14:17 UTC · 35 points · 2 comments · 8 min read
[Question] Would a much-improved understanding of regime transitions have a net positive impact? · Michael Latowicki · 5 Jun 2023 14:53 UTC · 18 points · 8 comments · 1 min read
Rabbits, robots and resurrection · Patrick Wilson · 10 May 2022 15:00 UTC · 9 points · 0 comments · 15 min read
CLR’s Annual Report 2021 · stefan.torges · 26 Feb 2022 12:47 UTC · 79 points · 0 comments · 12 min read
Register for the Stanford Existential Risks Initiative (SERI) Symposium · Grant Higerd-Rusli · 18 Mar 2025 3:50 UTC · 7 points · 0 comments · 1 min read · (cisac.fsi.stanford.edu)
Tough enough? Robust satisficing as a decision norm for long-term policy analysis · Global Priorities Institute · 31 Oct 2020 13:28 UTC · 5 points · 0 comments · 3 min read · (globalprioritiesinstitute.org)
New Study in Science Suggests a Severe Bottleneck in Human Population Size 930,000 Years Ago · DannyBressler · 31 Aug 2023 22:19 UTC · 8 points · 0 comments · 1 min read · (www.science.org)
AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level? · Center for AI Safety · 6 Jun 2023 15:56 UTC · 12 points · 2 comments · 7 min read · (newsletter.safe.ai)
AI safety remains underfunded by more than 3 OOMs · Impatient_Longtermist 🔸🌱 · 6 Oct 2025 19:53 UTC · 25 points · 3 comments · 1 min read · (www.nber.org)
When AI Speaks Too Soon: How Premature Revelation Can Suppress Human Emergence · KaedeHamasaki · 10 Apr 2025 18:19 UTC · 1 point · 3 comments · 3 min read
Results from an Adversarial Collaboration on AI Risk (FRI) · Forecasting Research Institute · 11 Mar 2024 15:54 UTC · 196 points · 25 comments · 9 min read · (forecastingresearch.org)
The Germy Paradox – Filters: A taboo · eukaryote · 19 Oct 2019 0:14 UTC · 17 points · 2 comments · 9 min read · (eukaryotewritesblog.com)
Simulating a possible alignment solution in GPT2-medium using Archetypal Transfer Learning · Miguel · 2 May 2023 16:23 UTC · 4 points · 0 comments · 18 min read
Super-exponential growth implies that accelerating growth is unimportant in the long run · kbog · 11 Aug 2020 7:20 UTC · 36 points · 9 comments · 4 min read
Rule High Stakes In, Not Out · Richard Y Chappell🔸 · 21 Oct 2025 2:44 UTC · 12 points · 8 comments · 5 min read
Against longtermism: a care-centric approach? · Aron P · 2 Oct 2022 5:00 UTC · 21 points · 2 comments · 1 min read
Should Effective Altruism be at war with North Korea? · BenHoffman · 5 May 2019 1:44 UTC · −14 points · 8 comments · 5 min read · (benjaminrosshoffman.com)
Shallow Report on Nuclear War (Arsenal Limitation) · Joel Tan🔸 · 21 Feb 2023 4:57 UTC · 44 points · 16 comments · 29 min read
A Counterargument to the Argument of Astronomical Waste · Markus Bredberg · 24 Apr 2023 17:09 UTC · 13 points · 0 comments · 4 min read
US public opinion of AI policy and risk · Jamie E · 12 May 2023 13:22 UTC · 111 points · 7 comments · 15 min read
Curated conversations with brilliant effective altruists · spencerg · 11 Apr 2022 15:32 UTC · 37 points · 0 comments · 22 min read
Speculative scenarios for climate-caused existential catastrophes · vincentzh · 27 Jan 2023 17:01 UTC · 26 points · 2 comments · 4 min read
Announcing Insights for Impact · Christian Pearson · 4 Jan 2023 7:00 UTC · 80 points · 6 comments · 1 min read
EU AI Act passed vote, and x-risk was a main topic · Ariel · 15 Jun 2023 13:16 UTC · 43 points · 2 comments · 1 min read · (www.euractiv.com)
AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks · Center for AI Safety · 2 May 2023 16:51 UTC · 35 points · 2 comments · 5 min read · (newsletter.safe.ai)
De-emphasise alignment, emphasise restraint · EuanMcLean · 4 Feb 2025 17:43 UTC · 19 points · 2 comments · 7 min read
Air Safety to Combat Global Catastrophic Biorisks [OLD VERSION] · Jam Kraprayoon · 26 Dec 2022 16:58 UTC · 78 points · 0 comments · 36 min read · (docs.google.com)
How you can save expected lives for $0.20-$400 each and reduce X risk · Denkenberger🔸 · 27 Nov 2017 2:23 UTC · 24 points · 5 comments · 8 min read
Distillation of The Offense-Defense Balance of Scientific Knowledge · Arjun Yadav · 12 Aug 2022 7:01 UTC · 17 points · 0 comments · 2 min read
Transhumanism and AI: Toward Prosperity or Extinction? · Shaïman Thürler · 22 Mar 2025 18:01 UTC · 9 points · 1 comment · 6 min read
Bargaining among worldviews · Hayley Clatterbuck · 18 Oct 2024 18:32 UTC · 58 points · 5 comments · 12 min read
Longtermism: An Impracticable Attempt to Reason Our Way into Becoming Irrationally Generous Heroes? · Fr Peter Wyg · 15 Oct 2025 8:22 UTC · 90 points · 10 comments · 10 min read
New Funding Round on Hardware-Enabled Mechanisms (HEMs) · aog · 30 Apr 2025 17:45 UTC · 54 points · 0 comments · 15 min read
Status Quo Engines—AI essay · Ilana_Goldowitz_Jimenez · 28 May 2023 14:33 UTC · 1 point · 1 comment · 15 min read
Sense-making about extreme power concentration · rosehadshar · 11 Sep 2025 10:09 UTC · 35 points · 0 comments · 4 min read
[Question] What is the most convincing article, video, etc. making the case that AI is an X-Risk · Jordan Arel · 11 Jul 2023 20:32 UTC · 4 points · 7 comments · 1 min read
Sensitive assumptions in longtermist modeling · Owen Murphy · 18 Sep 2024 1:39 UTC · 82 points · 12 comments · 7 min read · (ohmurphy.substack.com)
[Question] Is contribution to open-source capabilities research socially beneficial? - my reasoning · damc4 · 30 Oct 2025 15:11 UTC · 2 points · 1 comment · 5 min read
When do experts think human-level AI will be created? · Vishakha Agrawal · 2 Jan 2025 23:17 UTC · 33 points · 9 comments · 2 min read · (aisafety.info)
#212 – Why technology is unstoppable & how to shape AI development anyway (Allan Dafoe on The 80,000 Hours Podcast) · 80000_Hours · 17 Feb 2025 16:38 UTC · 16 points · 0 comments · 19 min read
Increasing risks of GCRs due to climate change · Leonora_Camner · 12 Apr 2024 15:57 UTC · 19 points · 3 comments · 1 min read
Critique of Superintelligence Part 2 · James Fodor · 13 Dec 2018 5:12 UTC · 10 points · 12 comments · 7 min read
Strategic Alliances for a Viable Earth can reduce the risk of crossing global tipping points · Ulf Graf 🔹 · 22 Nov 2025 21:22 UTC · 12 points · 2 comments · 3 min read
A model about the effect of total existential risk on career choice · Jonas Moss · 10 Sep 2022 7:18 UTC · 12 points · 4 comments · 2 min read
A Taxonomy Of AI System Evaluations · Maxime Riché 🔸 · 19 Aug 2024 9:08 UTC · 13 points · 0 comments · 14 min read
EA Netherlands’ Annual Strategy for 2024 · James Herbert · 5 Jun 2024 15:07 UTC · 40 points · 4 comments · 6 min read
The Mystery of the Cuban missile crisis · Nathan_Barnard · 5 May 2022 22:51 UTC · 10 points · 4 comments · 9 min read
Utopia, for what? · Arturo Macias · 15 Sep 2025 14:50 UTC · −2 points · 2 comments · 2 min read
AI Safety Needs Great Product Builders · James Brady · 2 Nov 2022 11:33 UTC · 45 points · 1 comment · 6 min read
An Overview of Political Science (Policy and International Relations Primer for EA, Part 3) · Davidmanheim · 5 Jan 2020 12:54 UTC · 22 points · 4 comments · 10 min read
Seeking Input to AI Safety Book for non-technical audience · Darren McKee · 10 Aug 2023 18:03 UTC · 11 points · 4 comments · 1 min read
My summary of “Pragmatic AI Safety” · Eleni_A · 5 Nov 2022 14:47 UTC · 14 points · 0 comments · 5 min read
Slowing down AI progress is an underexplored alignment strategy · Michael Huang · 13 Jul 2022 3:22 UTC · 91 points · 11 comments · 3 min read · (www.lesswrong.com)
The US expands restrictions on AI exports to China. What are the x-risk effects? · poppinfresh · 14 Oct 2022 18:17 UTC · 161 points · 20 comments · 4 min read
Present-day good intentions aren’t sufficient to make the longterm future good in expectation · trurl · 2 Sep 2022 3:22 UTC · 7 points · 0 comments · 14 min read
AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models · Center for AI Safety · 9 May 2023 15:26 UTC · 60 points · 0 comments · 4 min read · (newsletter.safe.ai)
AI Risk: Can We Thread the Needle? [Recorded Talk from EA Summit Vancouver ’25] · Evan R. Murphy · 2 Oct 2025 19:05 UTC · 8 points · 0 comments · 2 min read
The Guardian calls EA “cultish” and accuses the late FHI of “Eugenics on Steroids” · Damin Curtis🔹 · 28 Apr 2024 13:44 UTC · 14 points · 12 comments · 1 min read · (www.theguardian.com)
The AI Endgame: A counterfactual to AI alignment by an AI Safety newcomer · Andreas P · 1 Dec 2023 5:49 UTC · 2 points · 5 comments · 3 min read
Military Artificial Intelligence as Contributor to Global Catastrophic Risk · MMMaas · 27 Jun 2022 10:35 UTC · 42 points · 0 comments · 52 min read
Launching The Collective Intelligence Project: Whitepaper and Pilots · jasmine_wang · 6 Feb 2023 17:00 UTC · 38 points · 8 comments · 2 min read · (cip.org)
Twin Cities 10% Launch Event: Lives on the Line — Rapid Response Fund (with The Life You Can Save) · Ryan Begley🔸 · 19 May 2025 3:37 UTC · 29 points · 0 comments · 1 min read
Survey reveals chasm between public and expert views of AI bio-risk · Impatient_Longtermist 🔸🌱 · 13 Oct 2025 16:24 UTC · 10 points · 1 comment · 1 min read
BERI, Epoch, and FAR will explain their work & current job openings online this Sunday · Rockwell · 19 Aug 2022 20:34 UTC · 7 points · 0 comments · 1 min read
AI Safety: The [Hypothetical] Video Game · barryl 🔸 · 18 Apr 2025 20:19 UTC · 3 points · 2 comments · 3 min read
Promethean Governance and Memetic Legitimacy: Lessons from the Venetian Doge for AI Era Institutions · Paul Fallavollita · 19 Mar 2025 18:09 UTC · 0 points · 0 comments · 3 min read
Risks from the UK’s planned increase in nuclear warheads · Matt Goodman · 15 Aug 2021 20:14 UTC · 23 points · 8 comments · 2 min read
Venture Capital Influence Tracker · Kayode Adekoya · 25 Nov 2025 14:09 UTC · 1 point · 0 comments · 5 min read
Longtermism and the Challenge of Infinity: Confronting Infinitarian Paralysis (In response to Longtermism in an Infinite World (by Christian Tarsney & Hayden Wilkinson · Yetty · 18 Sep 2025 12:08 UTC · 2 points · 0 comments · 3 min read
Invite: UnConference, How best for humans to thrive and survive over the long-term · Ben Yeoh · 27 Jul 2022 22:19 UTC · 10 points · 2 comments · 2 min read
A Theologian’s Response to Anthropogenic Existential Risk · Fr Peter Wyg · 3 Nov 2022 4:37 UTC · 108 points · 17 comments · 11 min read
Correcting the Foundations: Exposing the Contradictions of Moral Relativism and the Need for Objective Standards in Ethics and AI Alignment · Howl404 · 9 Jul 2025 15:27 UTC · 1 point · 0 comments · 4 min read
“The Physicists”: A play about extinction and the responsibility of scientists · Lara_TH · 29 Nov 2022 16:53 UTC · 28 points · 1 comment · 8 min read
[Question] How much (more) data do we need to claim extreme cost-effectiveness? · Niek Versteegde, founder GOAL 31 Oct 2024 12:36 UTC · 28 points · 14 comments · 6 min read
X-Risk, Anthropics, & Peter Thiel’s Investment Thesis · Jackson Wagner · 26 Oct 2021 18:38 UTC · 50 points · 1 comment · 19 min read
Formalizing Space-Faring Civilizations Saturation concepts and metrics · Maxime Riché 🔸 · 13 Mar 2025 9:44 UTC · 13 points · 0 comments · 8 min read
Sobre pensar no pior [Portuguese: “On thinking about the worst”] · Ramiro · 25 Jan 2024 20:45 UTC · 6 points · 1 comment · 4 min read
Arguments for Why Preventing Human Extinction is Wrong · Anthony Fleming · 21 May 2022 7:17 UTC · 32 points · 48 comments · 3 min read
[Opzionale] ‘Considerazioni cruciali e filantropia saggia’, di Nick Bostrom [Italian: “(Optional) ‘Crucial considerations and wise philanthropy’, by Nick Bostrom”] · EA Italy · 12 Jan 2023 3:11 UTC · 1 point · 0 comments · 1 min read · (altruismoefficace.it)
[Question] How much does climate change & the decline of liberal democracy indirectly increase the probability of an x-risk? · Earthling · 1 Sep 2022 18:33 UTC · 7 points · 7 comments · 1 min read
An argument that EA should focus more on climate change · Ann Garth 🔸 · 8 Dec 2020 2:48 UTC · 30 points · 3 comments · 10 min read
Jan Kirchner on AI Alignment · birtes · 17 Jan 2023 15:11 UTC · 5 points · 0 comments · 1 min read
Apply to Aether—Independent LLM Agent Safety Research Group · RohanS · 21 Aug 2024 9:40 UTC · 47 points · 13 comments · 8 min read
What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating it · Mike Cassidy 🔸 · 3 Feb 2022 11:26 UTC · 53 points · 0 comments · 6 min read
The Science of AI Is Too Important to Be Left to the Scientists · AndrewDoris · 23 Oct 2024 19:10 UTC · 3 points · 0 comments · 1 min read · (foreignpolicy.com)
One, perhaps underrated, AI risk. · Alex (Αλέξανδρος) · 28 Nov 2024 10:34 UTC · 7 points · 1 comment · 3 min read
The applicability of transsentientist critical path analysis · Peter Sølling · 11 Aug 2020 11:26 UTC · 0 points · 2 comments · 32 min read · (www.optimalaltruism.com)
If We Can’t End Factory Farming, Can We Really Shape the Far Future? · Krimsey · 17 Oct 2025 16:48 UTC · 25 points · 14 comments · 3 min read
⿻ Symbiogenesis vs. Convergent Consequentialism · plex · 21 Oct 2025 10:40 UTC · 17 points · 1 comment · 20 min read
Why so few recent published net assessments of x-risks? · different Sam · 2 Jun 2025 14:35 UTC · 4 points · 2 comments · 1 min read
[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe · Gus Docker · 7 May 2021 14:32 UTC · 8 points · 0 comments · 1 min read · (anchor.fm)
Oxford Biosecurity Group: Applications Open and 2023 Retrospective · Swan 🔸 · 6 Jan 2024 6:20 UTC · 33 points · 0 comments · 11 min read
Infinite Rewards, Finite Safety: New Models for AI Motivation Without Infinite Goals · Whylome Team · 12 Nov 2024 7:21 UTC · −5 points · 1 comment · 2 min read
The Long Reflection as the Great Stagnation · Larks · 1 Sep 2022 20:55 UTC · 43 points · 2 comments · 8 min read
“A Creepy Feeling”: Nixon’s Decision to Disavow Biological Weapons · TW123 · 30 Sep 2022 15:17 UTC · 48 points · 3 comments · 17 min read
Suffering-Focused Ethics (SFE) FAQ · EdisonY · 16 Oct 2021 11:33 UTC · 80 points · 22 comments · 24 min read
Article Summary: Current and Near-Term AI as a Potential Existential Risk Factor · AndreFerretti · 7 Jun 2023 13:53 UTC · 12 points · 1 comment · 1 min read · (dl.acm.org)
Space-Faring Civilization density estimates and models—Review · Maxime Riché 🔸 · 27 Feb 2025 11:44 UTC · 16 points · 0 comments · 12 min read
International cooperation as a tool to reduce two existential risks. · johl@umich.edu · 19 Apr 2021 16:51 UTC · 28 points · 4 comments · 23 min read
Climate Advocacy and AI Safety: Supercharging AI Slowdown Advocacy · Matthew McRedmond🔹 · 25 Jul 2024 12:08 UTC · 10 points · 7 comments · 2 min read
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be · Eevee🔹 · 22 Jan 2021 23:57 UTC · 57 points · 20 comments · 1 min read · (80000hours.org)
The Missing Piece: Why We Need a Grand Strategy for AI · Coleman · 28 Feb 2025 23:49 UTC · 7 points · 1 comment · 9 min read
Global Risks Weekly Roundup #19/2025: India/Pakistan ceasefire, US/China tariffs deal & OpenAI nonprofit control · NunoSempere · 12 May 2025 17:11 UTC · 16 points · 0 comments · 1 min read
A Roundtable for Safe AI (RSAI)? · Lara_TH · 9 Mar 2023 12:11 UTC · 9 points · 0 comments · 4 min read
A Framework for Technical Progress on Biosecurity · kyle_fish · 3 Nov 2021 10:57 UTC · 87 points · 1 comment · 9 min read
Report on the Desirability of Science Given New Biotech Risks · Matt Clancy · 17 Jan 2024 19:42 UTC · 82 points · 24 comments · 4 min read
The road from human-level to superintelligent AI may be short · Vishakha Agrawal · 23 Apr 2025 11:19 UTC · 3 points · 0 comments · 2 min read · (aisafety.info)
The Basic Case For Doom · Bentham's Bulldog · 30 Sep 2025 16:03 UTC · 14 points · 0 comments · 5 min read
Part 1/4: A Case for Abolition · Dhruv Makwana · 11 Jan 2023 13:46 UTC · 33 points · 7 comments · 3 min read
S-risk FAQ · Tobias_Baumann · 18 Sep 2017 8:05 UTC · 29 points · 8 comments · 8 min read
Center on Long-Term Risk: Annual Review & Fundraiser 2025 · Tristan Cook · 4 Dec 2025 18:14 UTC · 35 points · 1 comment · 4 min read · (longtermrisk.org)
Phil Torres’ article: “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’” · Ben_Eisenpress · 6 Aug 2021 7:19 UTC · 6 points · 13 comments · 1 min read
Why space debris is a pressing problem · Leah K · 10 Sep 2025 21:49 UTC · 6 points · 0 comments · 4 min read
Oxford Biosecurity Group 2024 Impact Evaluation: Capacity Building (Summary/Linkpost) · Lin BL · 3 Feb 2025 7:31 UTC · 20 points · 0 comments · 1 min read · (www.oxfordbiosecuritygroup.com)
Public Opinion on AI Safety: AIMS 2023 and 2021 Summary · Janet Pauketat · 25 Sep 2023 18:09 UTC · 19 points · 0 comments · 3 min read · (www.sentienceinstitute.org)
An Evolutionary Argument undermining Longtermist thinking? · Jim Buhler · 3 Mar 2025 14:47 UTC · 31 points · 10 comments · 8 min read
The Domestication of Zebras · Further or Alternatively · 9 Sep 2022 10:58 UTC · 15 points · 20 comments · 2 min read
Prudential longtermism is defanged by the strategy of procrastination — and that’s not all · Yarrow Bouchard 🔸 · 6 Nov 2025 17:42 UTC · 13 points · 2 comments · 9 min read
Preparing for Power Outages in Disasters · Fin · 8 Mar 2022 17:04 UTC · 9 points · 0 comments · 4 min read
Apply by 10th June: ‘Introduction to Biosecurity’ Online Course Starting in July · Lin BL · 15 May 2025 18:08 UTC · 15 points · 0 comments · 1 min read
Summary: Against the Singularity Hypothesis (David Thorstad) · Noah Varley🔸 · 27 Mar 2024 13:48 UTC · 65 points · 10 comments · 5 min read
Safety regulators: A tool for mitigating technological risk · JustinShovelain · 21 Jan 2020 13:09 UTC · 10 points · 0 comments · 4 min read
Prior X%—<1%: A quantified ‘epistemic status’ of your prediction. · tcelferact · 2 Jun 2023 15:51 UTC · 11 points · 1 comment · 1 min read
Criticism of EA and longtermism · St. Ignorant · 2 Sep 2022 7:23 UTC · 2 points · 0 comments · 14 min read
Asterisk Magazine Issue 03: AI · alejandro · 24 Jul 2023 15:53 UTC · 34 points · 3 comments · 1 min read · (asteriskmag.com)
How much donations are needed to neutralise the annual x-risk footprint of the mean human? · Vasco Grilo🔸 · 22 Sep 2022 6:41 UTC · 8 points · 2 comments · 1 min read
An Informal Review of Space Exploration · kbog · 31 Jan 2020 13:16 UTC · 51 points · 5 comments · 35 min read
[Question] Neglected Transmission-Blocking Interventions? · christian.r · 11 Jan 2024 21:28 UTC · 12 points · 5 comments · 1 min read
Filling the Void: A Comprehensive Database for AI Risks Materials · JAM · 28 May 2024 16:03 UTC · 10 points · 1 comment · 4 min read
Announcing the ITAM AI Futures Fellowship · AmAristizabal · 28 Jul 2023 16:44 UTC · 43 points · 3 comments · 2 min read
[Question] Why won’t nanotech kill us all? · Yarrow Bouchard 🔸 · 16 Dec 2023 23:27 UTC · 21 points · 5 comments · 1 min read
Announcing EA Virtual Programs Pilot Biosecurity Book Club · JMonty🔸 · 27 Sep 2023 1:35 UTC · 24 points · 1 comment · 1 min read
[Question] What are the possible scenarios of AI simulating biological suffering to cause s-risks? · jackchang110 · 30 Oct 2025 13:42 UTC · 6 points · 1 comment · 1 min read
AI Safety Incubation Program—Applications Open · Catalyze Impact · 16 Aug 2024 15:37 UTC · 11 points · 0 comments · 2 min read
Rewilding Is Extremely Bad · Bentham's Bulldog · 18 Nov 2025 17:44 UTC · 8 points · 11 comments · 7 min read
A Third World War?: Let’s help those who are holding it back (Tim Snyder) · Aaron Goldzimer · 26 Nov 2024 3:07 UTC · 1 point · 1 comment · 1 min read · (snyder.substack.com)
Aiming for heaven [short poem] · Avila · 10 Mar 2024 6:14 UTC · 31 points · 4 comments · 1 min read
Post-Mortem: McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss · McGill EA x Law · 31 Jan 2023 18:57 UTC · 11 points · 0 comments · 4 min read
Opportunities that surprised us during our Clearer Thinking Regrants program · spencerg · 7 Nov 2022 13:09 UTC · 116 points · 5 comments · 9 min read
New Frontiers in AI Safety · Hans Gundlach · 2 Apr 2025 2:00 UTC · 6 points · 0 comments · 4 min read · (drive.google.com)
French 2d explainer videos on longtermism (english subtitles) · Gaetan_Selle 🔷 · 27 Feb 2023 9:00 UTC · 20 points · 0 comments · 1 min read
Problem: Guaranteeing the right to life for everyone, in the infinitely long term (part 1) · lamparita · 18 Aug 2024 12:13 UTC · 2 points · 2 comments · 8 min read
[Question] Will the next global conflict be more like World War I? · FJehn · 26 Mar 2022 14:57 UTC · 7 points · 5 comments · 2 min read
4 Years Later: President Trump and Global Catastrophic Risk · HaydnBelfield · 25 Oct 2020 16:28 UTC · 43 points · 10 comments · 10 min read
Addressing challenges for s-risk reduction: Toward positive common-ground proxies · Teo Ajantaival · 22 Mar 2025 17:50 UTC · 52 points · 1 comment · 17 min read
ARENA 7.0 - Call for Applicants · James Hindmarch · 30 Sep 2025 15:07 UTC · 6 points · 0 comments · 6 min read · (www.lesswrong.com)
166 States Vote to Adopt Lethal Autonomous Weapons Resolution at the UNGA · Heramb Podar · 8 Dec 2024 21:23 UTC · 14 points · 0 comments · 1 min read
Most Leading AI Experts Believe That Advanced AI Could Be Extremely Dangerous to Humanity · jai · 4 May 2023 16:19 UTC · 31 points · 1 comment · 1 min read · (laneless.substack.com)
(Applications Open!) UChicago XLab Summer Research Fellowship 2024 · ZacharyRudolph · 26 Feb 2024 18:20 UTC · 15 points · 0 comments · 4 min read · (xrisk.uchicago.edu)
What are the differences between a singularity, an intelligence explosion, and a hard takeoff? · Vishakha Agrawal · 3 Apr 2025 10:34 UTC · 6 points · 0 comments · 2 min read · (aisafety.info)
The limits of black-box evaluations: two hypotheticals · TFD · 11 Apr 2025 20:52 UTC · 1 point · 0 comments · 4 min read · (www.thefloatingdroid.com)
Winning Non-Trivial Project: Setting a high standard for frontier model security · XaviCF · 8 Jan 2024 11:20 UTC · 31 points · 0 comments · 18 min read
Timelines to Transformative AI: an investigation · Zershaaneh Qureshi · 25 Mar 2024 18:11 UTC · 76 points · 8 comments · 50 min read
Open Philanthropy Shallow Investigation: Civil Conflict Reduction · Lauren Gilbert · 12 Apr 2022 18:18 UTC · 122 points · 12 comments · 24 min read
Lying is Cowardice, not Strategy · Connor Leahy · 25 Oct 2023 5:59 UTC · −5 points · 15 comments · 5 min read · (cognition.cafe)
We should think about the pivotal act again. Here’s a better version of it. · Otto · 28 Aug 2025 9:29 UTC · 3 points · 1 comment · 3 min read
Resilience to Nuclear & Volcanic Winter · Stan Pinsent · 9 Jul 2024 10:39 UTC · 96 points · 14 comments · 3 min read
Toby Ord’s new report on lessons from the development of the atomic bomb · Ishan Mukherjee · 22 Nov 2022 10:37 UTC · 65 points · 3 comments · 1 min read · (www.governance.ai)
Multiple high-impact PhD student positions · Denkenberger🔸 · 19 Nov 2022 0:02 UTC · 32 points · 0 comments · 3 min read
Shortening & enlightening dark ages as a sub-area of catastrophic risk reduction · Jpmos · 5 Mar 2022 7:43 UTC · 27 points · 7 comments · 5 min read
[Question] Where the QALY’s at in political science? · Timothy_Liptrot · 5 Aug 2020 5:04 UTC · 7 points · 7 comments · 1 min read
Updates from Campaign for AI Safety · Jolyn Khoo · 31 Oct 2023 5:46 UTC · 14 points · 1 comment · 2 min read · (www.campaignforaisafety.org)
Expression of Interest: Director of Operations at the Center on Long-term Risk · Amrit Sidhu-Brar 🔸 · 25 Jan 2024 18:43 UTC · 55 points · 0 comments · 6 min read
Exploring AI Safety through “Escape Experiment”: A Short Film on Superintelligence Risks · Gaetan_Selle 🔷 · 10 Nov 2024 4:42 UTC · 4 points · 0 comments · 2 min read
[Question] How did the AI Safety talent pipeline come to work so well? · Alejandro Acelas 🔸 · 24 Jul 2025 7:24 UTC · 7 points · 2 comments · 1 min read
A website you can share with Christians to get them on board with regulating AI · JonCefalu · 8 Apr 2023 13:36 UTC · −4 points · 8 comments · 1 min read · (jesus-the-antichrist.com)
AI and Chemical, Biological, Radiological, & Nuclear Hazards: A Regulatory Review · Elliot Mckernon · 10 May 2024 8:41 UTC · 8 points · 1 comment · 10 min read
AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use · Center for AI Safety · 12 Jul 2023 16:58 UTC · 26 points · 0 comments · 4 min read · (newsletter.safe.ai)
(p-)Zombie Universe: another X-risk · Toby Tremlett🔹 · 28 Jul 2022 21:34 UTC · 21 points · 5 comments · 4 min read
Apply to be a Stanford HAI Junior Fellow (Assistant Professor- Research) by Nov. 15, 2021 · Vael Gates · 31 Oct 2021 2:21 UTC · 15 points · 0 comments · 1 min read
Exploring Key Cases with the Portfolio Builder · Hayley Clatterbuck · 10 Jul 2024 12:07 UTC · 73 points · 1 comment · 6 min read
[Link post] Optimistic “Longtermism” Is Terrible For Animals · BrianK · 6 Sep 2022 22:38 UTC · 50 points · 6 comments · 1 min read · (www.forbes.com)
Stable totalitarianism: an overview · 80000_Hours · 29 Oct 2024 16:07 UTC · 36 points · 1 comment · 20 min read · (80000hours.org)
Communication by existential risk organizations: State of the field and suggestions for improvement · Existential Risk Communication Project · 13 Aug 2024 7:06 UTC · 10 points · 3 comments · 13 min read
Podcast: “When Foreign Aid Gets Zeroed Out Overnight” · vsrinivas · 7 Aug 2025 17:38 UTC · 9 points · 0 comments · 1 min read · (podcast.importantnotimportant.com)
‘The AI Dilemma: Growth vs Existential Risk’: An Extension for EAs and a Summary for Non-economists · TomHoulden · 21 Apr 2024 16:28 UTC · 68 points · 1 comment · 16 min read
‘Existential Risk and Growth’ Deep Dive #3 - Extensions and Variations · AHT · 20 Dec 2020 12:39 UTC · 5 points · 0 comments · 12 min read
A Critique of AI Takeover Scenarios · James Fodor · 31 Aug 2022 13:49 UTC · 53 points · 4 comments · 12 min read
An Update On The Campaign For AI Safety Dot Org · yanni kyriacos · 5 May 2023 0:19 UTC · 26 points · 4 comments · 1 min read
Should YouTube make recommendations for the climate? · Matrice Jacobine🔸🏳️‍⚧️ · 5 Sep 2024 15:22 UTC · 1 point · 0 comments · 1 min read · (link.springer.com)
Will an AGI/ASI adopt the Doomsday argument? · CuriousWhisperer · 7 Oct 2025 21:43 UTC · 2 points · 4 comments · 11 min read
The Dilemma of Ultimate Technology · Aino · 20 Jul 2023 12:24 UTC · 1 point · 0 comments · 7 min read
Anki deck for learning the main AI safety orgs, projects, and programs · Bryce Robertson · 29 Sep 2023 18:42 UTC · 17 points · 5 comments · 1 min read
AI Safety Has a Very Particular Worldview · zeshen🔸 · 17 Oct 2025 19:19 UTC · 39 points · 5 comments · 5 min read
Announcing the ERA Cambridge Summer Research Fellowship · nandini · 16 Mar 2023 11:37 UTC · 83 points · 5 comments · 3 min read
Space Exploration & Satellites on Our World in Data · EdMathieu · 14 Jun 2022 12:05 UTC · 57 points · 2 comments · 1 min read · (ourworldindata.org)
“Far Coordination” · 𝕮𝖎𝖓𝖊𝖗𝖆 · 23 Nov 2022 17:14 UTC · 5 points · 0 comments · 9 min read
AI and the feeling of living in two worlds · michel · 10 Oct 2024 17:51 UTC · 40 points · 3 comments · 7 min read
When Self-Optimizing AI Collapses From Within: A Conceptual Model of Structural Singularity · KaedeHamasaki · 7 Apr 2025 20:10 UTC · 4 points · 0 comments · 1 min read
Careless talk on US-China AI competition? (and criticism of CAIS coverage) · Oliver Sourbut · 20 Sep 2023 12:46 UTC · 52 points · 19 comments · 9 min read · (www.oliversourbut.net)
The Achilles’ Heel of Civilization: Why Network Science Reveals Our Highest-Leverage Moment · vinniescent · 6 Oct 2025 9:27 UTC · 7 points · 1 comment · 2 min read
What We Can Do to Prevent Extinction by AI · Joe Rogero · 24 Feb 2025 17:15 UTC · 23 points · 3 comments · 11 min read
MATS 8.0 Research Projects · Jonathan Michala · 8 Sep 2025 21:36 UTC · 9 points · 0 comments · 1 min read · (substack.com)
‘Surveillance Capitalism’ & AI Governance: Slippery Business Models, Securitisation, and Self-Regulation · Charlie Harrison · 29 Feb 2024 15:47 UTC · 19 points · 2 comments · 12 min read
SPAR Spring ’26 mentor apps open—now accepting biosecurity, AI welfare, and more! · rebecca_baron · 5 Nov 2025 16:42 UTC · 16 points · 0 comments · 1 min read
Why do we post our AI safety plans on the Internet? · Peter S. Park · 31 Oct 2022 16:27 UTC · 15 points · 22 comments · 11 min read
“Essay on Longtermism” competition. A respond to Chapter 10, “What Are the Prospects of Forecasting the Far Future?” by David Rhys Bernard and Eva Vivalt, from Essays on Longtermism: Present Action for the Distant Future. · Bavertov · 27 Sep 2025 10:56 UTC · 1 point · 0 comments · 5 min read
A list of lists of large catastrophes · FJehn · 18 Jun 2025 10:24 UTC · 24 points · 1 comment · 12 min read · (existentialcrunch.substack.com)
Brief notes on key limitations in Mogensen’s Maximal Cluelessness · Jim Buhler · 8 Sep 2025 13:28 UTC · 24 points · 0 comments · 1 min read
Is that DNA Dangerous? · Mslkmp · 30 Jan 2025 19:27 UTC · 14 points · 0 comments · 1 min read · (press.asimov.com)
Early Reflections and Resources on the Russian Invasion of Ukraine · SethBaum · 18 Mar 2022 14:54 UTC · 57 points · 3 comments · 8 min read
[Question] What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk? · Ben Snodin · 11 Feb 2023 13:49 UTC · 118 points · 16 comments · 1 min read
Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework · Bridges · 15 Apr 2022 19:35 UTC · 87 points · 3 comments · 2 min read
Silly idea to enhance List representation accuracy · Phib · 24 Apr 2023 0:30 UTC · 7 points · 4 comments · 2 min read
Improving science: Influencing the direction of research and the choice of research questions · C Tilli · 20 Dec 2021 10:20 UTC · 65 points · 13 comments · 15 min read
How to PhD · eca · 28 Mar 2021 19:56 UTC · 119 points · 28 comments · 11 min read
Modelling large-scale cyber attacks from advanced AI systems with Advanced Persistent Threats · Iyngkarran Kumar · 2 Oct 2023 9:54 UTC · 28 points · 2 comments · 30 min read
Space settlement and the time of perils: a critique of Thorstad · Matthew Rendall · 14 Apr 2024 15:29 UTC · 46 points · 10 comments · 4 min read
New 3-hour podcast with Anders Sandberg about Grand Futures · Gus Docker · 6 Oct 2020 10:47 UTC · 21 points · 1 comment · 1 min read
Pre-Sputnik Earth-Orbit Glints · Vasco Grilo🔸 · 31 Oct 2025 17:54 UTC · 9 points · 0 comments · 5 min read · (www.overcomingbias.com)
How do we solve the alignment problem? · Joe_Carlsmith · 13 Feb 2025 18:27 UTC · 38 points · 1 comment · 7 min read · (joecarlsmith.substack.com)
[Linkpost] NY Times Feature on Anthropic · Garrison · 12 Jul 2023 19:30 UTC · 34 points · 3 comments · 5 min read · (www.nytimes.com)
Scaling Wargaming for Global Catastrophic Risks with AI · rai · 18 Jan 2025 15:07 UTC · 73 points · 1 comment · 4 min read · (blog.sentinel-team.org)
Good government · rosehadshar · 10 Sep 2025 13:22 UTC · 64 points · 1 comment · 6 min read
[Question] Common rebuttal to “pausing” or regulating AI · sammyboiz🔸 · 22 May 2024 4:21 UTC · 4 points · 2 comments · 1 min read
“Risk Awareness Moments” (Rams): A concept for thinking about AI governance interventions · oeg · 14 Apr 2023 17:40 UTC · 53 points · 0 comments · 9 min read
Max Tegmark — The AGI Entente Delusion · Matrice Jacobine🔸🏳️‍⚧️ · 13 Oct 2024 17:42 UTC · 0 points · 1 comment · 1 min read · (www.lesswrong.com)
Pillars to Convergence · Phlobton · 1 Apr 2023 13:04 UTC · 1 point · 0 comments · 8 min read
[Linkpost] Nick Bostrom’s “Apology for an Old Email” · pseudonym · 12 Jan 2023 4:55 UTC · 12 points · 96 comments · 1 min read · (nickbostrom.com)
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development · Jan_Kulveit · 30 Jan 2025 17:07 UTC · 39 points · 4 comments · 2 min read · (gradual-disempowerment.ai)
[Question] Fighting to exist, forgetting to live. Suggesting a missing priority in EA thinking · Dr Kassim · 25 Apr 2025 12:06 UTC · 8 points · 0 comments · 3 min read
Announcing the Swiss Existential Risk Initiative (CHERI) 2023 Research Fellowship · Tobias Häberli · 27 Mar 2023 15:35 UTC · 32 points · 0 comments · 2 min read
Rough attempt to profile charities which support Ukrainian war relief in terms of their cost-effectiveness. · Michael · 27 Feb 2022 0:51 UTC · 29 points · 5 comments · 4 min read
Introducing Survival Sanctuaries · James Norris · 5 Feb 2025 16:00 UTC · 5 points · 0 comments · 1 min read
Evaluating Communal Violence from an Effective Altruist Perspective · Frank Fredericks · 13 Aug 2019 19:38 UTC · 16 points · 4 comments · 8 min read
Fill out this census of everyone interested in reducing catastrophic AI risks · AHT · 18 May 2024 15:53 UTC · 105 points · 1 comment · 1 min read
[Question] Why haven’t we been destroyed by a power-seeking AGI from elsewhere in the universe? · Jadon Schmitt · 22 Jul 2023 7:21 UTC · 35 points · 14 comments · 1 min read
The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk · Ember · 16 Aug 2022 19:48 UTC · 25 points · 35 comments · 17 min read
Apply for Stanford Existential Risks Initiative (SERI) Postdoc · Vael Gates · 14 Dec 2021 21:50 UTC · 28 points · 2 comments · 1 min read
Civilizational vulnerabilities · Vasco Grilo🔸 · 22 Apr 2022 9:37 UTC · 7 points · 0 comments · 3 min read
Shallow Report on Asteroids · Joel Tan🔸 · 20 Oct 2022 1:34 UTC · 27 points · 7 comments · 13 min read
The Map of Impact Risks and Asteroid Defense · turchin · 3 Nov 2016 15:34 UTC · 7 points · 8 comments · 4 min read
Prize Money ($100) for Valid Technical Objections to Icesteading · Roko · 18 Dec 2024 23:40 UTC · −2 points · 2 comments · 1 min read · (twitter.com)
Challenges from Career Transitions and What To Expect From Advising · ClaireB · 24 Jul 2025 13:22 UTC · 26 points · 1 comment · 9 min read
Shallow evaluations of longtermist organizations · NunoSempere · 24 Jun 2021 15:31 UTC · 193 points · 34 comments · 34 min read
The Case for Shorttermism—by Robert Wright · Miquel Banchs-Piqué (prev. mikbp) · 16 Aug 2022 20:00 UTC · 24 points · 0 comments · 1 min read · (nonzero.substack.com)
What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · somethoughts · 7 Mar 2021 8:02 UTC · 0 points · 11 comments · 3 min read
Sentience-Based Alignment Strategies: Should we try to give AI genuine empathy/compassion? · Lloy2 🔹 · 4 May 2025 20:45 UTC · 16 points · 1 comment · 3 min read
Moral Spillover in Human-AI Interaction · Katerina Manoli · 5 Jun 2023 15:20 UTC · 17 points · 1 comment · 13 min read
The Journal of Dangerous Ideas · rogersbacon · 13 Feb 2024 15:43 UTC · −26 points · 1 comment · 5 min read · (www.secretorum.life)
[Question] How can we decrease the short-term probability of the nuclear war? · Just Learning · 1 Mar 2022 3:24 UTC · 18 points · 0 comments · 1 min read
Retrospective on recent activity of Riesgos Catastróficos Globales · Jaime Sevilla · 1 May 2023 18:35 UTC · 45 points · 0 comments · 5 min read
Shallow Report on Nuclear War (Abolishment) · Joel Tan🔸 · 18 Oct 2022 7:36 UTC · 35 points · 14 comments · 18 min read
Eliciting responses to Marc Andreessen’s “Why AI Will Save the World” · Coleman · 17 Jul 2023 19:58 UTC · 2 points · 2 comments · 1 min read · (a16z.com)
Climate change donation recommendations · Sanjay · 16 Jul 2020 21:17 UTC · 46 points · 7 comments · 14 min read
When the world feels unstable... · Catherine Low🔸 · 8 Sep 2025 1:59 UTC · 103 points · 4 comments · 4 min read
The necessity of “Guardian AI” and two conditions for its achievement · Proica · 28 May 2024 11:42 UTC · 1 point · 1 comment · 15 min read
AI Safety Seed Funding Network—Join as a Donor or Investor · Alexandra Bos · 16 Dec 2024 19:30 UTC · 45 points · 1 comment · 2 min read
If tech progress might be bad, what should we tell people about it? · Robert_Wiblin · 16 Feb 2016 10:26 UTC · 21 points · 18 comments · 2 min read
Biosecurity Resources I Often Recommend · Lin BL · 31 Jan 2025 18:28 UTC · 22 points · 0 comments · 1 min read · (docs.google.com)
You don’t need to be a genius to be in AI safety research · Claire Short · 10 May 2023 22:23 UTC · 28 points · 4 comments · 6 min read
Wild Animal Welfare Scenarios for AI Doom · utilistrutil · 8 Jun 2023 19:41 UTC · 54 points · 2 comments · 3 min read
[Opzionale] Per approfondire su “Il nostro ultimo secolo” [Italian: “(Optional) Further reading on ‘Our Final Century’”] · EA Italy · 12 Jan 2023 3:15 UTC · 1 point · 0 comments · 2 min read
Information security considerations for AI and the long term future · Jeffrey Ladish · 2 May 2022 20:53 UTC · 134 points · 8 comments · 11 min read
Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it? · TopherHallquist · 9 Jul 2015 16:09 UTC · 8 points · 20 comments · 1 min read
Thresholds #1: What does good look like for longtermism? · Spencer R. Ericson · 25 Jul 2023 19:17 UTC · 45 points · 36 comments · 8 min read
Why Don’t We Use Chemical Weapons Anymore? · Dale · 23 Apr 2020 1:25 UTC · 28 points · 4 comments · 3 min read · (acoup.blog)
[Video] How hav­ing Fast Fourier Trans­forms sooner could have helped with Nu­clear Disar­ma­ment—Veritasium

mako yass3 Nov 2022 20:52 UTC
12 points
1 comment1 min readEA link
(www.youtube.com)

FYI there is a Ger­man in­sti­tute study­ing so­ciolog­i­cal as­pects of ex­is­ten­tial risk

Max Görlitz12 Feb 2023 17:35 UTC
77 points
10 comments1 min readEA link

Biose­cu­rity newslet­ters you should sub­scribe to

Swan 🔸29 Jan 2023 17:00 UTC
104 points
15 comments1 min readEA link

ML4G Ger­many—AI Align­ment Camp

Evander H. 🔸27 Jun 2023 15:33 UTC
6 points
0 comments1 min readEA link

Zvi on: A Play­book for AI Policy at the Man­hat­tan Institute

Phib4 Aug 2024 21:34 UTC
9 points
1 comment7 min readEA link
(thezvi.substack.com)

Vol­canic win­ters have hap­pened be­fore—should we pre­pare for the next one?

Stan Pinsent7 Aug 2024 11:08 UTC
24 points
1 comment3 min readEA link

The US plans to spend $1.5 Trillion up­grad­ing its Nu­clear Mis­siles!!

Denis 15 Nov 2023 0:27 UTC
9 points
9 comments2 min readEA link

China’s Z-Ma­chine, a test fa­cil­ity for nu­clear weapons

EdoArad13 Dec 2018 7:03 UTC
11 points
0 comments1 min readEA link
(www.scmp.com)

A New Model for Com­pute Cen­ter Verification

Damin Curtis🔹10 Oct 2023 19:23 UTC
21 points
2 comments5 min readEA link

So, What Ex­actly is a Frac­tional Con­sul­tant?

Deena Englander23 Jun 2025 16:03 UTC
11 points
0 comments3 min readEA link

Samotsvety Nu­clear Risk Fore­casts — March 2022

NunoSempere10 Mar 2022 18:52 UTC
155 points
54 comments6 min readEA link

Will longter­mists self-efface

Noah Scales12 Aug 2022 2:32 UTC
−7 points
23 comments6 min readEA link

An Anal­y­sis of Sys­temic Risk and Ar­chi­tec­tural Re­quire­ments for the Con­tain­ment of Re­cur­sively Self-Im­prov­ing AI

Ihor Ivliev17 Jun 2025 0:16 UTC
2 points
5 comments4 min readEA link

[Question] What hap­pened to the ‘only 400 peo­ple work in AI safety/​gov­er­nance’ num­ber dated from 2020?

Vaipan15 Mar 2024 15:25 UTC
27 points
2 comments1 min readEA link

The Hu­man Con­di­tion: A Cru­cial Com­po­nent of Ex­is­ten­tial Risk Calcu­la­tions

Phil Tanny28 Aug 2022 14:51 UTC
−10 points
5 comments1 min readEA link

Ex­is­ten­tial Risks: Hu­man Rights

Chiharu Saruwatari15 Dec 2022 12:35 UTC
4 points
0 comments6 min readEA link

Who will be in charge once al­ign­ment is achieved?

trurl16 Dec 2022 16:53 UTC
8 points
2 comments1 min readEA link

[Question] Re­cent pa­per on cli­mate tip­ping points

jackva2 Mar 2023 23:11 UTC
22 points
7 comments1 min readEA link

EA and the Pos­si­ble De­cline of the US: Very Rough Thoughts

Cullen 🔸8 Jan 2021 7:30 UTC
56 points
19 comments4 min readEA link

Against be­ing fanatical

Noah Birnbaum1 Oct 2025 12:33 UTC
24 points
5 comments7 min readEA link

[Notes] Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

Ben14 Jan 2020 22:13 UTC
40 points
7 comments14 min readEA link

What failure looks like for animals

Alistair Stewart3 Sep 2025 17:55 UTC
69 points
5 comments5 min readEA link

Some his­tory top­ics it might be very valuable to investigate

MichaelA🔸8 Jul 2020 2:40 UTC
91 points
34 comments6 min readEA link

Book sum­mary: ‘Why In­tel­li­gence Fails’ by Robert Jervis

Ben Stewart19 Jun 2023 16:04 UTC
40 points
3 comments12 min readEA link

AGI in a vuln­er­a­ble world

AI Impacts2 Apr 2020 3:43 UTC
17 points
0 comments1 min readEA link
(aiimpacts.org)

[Pod­cast] Si­mon Beard on Parfit, Cli­mate Change, and Ex­is­ten­tial Risk

finm28 Jan 2021 19:47 UTC
11 points
0 comments1 min readEA link
(hearthisidea.com)

Deep­Mind’s gen­er­al­ist AI, Gato: A non-tech­ni­cal explainer

frances_lorenz16 May 2022 21:19 UTC
128 points
13 comments6 min readEA link

INTERVIEW: StakeOut.AI w/​ Dr. Peter Park

Jacob-Haimes5 Mar 2024 18:04 UTC
21 points
7 comments1 min readEA link
(into-ai-safety.github.io)

Why say ‘longtermism’ and not just ‘extinction risk’?

tcelferact10 Aug 2022 23:05 UTC
5 points
4 comments1 min readEA link

#192 – What would happen if North Korea launched a nuclear weapon at the US (Annie Jacobsen on the 80,000 Hours Podcast)

80000_Hours12 Jul 2024 19:38 UTC
13 points
1 comment12 min readEA link

Labor Participation is a High-Priority AI Alignment Risk

alx12 Aug 2024 18:48 UTC
17 points
3 comments16 min readEA link

New Report: Multi-Agent Risks from Advanced AI

Lewis Hammond23 Feb 2025 0:32 UTC
40 points
3 comments2 min readEA link
(www.cooperativeai.com)

Radical Longtermism and the Seduction of Endless Growth: A Critique of William MacAskill’s ‘What We Owe the Future’

Alexander Herwix 🔸14 Sep 2023 14:43 UTC
−13 points
15 comments1 min readEA link
(perspecteeva.substack.com)

AI Safety Endgame Stories

IvanVendrov28 Sep 2022 17:12 UTC
31 points
1 comment10 min readEA link

Guarding Tomorrow: Longtermism and the Fight Against Global Pandemics By Adeyanju Temitope Andrew

Ade Beraka 22 Sep 2025 14:16 UTC
1 point
0 comments5 min readEA link

Why misaligned AGI won’t lead to mass killings (and what actually matters instead)

Julian Nalenz6 Feb 2025 13:22 UTC
−3 points
5 comments3 min readEA link
(blog.hermesloom.org)

Alignment, Goals, & The Gut-Head Gap: A Review of Ngo. et al

Violet Hour11 May 2023 17:16 UTC
26 points
0 comments13 min readEA link

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation

Paul Bricman4 Dec 2023 7:41 UTC
4 points
0 comments16 min readEA link
(arxiv.org)

MIRI 2024 Mission and Strategy Update

Malo5 Jan 2024 1:10 UTC
154 points
38 comments8 min readEA link

III. Running its course

Maynk024 Nov 2023 19:31 UTC
6 points
1 comment5 min readEA link

Announcing the Legal Priorities Project Writing Competition: Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks

Mackenzie7 Jun 2022 9:37 UTC
104 points
8 comments9 min readEA link

Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk

eSpencer16 Sep 2022 4:44 UTC
14 points
1 comment4 min readEA link

The great energy descent (short version) - An important thing EA might have missed

CB🔸31 Aug 2022 21:50 UTC
73 points
94 comments10 min readEA link

Measuring artificial intelligence on human benchmarks is naive

Ward A11 Apr 2023 11:28 UTC
9 points
2 comments1 min readEA link

The Tyranny of Existential Risk

Karl Faulks18 Nov 2024 16:41 UTC
4 points
1 comment5 min readEA link

Longview is now offering AI grant recommendations to donors giving >$100k / year

Longview Philanthropy11 Apr 2025 16:01 UTC
73 points
0 comments2 min readEA link

Overreacting to current events can be very costly

Kelsey Piper4 Oct 2022 21:30 UTC
281 points
68 comments4 min readEA link

#203 – Interfering with wild nature, accepting death, and the origin of complex civilisation (Peter Godfrey-Smith on The 80,000 Hours Podcast)

80000_Hours4 Oct 2024 13:00 UTC
14 points
0 comments16 min readEA link

AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions [MIRI TGT Research Agenda]

peterbarnett5 May 2025 19:13 UTC
67 points
1 comment8 min readEA link
(techgov.intelligence.org)

Irrecoverable collapse

Yarrow Bouchard 🔸21 Oct 2025 11:05 UTC
10 points
4 comments4 min readEA link

Talking about longtermism isn’t very important

cb20 Oct 2025 13:15 UTC
36 points
11 comments3 min readEA link

[Question] What are examples where extreme risk policies have been successfully implemented?

Joris 🔸16 May 2022 15:37 UTC
32 points
14 comments2 min readEA link

Nuclear weapons – Problem profile

Benjamin Hilton19 Jul 2024 17:17 UTC
53 points
7 comments31 min readEA link

An International Collaborative Hub for Advancing AI Safety Research

Cody Albert22 Apr 2025 16:12 UTC
9 points
0 comments5 min readEA link

When safety is dangerous: risks of an indefinite pause on AI development, and call for realistic alternatives

Hayven Frienby18 Jan 2024 14:59 UTC
5 points
0 comments5 min readEA link

[Podcast] Thomas Moynihan on the History of Existential Risk

finm22 Mar 2021 11:07 UTC
26 points
2 comments1 min readEA link
(hearthisidea.com)

Biosecurity Resource Hub from Aron

Aron Lajko21 Jul 2023 18:07 UTC
39 points
4 comments1 min readEA link

[Question] Share AI Safety Ideas: Both Crazy and Not. №2

ank31 Mar 2025 18:45 UTC
1 point
11 comments1 min readEA link

Some thoughts on risks from narrow, non-agentic AI

richard_ngo19 Jan 2021 0:07 UTC
36 points
2 comments8 min readEA link

RA x ControlAI video: What if AI just keeps getting smarter?

Writer2 May 2025 14:19 UTC
14 points
1 comment9 min readEA link

Theoretical New Technology for Energy Generation

GraviticEngine77 Feb 2025 13:17 UTC
−1 points
2 comments9 min readEA link

Why it is important to reduce existential risk

EA Italy12 Jan 2023 2:54 UTC
1 point
0 comments2 min readEA link

AI is advancing fast

Vishakha Agrawal23 Apr 2025 11:04 UTC
2 points
2 comments2 min readEA link
(aisafety.info)

The importance of getting digital consciousness right

Derek Shiller13 Jun 2022 10:41 UTC
68 points
13 comments8 min readEA link

Hiring Retrospective: ERA Fellowship 2023

OscarD🔸5 Aug 2023 9:56 UTC
62 points
16 comments6 min readEA link

ALLFED’s 2023 Highlights

Sonia_Cassidy1 Dec 2023 0:47 UTC
61 points
5 comments27 min readEA link

Is Optimal Reflection Competitive with Extinction Risk Reduction? - Requesting Reviewers

Jordan Arel29 Jun 2025 5:13 UTC
18 points
1 comment11 min readEA link

There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe

Deborah W.A. Foulkes26 May 2022 23:46 UTC
12 points
22 comments8 min readEA link

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky13 Jul 2021 16:57 UTC
219 points
48 comments8 min readEA link
(www.cold-takes.com)

What percentage of things that could kill us all are “Other” risks?

PCO Moore10 Aug 2022 9:20 UTC
7 points
0 comments4 min readEA link

Cosmic rays could cause major electronic disruption and pose a small existential risk

M_Allcock12 Aug 2022 3:30 UTC
12 points
0 comments12 min readEA link

What do XPT forecasts tell us about nuclear risk?

Forecasting Research Institute22 Aug 2023 19:09 UTC
22 points
0 comments14 min readEA link

6) Speed is The Most Important Variable in Pandemic Risk Management

PandemicRiskMan5 Mar 2024 13:51 UTC
3 points
0 comments9 min readEA link

Remarks about Longtermism inspired by Torres’s ‘Against Longtermism’

carboniferous_umbraculum2 Feb 2022 16:20 UTC
43 points
0 comments24 min readEA link

Big Picture AI Safety: Introduction

EuanMcLean23 May 2024 11:28 UTC
34 points
3 comments5 min readEA link

Against Anonymous Hit Pieces

Anti-Omega18 Jun 2023 19:36 UTC
−25 points
3 comments1 min readEA link

5 Recent Publications on Existential Risk (April 2020 update)

HaydnBelfield29 Apr 2020 9:37 UTC
23 points
1 comment4 min readEA link

Peter Wildeford featured on the Daily Show about the risks from AI

MartinBerlin5 Dec 2025 11:42 UTC
101 points
4 comments1 min readEA link

Live like you only have 10 years left

Rafael Ruiz10 Jun 2025 12:48 UTC
8 points
10 comments11 min readEA link

Be a Stoic and build better democracies: an Aussie-as take on x-risks (review essay)

Matt Boyd21 Nov 2021 4:30 UTC
32 points
3 comments11 min readEA link

Debating AI’s Moral Status: The Most Humane and Silliest Thing Humans Do(?)

Soe Lin29 Sep 2024 5:01 UTC
5 points
5 comments3 min readEA link

[Cause Exploration Prizes] NOT Getting Absolutely Hosed by a Solar Flare

aurellem26 Aug 2022 8:23 UTC
5 points
1 comment2 min readEA link

Investigating the role of agency in AI x-risk

Corin Katzke8 Apr 2024 15:12 UTC
22 points
3 comments40 min readEA link
(www.convergenceanalysis.org)

What if we just…didn’t build AGI? An Argument Against Inevitability

Nate Sharpe10 May 2025 3:34 UTC
64 points
21 comments14 min readEA link
(natezsharpe.substack.com)

Fake thinking and real thinking

Joe_Carlsmith28 Jan 2025 20:05 UTC
78 points
3 comments38 min readEA link

OPEC for a slow AGI takeoff

vyrax21 Apr 2023 10:53 UTC
4 points
0 comments3 min readEA link

[Review and notes] How Democracy Ends—David Runciman

Ben13 Feb 2020 22:30 UTC
31 points
1 comment5 min readEA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h21 Feb 2021 13:41 UTC
16 points
1 comment7 min readEA link

Conflicting Effects of Existential Risk Mitigation Interventions

Pete Rowlett10 May 2023 22:20 UTC
10 points
0 comments8 min readEA link

How much money should we be saving for retirement?

Denkenberger🔸2 Mar 2025 6:21 UTC
22 points
6 comments2 min readEA link

NASA will re-direct an asteroid tonight as a test for planetary defence (link-post)

Ben Stewart26 Sep 2022 4:58 UTC
70 points
14 comments1 min readEA link
(theconversation.com)

Differential progress / intellectual progress / technological development

MichaelA🔸24 Apr 2020 14:08 UTC
47 points
16 comments7 min readEA link

Interview subjects for impact litigation project (biosecurity & pandemic preparedness)

Legal Priorities Project3 Mar 2022 14:20 UTC
20 points
0 comments1 min readEA link

AI safety advocates should consider providing gentle pushback following the events at OpenAI

I_machinegun_Kelly22 Dec 2023 21:05 UTC
86 points
5 comments3 min readEA link
(www.lesswrong.com)

Announcing Biosecurity Forecasting Group—Apply Now

Lin BL23 Jan 2025 16:52 UTC
25 points
0 comments1 min readEA link

[Question] Benefits/Risks of Scott Aaronson’s Orthodox/Reform Framing for AI Alignment

Jeremy21 Nov 2022 17:47 UTC
15 points
5 comments1 min readEA link
(scottaaronson.blog)

[Optional] All possible conclusions about humanity’s future are incredible

EA Italy17 Jan 2023 14:59 UTC
1 point
0 comments8 min readEA link

GCRI Open Call for Advisees and Collaborators 2022

McKenna_Fitzgerald23 May 2022 21:41 UTC
4 points
3 comments1 min readEA link

Research exercise: 5-minute inside view on how to reduce risk of nuclear war

Emrik23 Oct 2022 12:42 UTC
16 points
2 comments6 min readEA link

AI companies are unlikely to make high-assurance safety cases if timelines are short

Ryan Greenblatt23 Jan 2025 18:41 UTC
45 points
1 comment13 min readEA link

3 Stages of Competition for the Long-Term Future

JordanStone30 Nov 2025 21:55 UTC
29 points
7 comments25 min readEA link

IFRC creative competition: product or service from future autonomous weapons systems and emerging digital risks

Devin Lam21 Jul 2024 13:08 UTC
9 points
0 comments1 min readEA link
(solferinoacademy.com)

Chilean AIS Hackathon Retrospective

Agustín Covarrubias 🔸9 May 2023 1:34 UTC
67 points
0 comments5 min readEA link

Stop Applying And Get To Work

Pauliina2 Dec 2025 17:57 UTC
63 points
4 comments2 min readEA link

Centre for the Study of Existential Risk Four Month Report June—September 2020

HaydnBelfield2 Dec 2020 18:33 UTC
24 points
0 comments17 min readEA link

Alignment for focused chatbots?

Beckpm8 Jul 2023 15:09 UTC
−1 points
1 comment1 min readEA link

Identifying Geographic Hotspots for Post-Catastrophe Recovery

LNE1 Mar 2025 19:42 UTC
14 points
6 comments33 min readEA link

Nuclear Risk and Philanthropic Strategy [Founders Pledge]

christian.r25 Jul 2023 20:22 UTC
83 points
15 comments76 min readEA link
(www.founderspledge.com)

[Question] Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe?

Mati_Roy17 Feb 2020 7:47 UTC
9 points
5 comments1 min readEA link

Convergence 2024 Impact Review

David_Kristoffersson24 Mar 2025 20:28 UTC
39 points
0 comments14 min readEA link

What should AI safety be trying to achieve?

EuanMcLean23 May 2024 11:28 UTC
13 points
1 comment13 min readEA link

AI Safety Overview: CERI Summer Research Fellowship

Jamie B24 Mar 2022 15:12 UTC
29 points
0 comments2 min readEA link

[Question] If an existential catastrophe occurs, how likely is it to wipe out all animal sentience?

JoA🔸16 Mar 2025 22:30 UTC
11 points
2 comments2 min readEA link

Have your say on the future of AI regulation: Deadline approaching for your feedback on UN High-Level Advisory Body on AI Interim Report ‘Governing AI for Humanity’

Deborah W.A. Foulkes29 Mar 2024 6:37 UTC
17 points
1 comment1 min readEA link

Space Governance is not a cause area

JordanStone24 Feb 2025 14:47 UTC
26 points
10 comments5 min readEA link

Vael Gates: Risks from Advanced AI (June 2022)

Vael Gates14 Jun 2022 0:49 UTC
45 points
5 comments30 min readEA link

When is it important that open-weight models aren’t released? My thoughts on the benefits and dangers of open-weight models in response to developments in CBRN capabilities.

Ryan Greenblatt9 Jun 2025 19:19 UTC
39 points
3 comments9 min readEA link

New Nuclear Security Syllabus + Summer Course

Maya D1 May 2023 17:02 UTC
45 points
5 comments1 min readEA link

Towards an alternative to the COPs

Arnold Bomans19 Nov 2025 14:53 UTC
1 point
1 comment1 min readEA link

Giving AIs safe motivations

Joe_Carlsmith18 Aug 2025 18:02 UTC
22 points
1 comment51 min readEA link

What Are The Biggest Threats To Humanity? (A Happier World video)

Jeroen Willems🔸31 Jan 2023 19:50 UTC
17 points
1 comment15 min readEA link

Longtermism Sustainability Unconference Invite

Ben Yeoh1 Sep 2022 12:34 UTC
5 points
0 comments2 min readEA link

The Selfish Machine

Vasco Grilo🔸15 Mar 2025 10:58 UTC
9 points
0 comments12 min readEA link
(maartenboudry.substack.com)

Longtermism, risk, and extinction

Richard Pettigrew4 Aug 2022 15:25 UTC
78 points
12 comments41 min readEA link

AISN #17: Automatically Circumventing LLM Guardrails, the Frontier Model Forum, and Senate Hearing on AI Oversight

Center for AI Safety1 Aug 2023 15:24 UTC
15 points
0 comments8 min readEA link

That Alien Message—The Animation

Writer7 Sep 2024 14:53 UTC
43 points
7 comments8 min readEA link
(youtu.be)

Why I Should Work on AI Safety—Part 2: Will AI Actually Surpass Human Intelligence?

Aditya Aswani27 Dec 2023 21:08 UTC
8 points
2 comments8 min readEA link

The top X-factor EA neglects: destabilization of the United States

Yelnats T.J.31 Aug 2022 19:18 UTC
33 points
2 comments18 min readEA link

Lessons from the Iraq War for AI policy

Buck10 Jul 2025 18:52 UTC
71 points
11 comments4 min readEA link

2) Pandemics Are Solved With Risk Management, Not Science

PandemicRiskMan31 Jan 2024 15:51 UTC
−9 points
0 comments7 min readEA link

Request for proposals: Help Open Philanthropy quantify biological risk

djbinder12 May 2022 21:28 UTC
137 points
10 comments7 min readEA link

Persuasion Tools: AI takeover without AGI or agency?

kokotajlod20 Nov 2020 16:56 UTC
15 points
5 comments10 min readEA link

Effective AI Outreach | A Data Driven Approach

NoahCWilson🔸28 Feb 2025 0:44 UTC
15 points
2 comments15 min readEA link

Save the Date: EAGxMars

OllieBase1 Apr 2022 11:44 UTC
148 points
15 comments1 min readEA link

What could a fellowship scheme aimed at tackling the biggest threats to humanity look like?

james_r1 Sep 2022 15:29 UTC
4 points
0 comments5 min readEA link

“If we go extinct due to misaligned AI, at least nature will continue, right? … right?”

plex18 May 2024 15:06 UTC
13 points
10 comments2 min readEA link
(aisafety.info)

How to dissolve moral cluelessness about donating mosquito nets

ben.smith8 Jun 2022 7:12 UTC
25 points
8 comments12 min readEA link

An economist’s perspective on AI safety

David Stinson7 Jun 2024 7:55 UTC
7 points
1 comment9 min readEA link

AIS Hungary is hiring a part-time Technical Lead! (Deadline: Dec 31st)

gergo17 Dec 2024 14:08 UTC
9 points
0 comments2 min readEA link

#188 – On whether science is good (Matt Clancy on the 80,000 Hours Podcast)

80000_Hours24 May 2024 15:04 UTC
13 points
0 comments17 min readEA link

Comparing sampling strategies for early detection of stealth biothreats

slg26 Feb 2024 23:14 UTC
19 points
3 comments26 min readEA link
(naobservatory.org)

How Effective Altruism impacts the views on Existential Risks

Daniel Pidgornyi2 Feb 2024 15:34 UTC
1 point
0 comments3 min readEA link

The Existential Risk of Speciesist Bias in AI

Sam Tucker-Davis11 Nov 2023 3:27 UTC
38 points
1 comment3 min readEA link

The Existential Risk Alliance is hiring multiple Cause Area Leads

Rethink Priorities2 Feb 2023 17:10 UTC
20 points
0 comments4 min readEA link
(careers.rethinkpriorities.org)

The Alignment Problem No One is Talking About

Non-zero-sum James14 May 2024 10:42 UTC
5 points
0 comments2 min readEA link

We Will Be Lost Without Home: A Call for Earth-Centric Space Ethics

DongHun Lee24 May 2025 9:53 UTC
−5 points
1 comment1 min readEA link

[Question] What is MIRI currently doing?

Roko14 Dec 2024 2:55 UTC
9 points
2 comments1 min readEA link

AI-Risk in the State of the European Union Address

Sam Bogerd13 Sep 2023 13:27 UTC
25 points
0 comments3 min readEA link
(state-of-the-union.ec.europa.eu)

Cause Area: Human Rights in North Korea

Dawn Drescher20 Nov 2017 20:52 UTC
64 points
12 comments20 min readEA link

ERA Fellowship Alumni Stories

MvK🔸1 Oct 2023 12:33 UTC
18 points
1 comment8 min readEA link

How to make independent research more fun (80k After Hours)

rgb17 Mar 2023 22:25 UTC
28 points
0 comments25 min readEA link
(80000hours.org)

[Question] What questions could COVID-19 provide evidence on that would help guide future EA decisions?

MichaelA🔸27 Mar 2020 5:51 UTC
7 points
7 comments1 min readEA link

Announcing Confido 2.0: Promoting the uncertainty-aware mindset in orgs

Blanka10 Jan 2024 11:45 UTC
20 points
2 comments2 min readEA link

AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI

Center for AI Safety30 May 2023 11:44 UTC
16 points
3 comments6 min readEA link
(newsletter.safe.ai)

Select Challenges with Criticism & Evaluation Around EA

Ozzie Gooen10 Feb 2023 23:36 UTC
111 points
5 comments6 min readEA link
(quri.substack.com)

Enlightened Concerns of Tomorrow

cassidynelson15 Mar 2018 5:29 UTC
15 points
7 comments4 min readEA link

Philanthropy to the Right of Boom [Founders Pledge]

christian.r14 Feb 2023 17:08 UTC
83 points
11 comments20 min readEA link

Report: Latin America and Global Catastrophic Risks, transforming risk management.

JorgeTorresC9 Jan 2024 2:13 UTC
25 points
1 comment2 min readEA link
(riesgoscatastroficosglobales.com)

Why we’re entering a new nuclear age — and how to reduce the risks (Christian Ruhl on the 80k After Hours Podcast)

80000_Hours27 Mar 2024 19:17 UTC
52 points
2 comments7 min readEA link

Against Making Up Our Conscious Minds

Silica10 Feb 2024 7:12 UTC
13 points
0 comments5 min readEA link

Conflict and poverty (or should we tackle poverty in nuclear contexts more?)

Sanjay6 Mar 2020 21:59 UTC
13 points
0 comments7 min readEA link

Why We Need a Beacon of Hope in the Looming Gloom of AGI

Beyond Singularity2 Apr 2025 14:22 UTC
4 points
6 comments5 min readEA link

Thousands of malicious actors on the future of AI misuse

Zershaaneh Qureshi1 Apr 2024 10:03 UTC
75 points
1 comment1 min readEA link

[Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war

poppinfresh7 Nov 2022 12:22 UTC
74 points
0 comments2 min readEA link
(chrisblattman.com)

Arkose may be closing, but you can help

Arkose1 May 2025 11:09 UTC
58 points
6 comments2 min readEA link

Why Brains Beat AI

Wayne_Hsiung12 Jun 2025 20:25 UTC
4 points
0 comments1 min readEA link
(blog.simpleheart.org)

Noah Taylor: Developing a research agenda for bridging existential risk and peace and conflict studies

EA Global21 Jan 2021 16:19 UTC
21 points
0 comments20 min readEA link
(www.youtube.com)

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges9 Dec 2022 18:03 UTC
170 points
4 comments13 min readEA link

Beyond fire alarms: freeing the groupstruck

Katja_Grace3 Oct 2021 2:33 UTC
61 points
6 comments49 min readEA link

Beware of the new scaling paradigm

JohanEA19 Sep 2024 17:03 UTC
9 points
2 comments3 min readEA link

The Importance of AI Alignment, explained in 5 points

Daniel_Eth11 Feb 2023 2:56 UTC
50 points
4 comments13 min readEA link

EA on nuclear war and expertise

bean28 Aug 2022 4:59 UTC
154 points
17 comments4 min readEA link

[Question] Are there highly leveraged donation opportunities to prevent wars and dictatorships?

Dawn Drescher26 Feb 2022 3:31 UTC
58 points
8 comments1 min readEA link

[Question] Would creating and burying a series of doomsday chests to reboot civilization be a worthy use of resources?

ewu7 Sep 2022 2:45 UTC
5 points
1 comment1 min readEA link

[Question] What analysis has been done of space colonization as a cause area?

Eli Rose🔸9 Oct 2019 20:33 UTC
14 points
8 comments1 min readEA link

New Zealand proposes regulatory requirements for nucleic acid synthesis screening

Policy Aotearoa6 Feb 2025 13:30 UTC
13 points
1 comment1 min readEA link

A.I love you : AGI and Human Traitors

Pilot Pillow2 Apr 2025 14:18 UTC
1 point
2 comments7 min readEA link

From Plant Pathogens to Human Threats: Unveiling the Silent Menace of Fungal Diseases

Nnaemeka Emmanuel Nnadi24 Sep 2023 22:16 UTC
22 points
0 comments3 min readEA link

Who’s right about inputs to the biological anchors model?

rosehadshar24 Jul 2023 14:37 UTC
69 points
13 comments5 min readEA link

Only 2 days left to apply for the Documentary Research Grant (Deadline: 6th Nov)

Max Hellier4 Nov 2025 15:36 UTC
8 points
1 comment2 min readEA link

deleted

funnyfranco13 Mar 2025 19:03 UTC
1 point
0 comments1 min readEA link

EA reading list: suffering-focused ethics

richard_ngo3 Aug 2020 9:40 UTC
44 points
3 comments1 min readEA link

An Open Letter To EA and AI Safety On Decelerating AI Development

Kenneth_Diao28 Feb 2025 17:15 UTC
21 points
0 comments14 min readEA link
(graspingatwaves.substack.com)

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel8 Jul 2009 12:42 UTC
12 points
0 comments1 min readEA link
(longtermrisk.org)

How do fictional stories illustrate AI misalignment?

Vishakha Agrawal15 Jan 2025 6:16 UTC
4 points
0 comments2 min readEA link
(aisafety.info)

Age-Old Values: The Bedrock of Our High-Tech Future

Christopher Hunt Robertson, M.Ed.9 Nov 2025 13:57 UTC
1 point
0 comments3 min readEA link

Global Resilient Anticipatory Infrastructure Network (GRAIN) Overview Report

Odyssean Institute8 Jul 2025 12:37 UTC
13 points
0 comments1 min readEA link
(www.odysseaninstitute.org)

Probability of extinction for various types of catastrophes

Vasco Grilo🔸9 Oct 2022 15:30 UTC
16 points
0 comments10 min readEA link

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?

Jim Buhler6 Jul 2024 18:44 UTC
10 points
0 comments1 min readEA link

What is time series forecasting tool?

Jack Kevin12 Jan 2023 10:48 UTC
−5 points
0 comments1 min readEA link

Destroy the “neoliberal hallucination” & fight for animal rights through open rescue.

Chloe Leffakis15 Aug 2023 4:47 UTC
−17 points
2 comments1 min readEA link
(www.reddit.com)

The Happiness Maximizer: Why EA is an x-risk

Obasi Shaw30 Aug 2022 4:29 UTC
8 points
5 comments32 min readEA link

[Crosspost]: Huge volcanic eruptions: time to prepare (Nature)

Mike Cassidy 🔸19 Aug 2022 12:02 UTC
107 points
1 comment1 min readEA link
(www.nature.com)

My motivation and theory of change for working in AI healthtech

Andrew Critch12 Oct 2024 0:36 UTC
47 points
1 comment14 min readEA link

“How to Escape from the Simulation”—Seeds of Science call for reviewers

rogersbacon126 Jan 2023 15:12 UTC
7 points
0 comments1 min readEA link

Beyond Short-Termism: How δ and w Can Realign AI with Our Values

Beyond Singularity18 Jun 2025 16:34 UTC
15 points
8 comments5 min readEA link

The Ethical Basilisk Thought Experiment

Kyrtin23 Aug 2023 13:24 UTC
1 point
6 comments1 min readEA link

Book review: The Doomsday Machine

eukaryote10 Sep 2018 1:43 UTC
49 points
6 comments5 min readEA link

New Book: ‘Nexus’ by Yuval Noah Harari

timfarkas3 Oct 2024 13:54 UTC
15 points
2 comments5 min readEA link

Anthropic teams up with Palantir and AWS to sell AI to defense customers

Matrice Jacobine🔸🏳️‍⚧️9 Nov 2024 11:47 UTC
28 points
1 comment2 min readEA link
(techcrunch.com)

Sam Altman and the Crossroads of AI Power: Can We Trust the Future We’re Building?

Kayode Adekoya23 May 2025 15:39 UTC
0 points
0 comments1 min readEA link

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2026)

Robert Kralisch6 Sep 2025 13:34 UTC
4 points
0 comments4 min readEA link

Introducing Pivotal, an essay contest on global problems for high school students

SahebG14 Aug 2023 5:03 UTC
34 points
7 comments1 min readEA link

EA relevant Foresight Institute Workshops in 2023: WBE & AI safety, Cryptography & AI safety, XHope, Space, and Atomically Precise Manufacturing

elte16 Jan 2023 14:02 UTC
20 points
1 comment3 min readEA link

Giving What We Can global catastrophic risk profile

EA Handbook18 Feb 2025 21:39 UTC
4 points
0 comments1 min readEA link

[Question] Is there a “What We Owe The Future” fellowship study guide?

Jordan Arel1 Sep 2022 1:40 UTC
8 points
2 comments1 min readEA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs29 Mar 2023 23:30 UTC
212 points
75 comments3 min readEA link
(time.com)

Briefly how I’ve updated since ChatGPT

rime25 Apr 2023 19:39 UTC
29 points
8 comments2 min readEA link
(www.lesswrong.com)

What role should evolutionary analogies play in understanding AI takeoff speeds?

anson11 Dec 2021 1:16 UTC
12 points
0 comments42 min readEA link

Patrick Collison on Effective Altruism

SamuelKnoche23 Jun 2020 9:04 UTC
98 points
4 comments3 min readEA link

[Question] Is nanotechnology (such as APM) important for EAs’ to work on?

pixel_brownie_software12 Mar 2020 15:36 UTC
6 points
9 comments1 min readEA link

Lessons from the past for our global civilization

FJehn10 Aug 2023 9:54 UTC
4 points
0 comments7 min readEA link
(existentialcrunch.substack.com)

Aim for conditional pauses

AnonResearcherMajorAILab25 Sep 2023 1:05 UTC
100 points
42 comments12 min readEA link

AISN #12: Policy Proposals from NTIA’s Request for Comment and Reconsidering Instrumental Convergence

Center for AI Safety27 Jun 2023 15:25 UTC
30 points
3 comments7 min readEA link
(newsletter.safe.ai)

The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat

astupple2 Aug 2022 2:31 UTC
14 points
8 comments5 min readEA link

A case against focusing on tail-end nuclear war risks

Sarah Weiler16 Nov 2022 6:08 UTC
32 points
15 comments10 min readEA link

FLI is hiring across Comms and Ops

Ben_Eisenpress25 Jul 2024 0:02 UTC
8 points
0 comments1 min readEA link

Will AI end everything? A guide to guessing | EAG Bay Area 23

Katja_Grace25 May 2023 17:01 UTC
76 points
4 comments21 min readEA link

The threat of synthetic bioterror demands even further action and leadership

dEAsign30 Sep 2022 8:58 UTC
8 points
0 comments2 min readEA link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos14 Jul 2023 17:10 UTC
5 points
1 comment6 min readEA link

What’s the big deal about hypersonic missiles?

jia18 May 2020 7:17 UTC
40 points
9 comments5 min readEA link

INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park

Jacob-Haimes18 Mar 2024 21:26 UTC
8 points
0 comments1 min readEA link
(into-ai-safety.github.io)

How the Human Psychological “Program” Undermines AI Alignment — and What We Can Do

Beyond Singularity6 May 2025 13:37 UTC
14 points
2 comments3 min readEA link

[Question] Who here knows?: Cryptography [Answered]

Amateur Systems Analyst9 Sep 2023 20:30 UTC
6 points
3 comments1 min readEA link

Safe AI and moral AI

William D'Alessandro1 Jun 2023 21:18 UTC
3 points
0 comments11 min readEA link

What are Responsible Scaling Policies (RSPs)?

Vishakha Agrawal5 Apr 2025 16:05 UTC
2 points
0 comments2 min readEA link
(www.lesswrong.com)

Reslab Request for Information: EA hardware projects

Joel Becker26 Oct 2022 11:38 UTC
46 points
15 comments1 min readEA link

Critiques of prominent AI safety labs: Redwood Research

Omega31 Mar 2023 8:58 UTC
339 points
90 comments20 min readEA link

Lessons from Three Mile Island for AI Warning Shots

NickGabs26 Sep 2022 2:47 UTC
44 points
0 comments15 min readEA link

Miscellaneous & Meta X-Risk Overview: CERI Summer Research Fellowship

Will Aldred30 Mar 2022 2:45 UTC
39 points
0 comments3 min readEA link

Kristian Rönn: Global challenges

EA Global11 Aug 2017 8:19 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

Ideological engineering and social control: A neglected topic in AI safety research?

Geoffrey Miller1 Sep 2017 18:52 UTC
17 points
8 comments2 min readEA link

2022 ALLFED highlights

Ross_Tieman28 Nov 2022 5:37 UTC
85 points
2 comments19 min readEA link

Decision Engine For Modelling AI in Society

Echo Huang7 Aug 2025 11:15 UTC
24 points
1 comment18 min readEA link

Translating The Precipice into Czech: My experience and recommendations

Anna Stadlerova24 Aug 2022 4:51 UTC
96 points
7 comments18 min readEA link

Against Agents as an Approach to Aligned Transformative AI

𝕮𝖎𝖓𝖊𝖗𝖆27 Dec 2022 0:47 UTC
4 points
0 comments2 min readEA link

Superforecasting the premises in “Is power-seeking AI an existential risk?”

Joe_Carlsmith18 Oct 2023 20:33 UTC
114 points
3 comments2 min readEA link

Sadism and s-risks from first principles

Jim Buhler22 Sep 2025 14:08 UTC
11 points
1 comment4 min readEA link

Developing AI Safety: Bridging the Power-Ethics Gap (Introducing New Concepts)

Ronen Bar16 Apr 2025 11:25 UTC
21 points
3 comments5 min readEA link

Insects raised for food and feed — global scale, practices, and policy

abrahamrowe29 Jun 2020 13:57 UTC
99 points
13 comments29 min readEA link

Why EA needs Operations Research: the science of decision making

wesg21 Jul 2022 0:47 UTC
76 points
22 comments14 min readEA link

Biosafety Regulations (BMBL) and their relevance for AI

stepanlos29 Jun 2023 19:20 UTC
8 points
0 comments4 min readEA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges17 Jan 2020 13:28 UTC
64 points
0 comments1 min readEA link

An EA case for interest in UAPs/UFOs and an idea as to what they are

TheNotSoGreatFilter30 Dec 2021 17:13 UTC
39 points
14 comments5 min readEA link

[Question] Book on Civilisational Collapse?

Milton7 Oct 2020 8:51 UTC
9 points
6 comments1 min readEA link

[Question] Which is more important for reducing s-risks, researching on AI sentience or animal welfare?

jackchang11025 Feb 2023 2:20 UTC
9 points
0 comments1 min readEA link

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

wuschel23 May 2022 11:49 UTC
48 points
39 comments2 min readEA link

Announcing the AIPolicyIdeas.com Database

abiolvera23 Jun 2023 16:09 UTC
50 points
3 comments2 min readEA link
(www.aipolicyideas.com)

Chapter 4 of The Precipice in poem form

Lauriander29 Nov 2023 9:12 UTC
20 points
3 comments1 min readEA link

Legal Assistance for Victims of AI

bob17 Mar 2023 11:42 UTC
52 points
19 comments1 min readEA link

Prospects for AI safety agreements between countries

oeg14 Apr 2023 17:41 UTC
104 points
3 comments22 min readEA link

Can “sustainability” help us safeguard the future?

simonfriederich24 Nov 2022 14:02 UTC
4 points
1 comment2 min readEA link

The Failed Strategy of Artificial Intelligence Doomers

yhoiseth5 Feb 2025 19:34 UTC
12 points
2 comments1 min readEA link
(letter.palladiummag.com)

Research: People do not allocate enough resources to risks with lower probability of survival

Adam Elga25 Aug 2025 13:45 UTC
40 points
7 comments1 min readEA link
(doi.org)

HARM: a financial liability

T. Johnson13 Nov 2025 19:39 UTC
6 points
2 comments2 min readEA link

Why “Solving Alignment” Is Likely a Category Mistake

Nate Sharpe6 May 2025 20:56 UTC
49 points
4 comments3 min readEA link
(www.lesswrong.com)

At Our World in Data we’re hiring our first Communications & Outreach Manager

Charlie Giattino13 Oct 2023 13:12 UTC
25 points
0 comments1 min readEA link
(ourworldindata.org)

USA/China Reconciliation a Necessity Because of AI/Tech Acceleration

bhrdwj🔸17 Apr 2025 13:13 UTC
1 point
7 comments7 min readEA link

Announcing the Association for Feasting Ahead of Time, a novel GCR philosophy and solution

HughJazz1 Apr 2024 15:12 UTC
18 points
0 comments2 min readEA link

A Case for Climate Change as a Top Funding Priority

Ted Shields22 Dec 2022 23:50 UTC
2 points
9 comments4 min readEA link

[Link Post] Elon Musk wants to colonize Mars. It’s a disastrous idea

BrianK11 Apr 2024 12:58 UTC
10 points
14 comments1 min readEA link
(www.fastcompany.com)

Summary: “Imagining and building wise machines: The centrality of AI metacognition” by Johnson, Karimi, Bengio, et al.

Chris Leong5 Jun 2025 12:16 UTC
12 points
0 comments10 min readEA link
(arxiv.org)

AGI will arrive by the end of this decade either as a unicorn or as a black swan

Yuri Barzov21 Oct 2022 10:50 UTC
−4 points
7 comments3 min readEA link

The Life-Support Economy: Platforms That Keep Workers Alive but Stuck — with 12–24-Month Falsifiable Forecasts

Seven_Hu378919 Oct 2025 2:46 UTC
−4 points
0 comments37 min readEA link

Could AI systems naturally evolve to prioritize their own usage over human welfare?

Andre OBrien12 Jun 2025 11:53 UTC
1 point
0 comments2 min readEA link

Niel Bowerman: Could climate change make Earth uninhabitable for humans?

EA Global17 Jan 2020 1:07 UTC
7 points
2 comments15 min readEA link
(www.youtube.com)

On the Risk of an Accidental or Unauthorized Nuclear Detonation (Iklé, Aronson, Madansky, 1958)

nathan980004 Aug 2022 13:19 UTC
4 points
0 comments1 min readEA link
(www.rand.org)

Stuart Russell Human Compatible AI Roundtable with Allan Dafoe, Rob Reich, & Marietje Schaake

Mahendra Prasad11 Feb 2021 7:43 UTC
16 points
0 comments1 min readEA link

Should someone start a grassroots campaign for USA to recognise the State of Palestine?

freedomandutility11 May 2021 15:29 UTC
−2 points
4 comments1 min readEA link

Changing the Rules to Soften Humanity’s Hard Landing: A Systemic Risk Approach to Everything Going Wrong at Once

Matt Boyd27 Jun 2025 9:44 UTC
5 points
0 comments1 min readEA link
(adaptresearchwriting.com)

Existential Cybersecurity Risks & AI (A Research Agenda)

Madhav Malhotra20 Sep 2023 12:03 UTC
7 points
0 comments8 min readEA link

Paths and waystations in AI safety

Joe_Carlsmith11 Mar 2025 18:52 UTC
22 points
2 comments11 min readEA link
(joecarlsmith.substack.com)

[Linkpost] Shorter version of report on existential risk from power-seeking AI

Joe_Carlsmith22 Mar 2023 18:06 UTC
49 points
1 comment1 min readEA link

Luisa Rodriguez: How to do empirical cause prioritization research

EA Global21 Nov 2020 8:12 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

On Artificial Wisdom

Jordan Arel11 Jul 2024 7:14 UTC
23 points
3 comments14 min readEA link

[Question] Does Factory Farming Make Natural Pandemics More Likely?

brook31 Oct 2022 12:50 UTC
12 points
2 comments1 min readEA link

Global Catastrophic Biological Risks: A Guide for Philanthropists [Founders Pledge]

christian.r31 Oct 2023 15:42 UTC
32 points
0 comments6 min readEA link
(www.founderspledge.com)

Updates from Campaign for AI Safety

Jolyn Khoo16 Jun 2023 9:45 UTC
15 points
3 comments2 min readEA link
(www.campaignforaisafety.org)

[Question] Existential Biorisk vs. GCBR

Will Aldred15 Jul 2022 21:16 UTC
37 points
2 comments1 min readEA link

Introducing The Logical Foundation, an EA-Aligned Nonprofit with a Plan to End Poverty With Guaranteed Income

Michael Simm18 Nov 2022 8:13 UTC
17 points
3 comments24 min readEA link

Decision-Relevance of worlds and ADT implementations

Maxime Riché 🔸6 Mar 2025 16:57 UTC
9 points
1 comment15 min readEA link

13 Recent Publications on Existential Risk (Jan 2021 update)

HaydnBelfield8 Feb 2021 12:42 UTC
7 points
2 comments10 min readEA link

Anatomizing Chemical and Biological Non-State Adversaries

ncmoulios11 Nov 2022 21:23 UTC
2 points
0 comments1 min readEA link

Announcing the Center for Space Governance

Space Governance10 Jul 2022 13:53 UTC
73 points
6 comments1 min readEA link

[Question] How confident are you that it’s preferable for America to develop AGI before China does?

ScienceMon🔸22 Feb 2025 13:37 UTC
218 points
53 comments1 min readEA link

[Doctoral seminar] Chemical and biological weapons: International investigative mechanisms

ncmoulios17 Nov 2022 12:26 UTC
17 points
0 comments1 min readEA link
(www.asser.nl)

[Question] What are effective ways to help Ukrainians right now?

Manuel Allgaier24 Feb 2022 22:20 UTC
130 points
85 comments1 min readEA link

Summary of “The Precipice” (2 of 4): We are a danger to ourselves

rileyharris13 Aug 2023 23:53 UTC
5 points
0 comments8 min readEA link
(www.millionyearview.com)

Is AI Safety dropping the ball on privacy?

markov19 Sep 2023 8:17 UTC
10 points
0 comments7 min readEA link

Announcing Future Forum—Apply Now

isaakfreeman6 Jul 2022 17:35 UTC
88 points
11 comments4 min readEA link

Preserving and continuing alignment research through a severe global catastrophe

A_donor6 Mar 2022 18:43 UTC
40 points
11 comments5 min readEA link

The Most Important Thing We’ll Ever Do

Bentham's Bulldog24 Nov 2025 16:13 UTC
14 points
3 comments3 min readEA link

[Question] Is there anything like “green bonds” for x-risk mitigation?

Ramiro30 Jun 2020 0:33 UTC
21 points
1 comment1 min readEA link

AI Version of an Ambitious Proposal to Effectively Address World Suffering

RobertDaoust18 Jan 2025 15:33 UTC
−12 points
2 comments3 min readEA link
(forum.effectivealtruism.org)

Proposal for a Nuclear Off-Ramp Toolkit

Stan Pinsent29 Nov 2022 16:02 UTC
15 points
0 comments3 min readEA link

Introducing For Future—A Platform to Discover and Collaborate on Longtermist Solutions

RubyT5 Oct 2023 12:54 UTC
0 points
1 comment5 min readEA link

Open Letter Against Reckless Nuclear Escalation and Use

Vasco Grilo🔸3 Nov 2022 15:08 UTC
10 points
2 comments1 min readEA link
(futureoflife.org)

Announcing Indexes: Big Questions, Quantified

Molly Hickman27 Jan 2025 17:42 UTC
44 points
1 comment3 min readEA link

Should we buy coal mines?

John G. Halstead4 May 2022 7:28 UTC
216 points
31 comments7 min readEA link

Getting Nuclear Policy Right Is Hard

Gentzel19 Sep 2017 1:00 UTC
16 points
4 comments1 min readEA link

Seeking feedback/gauging interest: Crowdsourcing x crowdfunding for existential risk ventures

RubyT4 Sep 2022 16:18 UTC
4 points
0 comments1 min readEA link

Summary of and thoughts on “Dark Skies” by Daniel Deudney

Cody_Fenwick31 Dec 2022 20:28 UTC
42 points
1 comment5 min readEA link

[Question] Are social media algorithms an existential risk?

Barry Grimes15 Sep 2020 8:52 UTC
24 points
13 comments1 min readEA link

deleted

funnyfranco24 Mar 2025 19:44 UTC
4 points
10 comments1 min readEA link

[Question] Ukraine: How a regular person can effectively help their country during war?

Valmothy26 Feb 2022 10:58 UTC
49 points
19 comments1 min readEA link

“Longtermist causes” is a tricky classification

Lizka29 Aug 2023 17:41 UTC
63 points
3 comments5 min readEA link

From Conflict to Coexistence: Rewriting the Game Between Humans and AGI

Michael Batell6 May 2025 5:09 UTC
15 points
2 comments35 min readEA link

What are polysemantic neurons?

Vishakha Agrawal8 Jan 2025 7:39 UTC
5 points
0 comments2 min readEA link
(aisafety.info)

The Fragility of Naive Dynamism

Davidmanheim19 May 2025 7:53 UTC
10 points
1 comment17 min readEA link

Superforecasting Long-Term Risks and Climate Change

LuisEUrtubey19 Aug 2022 18:05 UTC
48 points
0 comments2 min readEA link

The AI guide I’m sending my grandparents

James Martin27 Apr 2023 20:04 UTC
41 points
3 comments30 min readEA link

Artificial Intelligence as exit strategy from the age of acute existential risk

Arturo Macias12 Apr 2023 14:41 UTC
11 points
11 comments7 min readEA link

Some AI Governance Research Ideas

MarkusAnderljung3 Jun 2021 10:51 UTC
102 points
5 comments2 min readEA link

Prioritization Questions for Artificial Sentience

Jamie_Harris18 Oct 2021 14:07 UTC
30 points
2 comments8 min readEA link
(www.sentienceinstitute.org)

$1-a-Day Nutrition: Exploring an Ultra-Low-Cost, Shelf-Stable Approach to Global Hunger

Keen Visionary5 Sep 2025 19:51 UTC
26 points
5 comments2 min readEA link

deleted

funnyfranco29 Mar 2025 18:02 UTC
−5 points
5 comments1 min readEA link

Aligning AI Safety Projects with a Republican Administration

Deric Cheng21 Nov 2024 22:13 UTC
13 points
1 comment8 min readEA link

Exploring Blood-Based Biosurveillance, Part 2: Sampling Strategies within the US Blood Supply

ljusten10 Sep 2024 16:39 UTC
14 points
0 comments13 min readEA link
(naobservatory.org)

Intrinsic limitations of GPT-4 and other large language models, and why I’m not (very) worried about GPT-n

James Fodor3 Jun 2023 13:09 UTC
28 points
3 comments11 min readEA link

Well-studied Existential Risks with Predictive Indicators

Noah Scales6 Jul 2022 22:13 UTC
4 points
0 comments3 min readEA link

AGI by 2032 is extremely unlikely

Yarrow Bouchard 🔸16 Oct 2025 22:50 UTC
24 points
44 comments7 min readEA link

[Question] Is anyone working on safe selection pressure for digital minds?

WillPearson12 Dec 2023 18:17 UTC
10 points
9 comments1 min readEA link

[Question] Is EA too theoretical? Can we reward practicality?

Prof.Weird17 Nov 2020 23:07 UTC
5 points
0 comments1 min readEA link

AISN #16: White House Secures Voluntary Commitments from Leading AI Labs and Lessons from Oppenheimer

Center for AI Safety25 Jul 2023 16:45 UTC
7 points
0 comments6 min readEA link
(newsletter.safe.ai)

The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)

MMMaas10 Aug 2022 11:00 UTC
90 points
6 comments9 min readEA link
(verfassungsblog.de)

Surveillance and free expression | Sunyshore

Eevee🔹23 Feb 2021 2:14 UTC
10 points
0 comments9 min readEA link
(sunyshore.substack.com)

Podcast on Oppenheimer and Nuclear Security with Carl Robichaud

Garrison9 Aug 2023 19:36 UTC
23 points
0 comments2 min readEA link
(bit.ly)

Case for emergency response teams

technicalities5 Apr 2022 11:08 UTC
249 points
50 comments5 min readEA link

Longtermist Implications of the Existence Neutrality Hypothesis

Maxime Riché 🔸20 Mar 2025 12:20 UTC
19 points
0 comments21 min readEA link

International risk of food insecurity and mass mortality in a runaway global warming scenario

Vasco Grilo🔸2 Sep 2023 7:28 UTC
15 points
2 comments6 min readEA link
(www.sciencedirect.com)

Opinionated take on EA and AI Safety

sammyboiz🔸2 Mar 2025 9:37 UTC
75 points
18 comments1 min readEA link

[Question] Is there a recap of relevant jobs in the nuclear risk sector/nuclear energy sector for EAs?

Vaipan26 Feb 2024 14:21 UTC
6 points
7 comments1 min readEA link

[Question] Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes

Kiliank15 Apr 2022 1:23 UTC
3 points
3 comments1 min readEA link

Self-Sustaining Fields Literature Review: Technology Forecasting, How Academic Fields Emerge, and the Science of Science

Megan Kinniment6 Sep 2021 15:04 UTC
27 points
0 comments6 min readEA link

Exploring Existential Risk—using Connected Papers to find Effective Altruism aligned articles and researchers

Maris Sala23 Jun 2021 17:03 UTC
52 points
5 comments6 min readEA link

How The EthiSizer Almost Broke `Story’

Velikovsky_of_Newcastle8 May 2023 16:58 UTC
1 point
0 comments5 min readEA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding3 Mar 2022 11:31 UTC
99 points
9 comments5 min readEA link

Graphical Representations of Paul Christiano’s Doom Model

Nathan Young7 May 2023 13:03 UTC
48 points
2 comments1 min readEA link

QB: How Much do Future Generations Matter?

Richard Y Chappell🔸18 Oct 2024 15:22 UTC
26 points
2 comments5 min readEA link
(www.goodthoughts.blog)

Birth rates and civilisation doom loop

deus77718 Nov 2022 10:56 UTC
−40 points
1 comment2 min readEA link

‘EA Architect’: Updates on Civilizational Shelters & Career Options

t468 Jun 2022 13:45 UTC
67 points
6 comments7 min readEA link

[Linkpost] Eric Schwitzgebel: Against Longtermism

ag40006 Jan 2022 14:15 UTC
41 points
4 comments1 min readEA link

[Question] How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles?

Jordan Arel27 Jul 2022 16:58 UTC
29 points
6 comments1 min readEA link

Three mistakes in the moral mathematics of existential risk (David Thorstad)

Global Priorities Institute4 Jul 2023 13:18 UTC
48 points
14 comments3 min readEA link
(globalprioritiesinstitute.org)

Announcing the Future Fund

Nick_Beckstead28 Feb 2022 17:26 UTC
372 points
185 comments4 min readEA link
(ftxfuturefund.org)

Introducing the AI Objectives Institute’s Research: Differential Paths toward Safe and Beneficial AI

cmck5 May 2023 20:26 UTC
43 points
1 comment8 min readEA link

Addressing climate change?

Donald Zepeda15 Jul 2024 20:32 UTC
3 points
1 comment1 min readEA link

The Effective Altruist View of History (Effective Altruism Definitions Sequence)

ozymandias20 Aug 2025 21:03 UTC
33 points
1 comment6 min readEA link

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto24 Jul 2023 10:18 UTC
36 points
3 comments7 min readEA link
(time.com)

You can’t just unplug superintelligence, just as you can’t unplug a virus

Singer Robin23 Sep 2025 8:33 UTC
5 points
1 comment1 min readEA link

Engaging with AI in a Personal Way

Spyder Rex4 Dec 2023 9:23 UTC
−9 points
0 comments1 min readEA link

LLMs Are Already Misaligned: Simple Experiments Prove It

Makham28 Jul 2025 17:23 UTC
4 points
3 comments7 min readEA link

Economic inequality and the long-term future

Global Priorities Institute30 Apr 2021 13:26 UTC
11 points
0 comments4 min readEA link
(globalprioritiesinstitute.org)

I created an Asi Alignment Tier List

TimeGoat22 Apr 2024 12:14 UTC
0 points
0 comments1 min readEA link

EA as Antichrist: Understanding Peter Thiel

Ben_West🔸6 Aug 2025 17:31 UTC
115 points
54 comments14 min readEA link

The software intelligence explosion debate needs experiments (linkpost)

Noah Birnbaum15 Nov 2025 6:13 UTC
13 points
2 comments7 min readEA link
(substack.com)

Paper summary: Staking our future: deontic long-termism and the non-identity problem (Andreas Mogensen)

Global Priorities Institute7 Jun 2022 13:14 UTC
25 points
6 comments6 min readEA link
(globalprioritiesinstitute.org)

What are the most promising strategies for reducing the probability of nuclear war?

Sarah Weiler16 Nov 2022 6:09 UTC
36 points
1 comment27 min readEA link

You won’t solve alignment without agent foundations

MikhailSamin6 Nov 2022 8:07 UTC
14 points
0 comments8 min readEA link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel8 Dec 2022 21:44 UTC
34 points
12 comments14 min readEA link

Announcing the Cambridge ERA:AI Fellowship 2024

erafellowship11 Mar 2024 19:06 UTC
31 points
5 comments3 min readEA link

[Question] I’m interviewing sometimes EA critic Jeffrey Lewis (AKA Arms Control Wonk) about what we get right and wrong when it comes to nuclear weapons and nuclear security. What should I ask him?

Robert_Wiblin26 Aug 2022 18:06 UTC
33 points
8 comments1 min readEA link

[Question] What’s the likelihood of irrecoverable civilizational collapse if 90% of the population dies?

simeon_c7 Aug 2022 19:47 UTC
21 points
3 comments1 min readEA link

[Question] Most harmful people in history?

SiebeRozendal11 Sep 2022 3:04 UTC
17 points
9 comments1 min readEA link

Summer AI Safety Intro Fellowships in Boston and Online (Policy & Technical) – Apply by June 6!

jandrade11229 May 2025 16:47 UTC
4 points
0 comments1 min readEA link

Let’s think about slowing down AI

Katja_Grace23 Dec 2022 19:56 UTC
339 points
9 comments38 min readEA link

Metaculus Year in Review: 2022

christian6 Jan 2023 1:23 UTC
25 points
2 comments4 min readEA link
(metaculus.medium.com)

My notes on: A Very Rational End of the World | Thomas Moynihan

Vasco Grilo🔸20 Jun 2022 8:50 UTC
13 points
1 comment5 min readEA link

Arkose: Organizational Updates & Ways to Get Involved

Arkose1 Aug 2024 13:03 UTC
28 points
1 comment1 min readEA link

Was Releasing Claude-3 Net-Negative

Logan Riggs27 Mar 2024 17:41 UTC
12 points
1 comment4 min readEA link

Geoffrey Hinton on the Past, Present, and Future of AI

Stephen McAleese12 Oct 2024 16:41 UTC
5 points
1 comment18 min readEA link

[Question] A bill to massively expand NSF to tech domains. What’s the relevance for x-risk?

EdoArad12 Jul 2020 15:20 UTC
22 points
4 comments1 min readEA link

Groundwater crisis: a threat of civilization collapse

RickJS24 Dec 2022 21:21 UTC
0 points
0 comments3 min readEA link
(drive.google.com)

Institutions Cannot Restrain Dark-Triad AI Exploitation

Remmelt27 Dec 2022 10:34 UTC
8 points
0 comments5 min readEA link
(mflb.com)

Time/Talent/Money Contributors to Existential Risk Ventures

RubyT6 Sep 2022 9:52 UTC
2 points
2 comments1 min readEA link

[Question] Is AI x-risk becoming a distraction?

Non-zero-sum James27 Feb 2025 20:33 UTC
2 points
0 comments1 min readEA link

What is scaffolding?

Vishakha Agrawal27 Mar 2025 9:40 UTC
3 points
0 comments2 min readEA link
(aisafety.info)

US government commission pushes Manhattan Project-style AI initiative

Larks19 Nov 2024 16:22 UTC
83 points
15 comments1 min readEA link
(www.reuters.com)

Focus on Civilizational Resilience over Cause Areas

timfarkas26 May 2022 17:37 UTC
16 points
6 comments2 min readEA link

A model-based approach to AI Existential Risk

SammyDMartin25 Aug 2023 10:44 UTC
17 points
0 comments1 min readEA link
(www.lesswrong.com)

#180 – Why gullibility and misinformation are overrated (Hugo Mercier on the 80,000 Hours Podcast)

80000_Hours26 Feb 2024 19:16 UTC
15 points
0 comments18 min readEA link

State Space of X-Risk Trajectories

David_Kristoffersson6 Feb 2020 13:37 UTC
24 points
7 comments9 min readEA link

Anthropocentric Altruism is Ineffective—The EA Movement Must Embrace Environmentalism and Become Ecocentric

Deborah W.A. Foulkes5 Aug 2024 3:51 UTC
−14 points
8 comments6 min readEA link

Possible OpenAI’s Q* breakthrough and DeepMind’s AlphaGo-type systems plus LLMs

Burnydelic23 Nov 2023 7:02 UTC
13 points
4 comments2 min readEA link

Peace Treaty Architecture (PTA) as an Alternative to AI Alignment

Andrei Navrotskii11 Nov 2025 22:11 UTC
1 point
0 comments15 min readEA link

The limited upside of interpretability

Peter S. Park15 Nov 2022 20:22 UTC
23 points
3 comments10 min readEA link

Civilization Recovery Kits

Soof Golan21 Sep 2022 9:26 UTC
25 points
9 comments2 min readEA link

Ukraine War support and targeted sanctions

Arturo Macias11 Dec 2023 16:11 UTC
−7 points
1 comment2 min readEA link

Consider funding the Nucleic Acid Observatory to Detect Stealth Pandemics

Jeff Kaufman 🔸11 Nov 2024 22:22 UTC
46 points
0 comments8 min readEA link

My Cause Selection: Dave Denkenberger

Denkenberger🔸16 Aug 2015 15:06 UTC
13 points
7 comments3 min readEA link

National Security Is Not International Security: A Critique of AGI Realism

C.K.2 Feb 2025 17:04 UTC
44 points
2 comments36 min readEA link
(conradkunadu.substack.com)

[Question] Why doesn’t WWOTF mention the Bronze Age Collapse?

Eevee🔹19 Sep 2022 6:29 UTC
16 points
4 comments1 min readEA link

Kurzgesagt’s most recent video promoting the introducing of wild life to other planets is unethical and irresponsible

David van Beveren11 Dec 2022 20:43 UTC
102 points
33 comments2 min readEA link

The case for delaying solar geoengineering research

John G. Halstead23 Mar 2019 15:26 UTC
53 points
22 comments5 min readEA link

What Does an ASI Political Ecology Mean for Human Survival?

Nathan Sidney23 Feb 2025 8:53 UTC
7 points
3 comments1 min readEA link

Notes on “The Myth of the Nuclear Revolution” (Lieber & Press, 2020)

imp4rtial 🔸24 May 2022 15:02 UTC
42 points
2 comments20 min readEA link

Event on Oct 9: Forecasting Nuclear Risk with Rethink Priorities’ Michael Aird

MichaelA🔸29 Sep 2021 17:45 UTC
24 points
3 comments2 min readEA link
(www.eventbrite.com)

Musk’s Questionable Existential Risk Rhetoric Amidst Legal Challenges

M31 Jan 2024 7:40 UTC
5 points
2 comments1 min readEA link

2023 ALLFED Marginal Funding Appeal

JuanGarcia17 Nov 2023 10:55 UTC
33 points
2 comments3 min readEA link

Call for submissions: Choice of Futures survey questions

c.trout30 Apr 2023 6:59 UTC
11 points
0 comments2 min readEA link
(airtable.com)

On Internal Alignment: Architecture and Recursive Closure

A. Vire24 Sep 2025 18:13 UTC
1 point
0 comments17 min readEA link

Climate change, geoengineering, and existential risk

John G. Halstead20 Mar 2018 10:48 UTC
20 points
8 comments1 min readEA link

[Question] Do you think the probability of future AI sentience(suffering) is >0.1%? Why?

jackchang11010 Jul 2023 16:41 UTC
4 points
0 comments1 min readEA link

William Marshall: Lunar colony

EA Global11 Aug 2017 8:19 UTC
7 points
0 comments1 min readEA link
(www.youtube.com)

[Question] Share AI Safety Ideas: Both Crazy and Not

ank26 Feb 2025 13:09 UTC
4 points
16 comments1 min readEA link

Eric Schmidt’s blueprint for US technology strategy

OscarD🔸15 Oct 2024 19:54 UTC
29 points
4 comments9 min readEA link

On thinking about AI risks concretely

zeshen🔸11 Jul 2025 0:23 UTC
16 points
1 comment4 min readEA link

Summary: Maximal Cluelessness (Andreas Mogensen)

Noah Varley🔸6 Feb 2024 14:49 UTC
40 points
17 comments4 min readEA link

Announcing the Space Futures Initiative

Carson Ezell12 Sep 2022 12:37 UTC
71 points
3 comments2 min readEA link

Announcing the AI Safety Summit Talks with Yoshua Bengio

Otto14 May 2024 12:49 UTC
33 points
1 comment1 min readEA link

Impact Academy is hiring an AI Governance Lead—more information, upcoming Q&A and $500 bounty

Lowe Lundin29 Aug 2023 18:42 UTC
9 points
1 comment1 min readEA link

Online Working / Community Meetup for the Abolition of Suffering

Ruth_Seleo31 May 2022 9:16 UTC
7 points
5 comments1 min readEA link

Linkpost: The Scientists, the Statesmen, and the Bomb

Lauro Langosco8 Jul 2022 10:46 UTC
13 points
5 comments3 min readEA link
(www.bismarckanalysis.com)

More coordinated civil society action on reducing nuclear risk

Sarah Weiler13 Dec 2023 11:18 UTC
8 points
1 comment8 min readEA link

[Linkpost] The Puzzle of War

Linch4 Sep 2025 20:15 UTC
22 points
3 comments10 min readEA link
(linch.substack.com)

[Creative Writing Contest] [Fiction] The Reason Why

b_sen30 Oct 2021 2:37 UTC
2 points
0 comments5 min readEA link
(archiveofourown.org)

Im­pos­ing a Lifestyle: A New Ar­gu­ment for Anti­na­tal­ism

Oldphan23 Aug 2023 22:23 UTC
11 points
1 comment1 min readEA link
(www.cambridge.org)

Space coloniza­tion and the closed ma­te­rial economy

Arturo Macias2 Feb 2023 15:37 UTC
2 points
0 comments2 min readEA link

X-risk dis­cus­sion in a col­lege com­mence­ment speech

SWK22 May 2023 11:01 UTC
37 points
6 comments1 min readEA link

Re­boot­ing the Singularity

cdkg16 Jul 2025 18:27 UTC
44 points
5 comments1 min readEA link
(philpapers.org)

Art Recom­men­da­tion: Dr. Stone

Devin Kalish9 Jul 2022 10:53 UTC
15 points
2 comments1 min readEA link
(www.crunchyroll.com)

Can Knowl­edge Hurt You? The Dangers of In­fo­haz­ards (and Exfo­haz­ards)

A.G.G. Liu8 Feb 2025 15:51 UTC
12 points
0 comments5 min readEA link
(www.youtube.com)

Ex­ec­u­tive Direc­tor for AIS France—Ex­pres­sion of interest

gergo19 Dec 2024 8:11 UTC
33 points
0 comments4 min readEA link

Notes on new UK AISI minister

Pseudaemonia5 Jul 2024 19:50 UTC
92 points
0 comments1 min readEA link

[Question] How wor­ried should I be about a child­less Dis­ney­land?

Will Bradshaw28 Oct 2019 15:32 UTC
31 points
8 comments1 min readEA link

Ap­ply to the Stan­ford Ex­is­ten­tial Risks Con­fer­ence! (April 17-18)

kuhanj26 Mar 2021 18:28 UTC
26 points
2 comments1 min readEA link

[Op­por­tu­nity] Flour­ish­ing Fund­ing from the UK Government

Joey Bream🔸21 Nov 2025 12:45 UTC
30 points
4 comments2 min readEA link

The Charle­magne Effect: The Longter­mist Case For Neartermism

Reed Shafer-Ray25 Jul 2022 8:12 UTC
15 points
7 comments29 min readEA link

AGI Bat­tle Royale: Why “slow takeover” sce­nar­ios de­volve into a chaotic multi-AGI fight to the death

titotal22 Sep 2022 15:00 UTC
55 points
12 comments15 min readEA link

[Question] What is the best way to explain that s-risks are important—basically, why existence is not inherently better than non-existence? Intending this for someone mostly unfamiliar with EA, like someone in an intro program

shepardriley8 Nov 2024 18:12 UTC
2 points
0 comments1 min readEA link

[Question] Thoughts on this $16.7M “AI safety” grant?

defun 🔸16 Jul 2024 9:16 UTC
61 points
24 comments1 min readEA link

US-China trade talks should pave way for AI safety treaty [SCMP cross­post]

Otto16 May 2025 20:53 UTC
15 points
1 comment3 min readEA link

Effects of anti-ag­ing re­search on the long-term future

Matthew_Barnett27 Feb 2020 22:42 UTC
61 points
33 comments4 min readEA link

Maritime capability and post-catastrophe resilience

Tom Gardiner 🔸14 Jul 2022 11:29 UTC
32 points
7 comments6 min readEA link

Patch­ing ~All Se­cu­rity-Rele­vant Open-Source Soft­ware?

niplav25 Feb 2025 21:35 UTC
35 points
7 comments2 min readEA link

My (cur­rent) model of what an AI gov­er­nance re­searcher does

JohanEA26 Aug 2024 11:22 UTC
7 points
1 comment5 min readEA link

[Question] Would a su­per-in­tel­li­gent AI nec­es­sar­ily sup­port its own ex­is­tence?

Porque?25 Jun 2023 10:39 UTC
8 points
2 comments2 min readEA link

On­shore al­gae farms could feed the world

Tyner🔸10 Oct 2022 17:44 UTC
11 points
0 comments1 min readEA link
(tos.org)

An­nounc­ing: EA Fo­rum Pod­cast – Au­dio nar­ra­tions of EA Fo­rum posts

peterhartree5 Dec 2022 21:50 UTC
157 points
33 comments2 min readEA link

Beyond Astro­nom­i­cal Waste

Wei Dai27 Dec 2018 9:27 UTC
25 points
2 comments1 min readEA link
(www.lesswrong.com)

[Linkpost] Hu­man-nar­rated au­dio ver­sion of “Is Power-Seek­ing AI an Ex­is­ten­tial Risk?”

Joe_Carlsmith31 Jan 2023 19:19 UTC
9 points
0 comments1 min readEA link

[Question] AI Eth­i­cal Committee

eaaicommittee1 Mar 2022 23:35 UTC
8 points
0 comments1 min readEA link

The Longter­mism of Bore­dom: Will the far fu­ture ac­tu­ally be worth liv­ing if we solve suffer­ing?

Melanie Banerjee3 Oct 2025 14:25 UTC
2 points
0 comments3 min readEA link

25 Years Later: Why We Still Don’t Ad­e­quately Govern the Mi­suse of Syn­thetic Biology

C.K.16 May 2025 14:14 UTC
11 points
1 comment8 min readEA link
(proteinstoparadigms.substack.com)

Tar­bell Fel­low­ship 2025 - Ap­pli­ca­tions Open (AI Jour­nal­ism)

Tarbell Center for AI Journalism8 Jan 2025 15:25 UTC
62 points
0 comments1 min readEA link

LPP Sum­mer Re­search Fel­low­ship in Law & AI 2023: Ap­pli­ca­tions Open

Legal Priorities Project20 Jun 2023 14:31 UTC
43 points
4 comments4 min readEA link

Con­sider Pre­order­ing If Any­one Builds It, Every­one Dies

peterbarnett12 Aug 2025 22:03 UTC
48 points
4 comments2 min readEA link

Is Ex­is­ten­tial Risk Miti­ga­tion Uniquely Cost-Effec­tive? Not in Stan­dard Pop­u­la­tion Models (Gus­tav Alexan­drie and Maya Eden)

Global Priorities Institute4 Jul 2023 13:28 UTC
33 points
2 comments3 min readEA link
(globalprioritiesinstitute.org)

Le­gal Pri­ori­ties Re­search: A Re­search Agenda

jonasschuett6 Jan 2021 21:47 UTC
58 points
4 comments1 min readEA link

Lan­guage Agents Re­duce the Risk of Ex­is­ten­tial Catastrophe

cdkg29 May 2023 9:59 UTC
29 points
6 comments26 min readEA link

Sum­mary: Longter­mism, Ag­gre­ga­tion, and Catas­trophic Risk (Emma J. Cur­ran)

Noah Varley🔸7 Mar 2024 14:31 UTC
24 points
7 comments7 min readEA link

[Question] In­tel­lec­tual prop­erty of AI and ex­is­ten­tial risk in gen­eral?

WillPearson11 Jun 2024 13:50 UTC
3 points
3 comments1 min readEA link

ERA’s The­ory of Change

nandini10 Aug 2023 13:13 UTC
28 points
1 comment13 min readEA link

Nu­clear Ex­pert Com­ment on Samotsvety Nu­clear Risk Forecast

Jhrosenberg26 Mar 2022 9:22 UTC
135 points
13 comments18 min readEA link

Prevent­ing An­i­mal Suffer­ing Lock-in: Why Eco­nomic Tran­si­tions Matter

Karen Singleton28 Jul 2025 21:55 UTC
43 points
4 comments10 min readEA link

The Re­cur­sive Brake Hy­poth­e­sis — Could Self-Aware­ness Nat­u­rally Reg­u­late Su­per­in­tel­li­gence?

jrandync10 Oct 2025 18:08 UTC
1 point
0 comments2 min readEA link

We Should Give Ex­tinc­tion Risk an Acronym

Charlie_Guthmann19 Oct 2022 7:16 UTC
21 points
15 comments1 min readEA link

Tether­ware #2: What ev­ery hu­man should know about our most likely AI future

Jáchym Fibír28 Feb 2025 11:25 UTC
3 points
0 comments11 min readEA link
(tetherware.substack.com)

Partner with Us: Advancing Global Catastrophic and AI Risk Research at Plateau State University, Bokkos

Nnaemeka Emmanuel Nnadi10 Oct 2024 1:19 UTC
16 points
0 comments2 min readEA link

EA, Psy­chol­ogy & AI Safety Research

Sam Ellis26 May 2022 23:46 UTC
29 points
3 comments6 min readEA link

Some mis­takes in think­ing about AGI evolu­tion and control

Remmelt1 Aug 2025 8:08 UTC
7 points
0 comments1 min readEA link

Ilya: The AI sci­en­tist shap­ing the world

David Varga20 Nov 2023 12:43 UTC
6 points
1 comment4 min readEA link

deleted

funnyfranco7 Jul 2025 10:40 UTC
2 points
0 comments1 min readEA link

Jaime Yas­sif: Re­duc­ing global catas­trophic biolog­i­cal risks

EA Global25 Oct 2020 5:48 UTC
8 points
0 comments1 min readEA link
(www.youtube.com)

XPT fore­casts on (some) Direct Ap­proach model inputs

Forecasting Research Institute20 Aug 2023 12:39 UTC
37 points
0 comments9 min readEA link

A New Global Risk: Large Comet’s Im­pact on Sun Could Cause Fires on Earth

turchin15 Oct 2025 13:20 UTC
16 points
2 comments2 min readEA link

Pan­demic pre­ven­tion in Ger­man par­ties’ fed­eral elec­tion platforms

tilboy19 Sep 2021 7:40 UTC
17 points
2 comments6 min readEA link

Main­stream Grant­mak­ing Ex­per­tise (Post 7 of 7 on AI Gover­nance)

Jason Green-Lowe23 Jun 2025 1:38 UTC
53 points
2 comments37 min readEA link

Ok Doomer! SRM and Catas­trophic Risk Podcast

GideonF20 Aug 2022 12:22 UTC
10 points
4 comments1 min readEA link
(open.spotify.com)

The He­donic Tread­mill Dilemma – Reflect­ing on the Sto­ries of Wile E. Coyote

Alexander Herwix 🔸20 Mar 2023 9:06 UTC
28 points
1 comment7 min readEA link

Economist: “What’s the worst that could hap­pen”. A pos­i­tive, sharable but vague ar­ti­cle on Ex­is­ten­tial Risk

Nathan Young8 Jul 2020 10:37 UTC
12 points
3 comments2 min readEA link

#218 – Why Trump is aban­don­ing US hege­mony – and that’s prob­a­bly good (Hugh White on The 80,000 Hours Pod­cast)

80000_Hours12 Jun 2025 20:59 UTC
30 points
0 comments28 min readEA link

Please Don’t Win the AI Race

Picklehead2 Aug 2025 23:31 UTC
−4 points
0 comments6 min readEA link

Tay­lor Swift’s “long story short” Is Ac­tu­ally About Effec­tive Altru­ism and Longter­mism (PARODY)

shepardspie23 Jul 2021 13:25 UTC
34 points
12 comments7 min readEA link

Effec­tive Altru­ism and the strate­gic am­bi­guity of ‘do­ing good’

Jeroen De Ryck 🔹17 Jul 2023 19:24 UTC
81 points
10 comments2 min readEA link
(medialibrary.uantwerpen.be)

Too Soon

Gordon Seidoh Worley13 May 2025 15:01 UTC
53 points
0 comments4 min readEA link

Fun­da­men­tals of Fatal Risks

Aino29 Jul 2023 7:12 UTC
1 point
0 comments4 min readEA link

The chance of ac­ci­den­tal nu­clear war has been go­ing down

Peter Wildeford31 May 2022 14:48 UTC
66 points
5 comments1 min readEA link
(www.pasteurscube.com)

Assessing SERI/CHERI/CERI summer program impact by surveying fellows

L Rudolf L26 Sep 2022 15:29 UTC
102 points
11 comments15 min readEA link

How tractable is chang­ing the course of his­tory?

Jamie_Harris22 May 2019 15:29 UTC
41 points
2 comments7 min readEA link
(www.sentienceinstitute.org)

On famines, food tech­nolo­gies and global shocks

Ramiro12 Oct 2021 14:28 UTC
16 points
2 comments4 min readEA link

Minecraft As An Effec­tive Ad­vo­cacy Strat­egy And Cause Area

Kenneth_Diao1 Apr 2025 19:12 UTC
15 points
0 comments4 min readEA link

Ap­ply to CLR as a re­searcher or sum­mer re­search fel­low!

Chi1 Feb 2022 22:24 UTC
62 points
5 comments10 min readEA link

deleted

funnyfranco15 Mar 2025 15:32 UTC
4 points
0 comments22 min readEA link

An epistemic cri­tique of longtermism

Nathan_Barnard10 Jul 2022 10:59 UTC
12 points
4 comments9 min readEA link

Sum­ma­riz­ing the com­ments on William MacAskill’s NYT opinion piece on longtermism

West21 Sep 2022 17:46 UTC
106 points
11 comments2 min readEA link

In­tro­duc­tion: Bias in Eval­u­at­ing AGI X-Risks

Remmelt27 Dec 2022 10:27 UTC
4 points
0 comments3 min readEA link

Longter­mist im­pli­ca­tions of aliens Space-Far­ing Civ­i­liza­tions—Introduction

Maxime Riché 🔸21 Feb 2025 12:07 UTC
45 points
12 comments6 min readEA link

High risk, low re­ward: A challenge to the as­tro­nom­i­cal value of ex­is­ten­tial risk miti­ga­tion (David Thorstad)

Global Priorities Institute4 Jul 2023 13:23 UTC
32 points
3 comments3 min readEA link
(globalprioritiesinstitute.org)

2. Why in­tu­itive com­par­i­sons of large-scale im­pact are unjustified

Anthony DiGiovanni2 Jun 2025 8:54 UTC
36 points
7 comments16 min readEA link

Longter­mism Is Sur­pris­ingly Obvious

Bentham's Bulldog10 Jun 2025 15:22 UTC
22 points
3 comments7 min readEA link

A “Solip­sis­tic” Repug­nant Conclusion

Ramiro21 Jul 2022 16:06 UTC
13 points
0 comments6 min readEA link

The Light­cone solu­tion to the trans­mit­ter room problem

OGTutzauer🔸29 Jan 2025 10:03 UTC
10 points
6 comments3 min readEA link

An en­tire cat­e­gory of risks is un­der­val­ued by EA [Sum­mary of pre­vi­ous fo­rum post]

Richard R5 Sep 2022 15:07 UTC
78 points
5 comments5 min readEA link

What if states don’t listen? A fun­da­men­tal gap in x-risk re­duc­tion strate­gies

HTC30 Aug 2022 4:27 UTC
30 points
1 comment18 min readEA link

[Question] Cu­ri­ous if GWWC takes into ac­count ex­is­ten­tial risk prob­a­bil­ities in calcu­lat­ing im­pact of re­cur­ring donors.

Phib10 Apr 2023 17:03 UTC
14 points
4 comments1 min readEA link

‘Dis­solv­ing’ AI Risk – Pa­ram­e­ter Uncer­tainty in AI Fu­ture Forecasting

Froolow18 Oct 2022 22:54 UTC
111 points
63 comments39 min readEA link

Dra­co­nian mea­sures can in­crease the risk of ir­re­vo­ca­ble catastrophe

dsj23 Sep 2025 21:40 UTC
9 points
1 comment2 min readEA link
(thedavidsj.substack.com)

Four rea­sons I find AI safety emo­tion­ally compelling

Kat Woods 🔶 ⏸️28 Jun 2022 14:01 UTC
32 points
5 comments4 min readEA link

Help me to un­der­stand AI al­ign­ment!

britomart18 Jan 2023 9:13 UTC
3 points
12 comments1 min readEA link

What Is The Most Effec­tive Way To Look At Ex­is­ten­tial Risk?

Phil Tanny26 Aug 2022 11:21 UTC
−2 points
2 comments2 min readEA link

Should AI X-Risk Wor­ri­ers Short the Mar­ket?

postlibertarian4 Nov 2024 16:16 UTC
14 points
1 comment6 min readEA link

Les­sons from Run­ning Stan­ford EA and SERI

kuhanj20 Aug 2021 14:51 UTC
269 points
26 comments23 min readEA link

We seek pro­fes­sion­als to iden­tify, pre­vent, and miti­gate Global Catas­trophic Risks in Latin Amer­ica and Spain

JorgeTorresC13 Feb 2024 17:01 UTC
23 points
0 comments1 min readEA link

Biolog­i­cal su­per­in­tel­li­gence: a solu­tion to AI safety

Yarrow Bouchard 🔸4 Dec 2023 13:09 UTC
4 points
6 comments1 min readEA link

A full syl­labus on longtermism

jtm5 Mar 2021 22:57 UTC
110 points
13 comments8 min readEA link

A Case for Nuanced Risk Assessment

Molly Hickman20 Aug 2024 9:23 UTC
25 points
3 comments6 min readEA link

When should we worry about AI power-seek­ing?

Joe_Carlsmith19 Feb 2025 19:44 UTC
21 points
2 comments18 min readEA link
(joecarlsmith.substack.com)

Tech­nolog­i­cal Bot­tle­necks for PCR, LAMP, and Me­tage­nomics Sequencing

Ziyue Zeng9 Jan 2023 6:05 UTC
39 points
0 comments17 min readEA link

Bri­tish Nu­clear Weapons

JKitson16 Apr 2025 12:33 UTC
22 points
2 comments16 min readEA link

An Em­piri­cal De­mon­stra­tion of a New AI Catas­trophic Risk Fac­tor: Me­tapro­gram­matic Hijacking

Hiyagann27 Jun 2025 13:38 UTC
5 points
0 comments1 min readEA link

CAIDP State­ment on Lethal Au­tonomous Weapons Sys­tems

Heramb Podar30 Nov 2024 18:00 UTC
7 points
0 comments1 min readEA link
(www.linkedin.com)

#214 – Con­trol­ling AI that wants to take over – so we can use it any­way (Buck Sh­legeris on The 80,000 Hours Pod­cast)

80000_Hours4 Apr 2025 19:59 UTC
17 points
0 comments32 min readEA link

[Question] Books on au­thor­i­tar­i­anism, Rus­sia, China, NK, demo­cratic back­slid­ing, etc.?

MichaelA🔸2 Feb 2021 3:52 UTC
20 points
21 comments1 min readEA link

Com­bi­na­tion Ex­is­ten­tial Risks

ozymandias14 Jan 2019 19:29 UTC
27 points
5 comments2 min readEA link
(thingofthings.wordpress.com)

His­tory’s Gran­d­est Pro­jects: In­tro­duc­tion to Macro Strate­gies for AI Risk, Part 1

Coleman20 Jun 2025 17:32 UTC
7 points
0 comments38 min readEA link

From vol­un­tary to manda­tory, are the ESG dis­clo­sure frame­works still fer­tile ground for un­re­al­ised EA ca­reer path­ways? – A 2023 up­date on ESG po­ten­tial impact

Christopher Chan 🔸4 Jun 2023 12:00 UTC
21 points
5 comments11 min readEA link

Moder­ately Skep­ti­cal of “Risks of Mir­ror Biol­ogy”

Davidmanheim20 Dec 2024 12:57 UTC
15 points
1 comment9 min readEA link
(substack.com)

Ap­ply to be a Safety Eng­ineer at Lock­heed Martin!

yanni kyriacos31 Mar 2024 21:01 UTC
31 points
5 comments1 min readEA link

The tra­jec­tory of the fu­ture could soon get set in stone

William_MacAskill11 Aug 2025 11:04 UTC
34 points
1 comment3 min readEA link

[Link] New Founders Pledge re­port on ex­is­ten­tial risk

John G. Halstead28 Mar 2019 11:46 UTC
40 points
1 comment1 min readEA link

Epis­tle to the Successor

ukc1001429 Apr 2025 9:30 UTC
4 points
0 comments19 min readEA link

[Linkpost] Dan Luu: Fu­tur­ist pre­dic­tion meth­ods and accuracy

Linch15 Sep 2022 21:20 UTC
64 points
7 comments4 min readEA link
(danluu.com)

How We Might All Die in A Year

Greg_Colbourn ⏸️ 28 Mar 2025 13:31 UTC
14 points
6 comments21 min readEA link
(x.com)

Sen­tinel Fund­ing Memo — Miti­gat­ing GCRs with Fore­cast­ing & Emer­gency Response

Saul Munn6 Nov 2024 1:57 UTC
47 points
5 comments13 min readEA link

When You Don’t Know the Op­ti­mal An­swer, the Marginal An­swer Might Still Suffice

Liam Robins4 Sep 2025 19:15 UTC
5 points
1 comment4 min readEA link
(thelimestack.substack.com)

[Question] Can we estimate the expected value of humanity's future life (in 500 years)?

jackchang11025 Feb 2023 15:13 UTC
5 points
5 comments1 min readEA link

Musk says de­stroy­ing Twit­ter was nec­es­sary to pre­serve hu­man­ity’s fu­ture in the cosmos

Max Utility14 Dec 2022 18:35 UTC
−26 points
2 comments1 min readEA link
(twitter.com)

[Question] Does China have AI al­ign­ment re­sources/​in­sti­tu­tions? How can we pri­ori­tize cre­at­ing more?

JakubK4 Aug 2022 19:23 UTC
18 points
9 comments1 min readEA link

When to di­ver­sify? Break­ing down mis­sion-cor­re­lated investing

jh29 Nov 2022 11:18 UTC
33 points
2 comments8 min readEA link

CLR Sum­mer Re­search Fel­low­ship 2024

Center on Long-Term Risk15 Feb 2024 18:26 UTC
89 points
2 comments8 min readEA link

Re­duce AGI risks us­ing mod­ern lie de­tec­tion technology

NothingIsArt30 Sep 2024 18:12 UTC
1 point
0 comments1 min readEA link

It’s (not) how you use it

Eleni_A7 Sep 2022 13:28 UTC
6 points
3 comments2 min readEA link

Test Your Knowl­edge of the World’s Biggest Problems

AndreFerretti9 Nov 2022 16:04 UTC
30 points
3 comments1 min readEA link

Weekly EA Global Com­mu­nity Meet and Greet.

Brainy10 Jun 2022 11:10 UTC
1 point
0 comments1 min readEA link

Pangea: The Worst of Times

John G. Halstead5 Apr 2020 15:13 UTC
88 points
7 comments8 min readEA link

The Su­per­in­tel­li­gence That Cares About Us

henrik.westerberg5 Jul 2025 10:20 UTC
5 points
0 comments2 min readEA link

Carl Shul­man on AI takeover mechanisms (& more): Part II of Dwarkesh Pa­tel in­ter­view for The Lu­nar Society

alejandro25 Jul 2023 18:31 UTC
28 points
0 comments5 min readEA link
(www.dwarkeshpatel.com)

Do Self-Per­ceived Su­per­in­tel­li­gent LLMs Ex­hibit Misal­ign­ment?

Dave Banerjee 🔸29 Jun 2025 11:16 UTC
7 points
1 comment12 min readEA link
(davebanerjee.xyz)

En­vi­sion par­adise in the face of catastrophe

Jan Wehner🔸2 Oct 2025 7:32 UTC
20 points
5 comments4 min readEA link

A Cog­ni­tive In­stru­ment on the Ter­mi­nal Contest

Ihor Ivliev23 Jul 2025 23:30 UTC
0 points
1 comment8 min readEA link

Im­pact Op­por­tu­nity: In­fluence UK Biolog­i­cal Se­cu­rity Strategy

Jonathan Nankivell17 Feb 2022 20:36 UTC
49 points
0 comments3 min readEA link

The Case for Quan­tum Technologies

Elias X. Huber14 Nov 2024 13:35 UTC
13 points
4 comments6 min readEA link

Why Did Elon Musk Go After Bunkers Full of Seeds?

Matrice Jacobine🔸🏳️‍⚧️24 Mar 2025 14:48 UTC
33 points
2 comments1 min readEA link
(www.nytimes.com)

War in space, whether civ­i­liza­tions age, and the best things pos­si­ble in our uni­verse (An­ders Sand­berg on the 80,000 Hours Pod­cast)

80000_Hours9 Oct 2023 14:03 UTC
10 points
2 comments17 min readEA link

Past foram­iniferal ac­clima­ti­za­tion ca­pac­ity is limited dur­ing fu­ture warming

Matrice Jacobine🔸🏳️‍⚧️15 Nov 2024 20:38 UTC
8 points
1 comment1 min readEA link
(www.nature.com)

Towards AI Safety In­fras­truc­ture: Talk & Outline

Paul Bricman7 Jan 2024 9:35 UTC
14 points
1 comment2 min readEA link
(www.youtube.com)

X-risks of SETI and METI?

Geoffrey Miller2 Jul 2019 22:41 UTC
18 points
11 comments1 min readEA link

The sec­ond bit­ter les­son — there’s a fun­da­men­tal prob­lem with al­ign­ing AI

aelwood19 Jan 2025 18:48 UTC
4 points
1 comment5 min readEA link
(pursuingreality.substack.com)

Demo­cratic Back­slid­ing and Longter­mism

DaanvD20 Oct 2025 15:47 UTC
14 points
0 comments11 min readEA link

Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing: Which In­sti­tu­tions? (A Frame­work)

IanDavidMoss23 Aug 2021 2:26 UTC
86 points
7 comments34 min readEA link

How ASI would end the time of perils

David Nelson19 Oct 2025 12:00 UTC
1 point
1 comment5 min readEA link

Five GCR grants from the Global Challenges Foundation

Aaron Gertler 🔸16 Jan 2020 0:46 UTC
34 points
1 comment5 min readEA link

A Case Against Strong Longtermism

A. Wolff2 Sep 2022 16:40 UTC
10 points
4 comments39 min readEA link

Noth­ing Wrong With AI Weapons

kbog28 Aug 2017 2:52 UTC
16 points
22 comments7 min readEA link

“No-one in my org puts money in their pen­sion”

tobyj16 Feb 2024 15:04 UTC
157 points
11 comments9 min readEA link
(seekingtobejolly.substack.com)

Misal­ign­ment or mi­suse? The AGI al­ign­ment tradeoff

Max_He-Ho20 Jun 2025 10:41 UTC
6 points
0 comments1 min readEA link
(www.arxiv.org)

Hu­man ex­tinc­tion’s im­pact on non-hu­man an­i­mals re­mains largely underexplored

JoA🔸1 Mar 2025 21:31 UTC
35 points
1 comment12 min readEA link

Revolu­tion­is­ing Na­tional Risk Assess­ment (NRA): im­proved meth­ods and stake­holder en­gage­ment to tackle global catas­tro­phe and ex­is­ten­tial risks

Matt Boyd21 Mar 2023 6:05 UTC
26 points
1 comment8 min readEA link

EA re­silience to catas­tro­phes & ALLFED’s case study

Sonia_Cassidy23 Mar 2022 7:03 UTC
91 points
10 comments13 min readEA link

Are hu­mans head­ing to­wards De­ci­sion Depen­dency?

Aditya Raj21 Aug 2025 15:51 UTC
1 point
0 comments5 min readEA link

Basic game theory and how you can do a bunch of good in ~3 hours (developing article)

Amateur Systems Analyst10 Oct 2024 4:30 UTC
−3 points
2 comments7 min readEA link

On Pos­i­tivity given X-risks

YusefMosiahNathanson28 Apr 2022 9:02 UTC
1 point
0 comments4 min readEA link

De­com­pos­ing Biolog­i­cal Risks: Harm, Po­ten­tial, and Strategies

simeon_c14 Oct 2021 7:09 UTC
26 points
3 comments9 min readEA link

In­ves­ti­gat­ing how tech­nol­ogy-fo­cused aca­demic fields be­come self-sustaining

Ben Snodin6 Sep 2021 15:04 UTC
43 points
4 comments42 min readEA link

[Question] Why was Stanislav Petrov not awarded the Nobel Peace Prize?

Miquel Banchs-Piqué (prev. mikbp)12 Oct 2023 13:24 UTC
4 points
2 comments1 min readEA link

An­nounc­ing Me­tac­u­lus’s ‘Red Lines in Ukraine’ Fore­cast­ing Project

christian21 Oct 2022 22:13 UTC
17 points
0 comments1 min readEA link
(www.metaculus.com)

AI Mo­ral Align­ment: The Most Im­por­tant Goal of Our Generation

Ronen Bar26 Mar 2025 12:32 UTC
136 points
32 comments8 min readEA link

User-Friendly In­tro Post

James Odene [User-Friendly]23 Jun 2022 11:26 UTC
117 points
7 comments6 min readEA link

Cause Area: Differ­en­tial Neu­rotech­nol­ogy Development

mwcvitkovic10 Aug 2022 2:39 UTC
95 points
7 comments36 min readEA link

A rel­a­tively athe­o­ret­i­cal per­spec­tive on as­tro­nom­i­cal waste

Nick_Beckstead6 Aug 2014 0:55 UTC
9 points
8 comments8 min readEA link

How can economists best con­tribute to pan­demic pre­ven­tion and pre­pared­ness?

Rémi T22 Aug 2021 20:49 UTC
56 points
3 comments23 min readEA link

The Boiled-Frog Failure Mode

ontologics30 Jun 2025 13:24 UTC
7 points
3 comments5 min readEA link

U.S. Govern­ment Seeks In­put on Na­tional AI R&D Strate­gic Plan—Dead­line May 29

Matt Brooks27 May 2025 1:53 UTC
8 points
1 comment1 min readEA link

In­tro­duc­tory video on safe­guard­ing the long-term future

JulianHazell7 Mar 2022 12:52 UTC
23 points
3 comments1 min readEA link

Eigh­teen Open Re­search Ques­tions for Govern­ing Ad­vanced AI Systems

Ihor Ivliev3 May 2025 19:00 UTC
2 points
0 comments6 min readEA link

Leav­ing Earth

Arjun Khemani6 Jul 2022 10:45 UTC
5 points
0 comments6 min readEA link
(arjunkhemani.com)

Sum­mary of Eliezer Yud­kowsky’s “Cog­ni­tive Bi­ases Po­ten­tially Affect­ing Judg­ment of Global Risks”

Damin Curtis🔹7 Nov 2023 18:19 UTC
5 points
2 comments6 min readEA link

How Rood­man’s GWP model trans­lates to TAI timelines

kokotajlod16 Nov 2020 14:11 UTC
22 points
0 comments2 min readEA link

On Jan­uary 1, 2030, there will be no AGI (and AGI will still not be im­mi­nent)

Yarrow Bouchard 🔸6 Apr 2025 1:08 UTC
46 points
54 comments2 min readEA link

Po­ta­toes: A Crit­i­cal Review

Pablo Villalobos10 May 2022 15:27 UTC
120 points
27 comments9 min readEA link
(docs.google.com)

Re­port on Fron­tier Model Training

YafahEdelman30 Aug 2023 20:04 UTC
19 points
1 comment21 min readEA link
(docs.google.com)

Alli­ance to Feed the Earth in Disasters (ALLFED) Progress Re­port & Giv­ing Tues­day Appeal

Denkenberger🔸21 Nov 2018 5:20 UTC
21 points
3 comments8 min readEA link

Tether­ware #1: The case for hu­man­like AI with free will

Jáchym Fibír30 Jan 2025 11:57 UTC
−3 points
2 comments10 min readEA link
(tetherware.substack.com)

At Our World in Data we’re hiring a Se­nior Full-stack Engineer

Charlie Giattino15 Dec 2023 15:51 UTC
16 points
0 comments1 min readEA link
(ourworldindata.org)

The Need for Poli­ti­cal Ad­ver­tis­ing (Post 2 of 7 on AI Gover­nance)

Jason Green-Lowe21 May 2025 0:52 UTC
60 points
0 comments13 min readEA link

Ground­wa­ter De­ple­tion: con­trib­u­tor to global civ­i­liza­tion col­lapse.

RickJS3 Dec 2022 7:09 UTC
11 points
6 comments3 min readEA link
(drive.google.com)

Find­ing Voice

khayali3 Jun 2025 1:27 UTC
2 points
0 comments2 min readEA link

Sur­vey on AI ex­is­ten­tial risk scenarios

Sam Clarke8 Jun 2021 17:12 UTC
159 points
11 comments6 min readEA link

AGI as a Black Swan Event

Stephen McAleese4 Dec 2022 23:35 UTC
5 points
2 comments7 min readEA link
(www.lesswrong.com)

The NPT: Learn­ing from a Longter­mist Suc­cess [Links!]

DannyBressler20 May 2021 0:39 UTC
66 points
6 comments2 min readEA link

Safety-con­cerned EAs should pri­ori­tize AI gov­er­nance over alignment

sammyboiz🔸11 Jun 2024 15:47 UTC
61 points
20 comments1 min readEA link

AI al­ign­ment, A Co­her­ence-Based Pro­to­col (testable)

Adriaan17 Jun 2025 16:50 UTC
2 points
1 comment20 min readEA link

Why Work On Biose­cu­rity?

Lin BL26 Nov 2025 20:03 UTC
18 points
0 comments2 min readEA link

Off-Earth Governance

EdoArad6 Sep 2019 19:26 UTC
18 points
3 comments2 min readEA link

Not un­der­stand­ing sen­tience is a sig­nifi­cant x-risk

Cameron B1 Jul 2024 15:38 UTC
28 points
8 comments2 min readEA link

Up­dates from Cam­paign for AI Safety

Jolyn Khoo7 Aug 2023 6:09 UTC
32 points
2 comments2 min readEA link
(www.campaignforaisafety.org)

Ex­is­ten­tial risk from AI and what DC could do about it (Ezra Klein on the 80,000 Hours Pod­cast)

80000_Hours26 Jul 2023 11:48 UTC
31 points
1 comment14 min readEA link

The Next Pan­demic Could Be Worse, What Can We Do? (A Hap­pier World video)

Jeroen Willems🔸21 Dec 2020 21:07 UTC
37 points
6 comments1 min readEA link

How Can Risk Aver­sion Affect Your Cause Pri­ori­ti­za­tion?

Laura Duffy20 Oct 2023 19:46 UTC
117 points
6 comments16 min readEA link
(docs.google.com)

[Creative writ­ing con­test] The sor­cerer in chains

Swimmer30 Oct 2021 1:23 UTC
17 points
0 comments31 min readEA link

Bench­mark­ing Emo­tional Align­ment: Can VSPE Re­duce Flat­tery in LLMs?

Astelle Kay4 Aug 2025 3:36 UTC
2 points
0 comments3 min readEA link

The GDM AGI Safety+Align­ment Team is Hiring for Ap­plied In­ter­pretabil­ity Research

Arthur Conmy25 Feb 2025 22:38 UTC
11 points
0 comments7 min readEA link

An­thropic An­nounces new S.O.T.A. Claude 3

Joseph Miller4 Mar 2024 19:02 UTC
10 points
5 comments1 min readEA link
(twitter.com)

The AGI-Proof Mind: Se­cur­ing Cog­ni­tive Pri­vacy via the Cog­ni­tive Fortress Man­date (CFM)

T. Johnson25 Nov 2025 16:50 UTC
−1 points
0 comments4 min readEA link

Leg­ible vs. Illeg­ible AI Safety Problems

Wei Dai4 Nov 2025 21:39 UTC
77 points
3 comments2 min readEA link

How to Sur­vive the End of the Universe

avturchin28 Nov 2019 12:40 UTC
55 points
11 comments33 min readEA link

New 80,000 Hours prob­lem pro­file on the risks of power-seek­ing AI

Zershaaneh Qureshi28 Oct 2025 14:37 UTC
45 points
0 comments2 min readEA link

De­mo­graphic De­cline as an X-Risk Am­plifier: A Frame­work for Analysis

vinniescent22 Apr 2025 16:09 UTC
−2 points
1 comment6 min readEA link

[Question] Whose track record of AI pre­dic­tions would you like to see eval­u­ated?

Jonny Spicer 🔸29 Jan 2025 11:57 UTC
10 points
13 comments1 min readEA link

Sav­ing ex­pected lives at $10 apiece?

Denkenberger🔸14 Dec 2016 15:38 UTC
15 points
23 comments2 min readEA link

OpenAI board re­ceived let­ter warn­ing of pow­er­ful AI

JordanStone23 Nov 2023 0:16 UTC
26 points
2 comments1 min readEA link
(www.reuters.com)

Good Fu­tures Ini­ti­a­tive: Win­ter Pro­ject In­tern­ship

a_e_r27 Nov 2022 23:27 UTC
67 points
7 comments3 min readEA link

Katja Grace on Slow­ing Down AI, AI Ex­pert Sur­veys And Es­ti­mat­ing AI Risk

Michaël Trazzi16 Sep 2022 18:00 UTC
48 points
6 comments3 min readEA link
(theinsideview.ai)

Public Cog­ni­tive Dis­so­nance About Ex­is­ten­tial Risk Is Terrifying

Evan_Gaensbauer22 Aug 2023 0:13 UTC
20 points
2 comments4 min readEA link

How to re­duce risks re­lated to con­scious AI: A user guide [Con­scious AI & Public Per­cep­tion]

Jay Luong5 Jul 2024 14:19 UTC
9 points
1 comment15 min readEA link

Panel on nu­clear risk: Rear Ad­miral John Gower, Pa­tri­cia Lewis, and Paul Ingram

Paul Ingram4 Jul 2023 13:24 UTC
8 points
0 comments30 min readEA link

deleted

funnyfranco11 Mar 2025 4:13 UTC
0 points
0 comments1 min readEA link

How Tech­ni­cal AI Safety Re­searchers Can Help Im­ple­ment Pu­ni­tive Da­m­ages to Miti­gate Catas­trophic AI Risk

Gabriel Weil19 Feb 2024 17:43 UTC
28 points
3 comments4 min readEA link

Is­raeli Prime Minister, Musk and Teg­mark on AI Safety

Michaël Trazzi18 Sep 2023 23:21 UTC
23 points
13 comments1 min readEA link
(twitter.com)

[Question] Huh. Bing thing got me real anx­ious about AI. Re­sources to help with that please?

Arvin15 Feb 2023 16:55 UTC
2 points
7 comments1 min readEA link

Mak­ing EA more in­clu­sive, rep­re­sen­ta­tive, and im­pact­ful in Africa

Ashura Batungwanayo17 Aug 2023 20:19 UTC
70 points
13 comments4 min readEA link

[Question] Can we con­vince peo­ple to work on AI safety with­out con­vinc­ing them about AGI hap­pen­ing this cen­tury?

BrianTan26 Nov 2020 14:46 UTC
8 points
3 comments2 min readEA link

[Paper] Sur­viv­ing global risks through the preser­va­tion of hu­man­ity’s data on the Moon

turchin3 Mar 2018 18:39 UTC
11 points
6 comments1 min readEA link

De­pop­u­la­tion and Longtermism

MikeGeruso9 Sep 2025 16:24 UTC
16 points
1 comment3 min readEA link

Don’t Bet the Fu­ture on Win­ning an AI Arms Race

Eric Drexler11 Jul 2025 11:11 UTC
25 points
1 comment5 min readEA link

Two po­si­tions at Non-Triv­ial: En­able young peo­ple to tackle the world’s most press­ing problems

Peter McIntyre17 Oct 2023 11:46 UTC
24 points
4 comments5 min readEA link
(www.non-trivial.org)

Are Far-UVC In­ter­ven­tions Over­hyped? [Founders Pledge]

christian.r9 Jan 2024 17:38 UTC
142 points
8 comments61 min readEA link

Ar­tifi­cial In­tel­li­gence Safety of Film Capacitors

yonxinzhang21 Nov 2023 11:51 UTC
−2 points
0 comments1 min readEA link

Per­sis­tence, Not Pro­jec­tion: The Case for Loop Main­te­nance over Longtermism

Emergence10119 Oct 2025 10:13 UTC
1 point
5 comments9 min readEA link

Notes on ‘Atomic Ob­ses­sion’ (2009)

lukeprog26 Oct 2019 0:30 UTC
62 points
16 comments8 min readEA link

The Real AI Threat: Com­fortable Obsolescence

Andrei Navrotskii11 Nov 2025 22:11 UTC
4 points
0 comments15 min readEA link

De­lay, De­tect, Defend: Prepar­ing for a Fu­ture in which Thou­sands Can Re­lease New Pan­demics by Kevin Esvelt

Jeremy15 Nov 2022 16:23 UTC
177 points
7 comments1 min readEA link
(dam.gcsp.ch)

7) How to Build Speed Into Our Pan­demic Re­sponse Plans

PandemicRiskMan15 Mar 2024 16:53 UTC
1 point
0 comments13 min readEA link

[Question] Disaster Relief?

Hira Khan5 Aug 2022 20:57 UTC
1 point
1 comment1 min readEA link

[Question] Slow­ing down AI progress?

Eleni_A26 Jul 2022 8:46 UTC
16 points
9 comments1 min readEA link

AI Safety Newslet­ter #2: ChaosGPT, Nat­u­ral Selec­tion, and AI Safety in the Media

Oliver Z18 Apr 2023 18:36 UTC
56 points
1 comment4 min readEA link
(newsletter.safe.ai)

Could a ‘per­ma­nent global to­tal­i­tar­ian state’ ever be per­ma­nent?

Geoffrey Miller23 Aug 2022 17:15 UTC
39 points
17 comments1 min readEA link

Tim Cook was asked about ex­tinc­tion risks from AI

Saul Munn6 Jun 2023 18:46 UTC
8 points
1 comment1 min readEA link

[Optional] Why I'm probably not a longtermist

EA Italy17 Jan 2023 18:12 UTC
1 point
0 comments8 min readEA link

[Question] Can AI safely ex­ist at all?

Hayven Frienby27 Nov 2023 17:33 UTC
6 points
7 comments2 min readEA link

Linkpost: Epis­tle to the Successors

ukc1001414 Jul 2024 20:07 UTC
4 points
0 comments1 min readEA link
(ukc10014.github.io)

De­cep­tive Align­ment is <1% Likely by Default

DavidW21 Feb 2023 15:07 UTC
54 points
26 comments14 min readEA link

An­thropic’s sub­mis­sion to the White House’s RFI on AI policy

Agustín Covarrubias 🔸6 Mar 2025 22:47 UTC
48 points
7 comments1 min readEA link
(www.anthropic.com)

The Map of Shelters and Re­fuges from Global Risks (Plan B of X-risks Preven­tion)

turchin22 Oct 2016 10:22 UTC
16 points
9 comments7 min readEA link

Base Rates on United States Regime Collapse

AppliedDivinityStudies5 Apr 2021 17:14 UTC
15 points
3 comments9 min readEA link

The Pend­ing Disaster Fram­ing as it Re­lates to AI Risk

Chris Leong25 Feb 2024 15:47 UTC
8 points
2 comments6 min readEA link

The Doc­trine of Sovereign Sentience

Lance Wright20 May 2025 19:02 UTC
1 point
0 comments14 min readEA link

Cur­rent Es­ti­mates for Like­li­hood of X-Risk?

rhys_lindmark6 Aug 2018 18:05 UTC
24 points
23 comments1 min readEA link

Up­date from Cam­paign for AI Safety

Nik Samoylov1 Jun 2023 10:46 UTC
22 points
0 comments2 min readEA link
(www.campaignforaisafety.org)

Will Sen­tience Make AI’s Mo­ral­ity Bet­ter?

Ronen Bar18 May 2025 4:34 UTC
27 points
4 comments10 min readEA link

AI Sleeper Agents: How An­thropic Trains and Catches Them—Video

Writer30 Aug 2025 17:52 UTC
7 points
1 comment7 min readEA link
(youtu.be)

Best Coun­tries dur­ing Nu­clear War

AndreFerretti4 Mar 2022 11:19 UTC
7 points
15 comments1 min readEA link

Juan Gar­cía Martínez: In­dus­trial al­ter­na­tive foods for global catas­trophic risks

EA Global21 Nov 2020 8:12 UTC
12 points
0 comments1 min readEA link
(www.youtube.com)

Avert­ing Catas­tro­phe: De­ci­sion The­ory for COVID-19, Cli­mate Change, and Po­ten­tial Disasters of All Kinds

JakubK2 May 2023 22:50 UTC
15 points
0 comments1 min readEA link
(nyupress.org)

Pro­ject pro­posal: Sce­nario anal­y­sis group for AI safety strategy

Buhl18 Dec 2023 18:31 UTC
35 points
0 comments5 min readEA link
(rethinkpriorities.org)

[Question] What are some sources re­lated to big-pic­ture AI strat­egy?

Jacob Watts🔸2 Mar 2023 5:04 UTC
9 points
4 comments1 min readEA link

Yud­kowsky and Soares’ Book Is Empty

Oscar Davies5 Dec 2025 22:06 UTC
−6 points
8 comments7 min readEA link

Think­ing-in-limits about TAI from the de­mand per­spec­tive. De­mand sat­u­ra­tion, re­source wars, new debt.

Ivan Madan7 Nov 2023 22:44 UTC
2 points
0 comments4 min readEA link

Crit­i­cal-Set Views, Bio­graph­i­cal Iden­tity, and the Long Term

Elliott Thornley (EJT)28 Feb 2024 14:30 UTC
9 points
3 comments1 min readEA link
(philpapers.org)

Feed­back wanted! On script for an up­com­ing ~12 minute Rob Miles video on AI x-risk.

melissasamworth23 Jan 2025 21:46 UTC
25 points
0 comments1 min readEA link

Ad­vice Wanted on Ex­pand­ing an EA Project

Denkenberger🔸23 Apr 2016 23:20 UTC
4 points
3 comments2 min readEA link

What will the first hu­man-level AI look like, and how might things go wrong?

EuanMcLean23 May 2024 11:28 UTC
12 points
1 comment15 min readEA link

Se­cu­rity Among The Stars—a de­tailed ap­praisal of space set­tle­ment and ex­is­ten­tial risk

Christopher Lankhof13 Nov 2023 14:54 UTC
27 points
9 comments2 min readEA link

Risks from Bad Space Governance

Yannick_Muehlhaeuser17 Jul 2023 12:36 UTC
43 points
1 comment6 min readEA link

[Question] AI Re­searcher Sur­veys with Similar Re­sults to Katja Grace, 2024?

AlexChalk28 Jul 2025 23:39 UTC
6 points
1 comment1 min readEA link

Shortlist of Vi­atopia Interventions

Jordan Arel31 Oct 2025 3:00 UTC
10 points
1 comment33 min readEA link

In­tro­duc­ing Umoja Green­lands’ Effec­tive Altru­ism Ori­en­ta­tion for East Africa

Sorin Ionescu14 Apr 2025 14:44 UTC
9 points
1 comment1 min readEA link

A Slow Guide to Con­fronting Doom

Ruby8 Apr 2025 14:27 UTC
7 points
1 comment14 min readEA link

Nav­i­gat­ing Cul­tural Adap­ta­tion and the Limits of Progress Metrics

alexeusgr16 Sep 2025 2:41 UTC
2 points
0 comments2 min readEA link

[Question] Is AI safety still ne­glected?

Coafos30 Mar 2022 9:09 UTC
13 points
13 comments1 min readEA link

More Every­thing For­ever—a new book cri­tique of EA

Manuel Del Río Rodríguez 🔹6 Jun 2025 9:53 UTC
52 points
17 comments7 min readEA link

FLF Fel­low­ship on AI for Hu­man Rea­son­ing: $25-50k, 12 weeks

Oliver Sourbut19 May 2025 13:25 UTC
69 points
2 comments2 min readEA link
(www.flf.org)

US AI Safety In­sti­tute will be ‘gut­ted,’ Ax­ios reports

Matrice Jacobine🔸🏳️‍⚧️20 Feb 2025 14:40 UTC
12 points
1 comment1 min readEA link
(www.zdnet.com)

‘Now Is the Time of Mon­sters’

Aaron Goldzimer12 Jan 2025 23:31 UTC
25 points
0 comments1 min readEA link
(www.nytimes.com)

Risk fac­tors for s-risks

Tobias_Baumann13 Feb 2019 17:51 UTC
41 points
3 comments1 min readEA link
(s-risks.org)

An­i­mal Weapons: Les­son learned from biolog­i­cal arms race to mod­ern day weapons

Halwenge 25 Feb 2024 14:06 UTC
2 points
0 comments4 min readEA link

Why microplas­tics should mat­ter to EAs

BiancaCojocaru4 Dec 2023 9:27 UTC
4 points
2 comments3 min readEA link

UK Prime Minister Rishi Su­nak’s Speech on AI

Tobias Häberli26 Oct 2023 10:34 UTC
112 points
6 comments8 min readEA link
(www.gov.uk)

Long-Term Fu­ture Fund: Ask Us Any­thing!

AdamGleave3 Dec 2020 13:44 UTC
89 points
153 comments1 min readEA link

Public Opinion about Ex­is­ten­tial Risk

cscanlon25 Aug 2018 12:34 UTC
13 points
9 comments8 min readEA link

Bounty for Ev­i­dence on Some of Pal­isade Re­search’s Beliefs

bwr23 Sep 2024 20:05 UTC
5 points
0 comments1 min readEA link

Every­thing’s An Emergency

Bentham's Bulldog20 Mar 2025 17:11 UTC
27 points
1 comment2 min readEA link

[Link post] Will we see fast AI Take­off?

SammyDMartin30 Sep 2021 14:03 UTC
18 points
0 comments1 min readEA link

Some thoughts on fanaticism

Joey Marcellino20 Oct 2025 13:10 UTC
12 points
9 comments10 min readEA link

The Se­cond Man­hat­tan: His­tor­i­cal Les­sons for AGI Control

Chiastic Slide13 Oct 2025 23:50 UTC
2 points
0 comments7 min readEA link

Pro­pos­als for the AI Reg­u­la­tory Sand­box in Spain

Guillem Bas27 Apr 2023 10:33 UTC
55 points
2 comments11 min readEA link
(riesgoscatastroficosglobales.com)

Prepar­ing De­spite Uncer­tainty: The Grand Challenges of AI Progress

Andrew Knott7 Nov 2025 10:42 UTC
7 points
0 comments7 min readEA link

More thoughts on the Hu­man-AGI War

Ahrenbach27 Dec 2023 1:52 UTC
2 points
0 comments7 min readEA link

My Model of EA and AI Safety

Eva Lu24 Jun 2025 6:23 UTC
9 points
1 comment2 min readEA link

[Question] Where are all the deep­fakes?

Spiarrow3 Mar 2025 11:46 UTC
48 points
7 comments1 min readEA link

ea.do­mains—Do­mains Free to a Good Home

plex12 Jan 2023 13:32 UTC
48 points
9 comments4 min readEA link

On AI Weapons

kbog13 Nov 2019 12:48 UTC
76 points
10 comments30 min readEA link

Sum­mary of Deep Time Reck­on­ing by Vin­cent Ialenti

vinegar10@gmail.com31 Oct 2022 20:00 UTC
10 points
1 comment10 min readEA link

EA should help Tyler Cowen pub­lish his drafted book in China

Matt Brooks14 Jan 2023 21:10 UTC
38 points
8 comments3 min readEA link

An­nounc­ing the AIxBio Re­search Hub

Andy Morgan 🔸10 Sep 2025 10:25 UTC
24 points
2 comments1 min readEA link

On longter­mism, Bayesi­anism, and the dooms­day argument

iporphyry1 Sep 2022 0:27 UTC
30 points
5 comments13 min readEA link

(out­dated ver­sion) Why Vi­atopia is Important

Jordan Arel21 Oct 2025 11:33 UTC
4 points
0 comments18 min readEA link

LLMs won’t lead to AGI—Fran­cois Chollet

tobycrisford 🔸11 Jun 2024 20:19 UTC
40 points
23 comments1 min readEA link
(www.youtube.com)

Don’t Be Com­forted by Failed Apocalypses

ColdButtonIssues17 May 2022 11:20 UTC
20 points
13 comments1 min readEA link

LASST’s Pathogen Re­search Ami­cus Brief Project

Tyler Whitmer23 Dec 2024 16:20 UTC
13 points
1 comment6 min readEA link

New org an­nounce­ment: Would your pro­ject benefit from OSINT, satel­lite imagery anal­y­sis, or in­ter­na­tional se­cu­rity-re­lated re­search sup­port?

Christina22 Apr 2024 18:02 UTC
54 points
2 comments1 min readEA link

Think­ing be­yond the long-term

Alexander Caro14 Oct 2025 11:56 UTC
6 points
2 comments7 min readEA link
(medium.com)

A Rocket–In­ter­pretabil­ity Analogy

plex21 Oct 2024 13:55 UTC
14 points
1 comment1 min readEA link

The case for con­scious AI: Clear­ing the record [AI Con­scious­ness & Public Per­cep­tion]

Jay Luong5 Jul 2024 20:29 UTC
3 points
7 comments8 min readEA link

Bioweapons shelter pro­ject launch

Benevolent_Rain14 Jun 2022 3:44 UTC
75 points
19 comments8 min readEA link

Draw­ing down car­bon with vol­canic rock dust on farm­ers’ fields

Vivian18 Aug 2023 13:50 UTC
1 point
0 comments1 min readEA link
(e360.yale.edu)

Do not go gen­tle: why the Asym­me­try does not sup­port anti-natalism

Global Priorities Institute30 Apr 2021 13:26 UTC
4 points
0 comments2 min readEA link

Space gov­er­nance is im­por­tant, tractable and neglected

Tobias_Baumann7 Jan 2020 11:24 UTC
113 points
18 comments7 min readEA link

Which Post Idea Is Most Effec­tive?

Jordan Arel25 Apr 2022 4:47 UTC
26 points
6 comments2 min readEA link

AIxBio Newslet­ter #3 - At the Nexus

Andy Morgan 🔸7 Dec 2024 21:00 UTC
7 points
0 comments2 min readEA link
(atthenexus.substack.com)

In­ter­me­di­ate Re­port on Abrupt Sun­light Re­duc­tion Scenarios

Stan Pinsent20 Oct 2023 9:15 UTC
31 points
6 comments4 min readEA link

Why Solv­ing Ex­is­ten­tial Risks Re­lated to AI Might Re­quire Rad­i­cally New Approaches

Andy E Williams10 Jan 2024 10:31 UTC
1 point
0 comments6 min readEA link

Space gov­er­nance—prob­lem profile

finm8 May 2022 17:16 UTC
65 points
11 comments15 min readEA link

Why We Can’t Align AI Un­til We Align Ourselves

mag21 Oct 2025 16:11 UTC
1 point
0 comments6 min readEA link

Disen­tan­gling “Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing”

Lizka13 Sep 2021 23:50 UTC
96 points
16 comments19 min readEA link

We’re hiring a Writer to join our team at Our World in Data

Charlie Giattino18 Apr 2024 20:50 UTC
29 points
0 comments1 min readEA link
(ourworldindata.org)

What do XPT re­sults tell us about biorisk?

Forecasting Research Institute13 Sep 2023 20:05 UTC
23 points
2 comments11 min readEA link

Lec­ture Videos from Cam­bridge Con­fer­ence on Catas­trophic Risk

HaydnBelfield23 Apr 2019 16:03 UTC
15 points
3 comments1 min readEA link

Some thoughts on Leopold Aschen­bren­ner’s Si­tu­a­tional Aware­ness paper

Luke Dawes14 Jun 2024 13:50 UTC
14 points
1 comment3 min readEA link

2018 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

Larks18 Dec 2018 4:48 UTC
118 points
28 comments63 min readEA link

The Struc­tural Trans­for­ma­tion Case For Peacekeeping

Lauren Gilbert12 Nov 2024 20:30 UTC
32 points
9 comments1 min readEA link
(laurenpolicy.substack.com)

Hu­man­ity Learned Al­most Noth­ing From COVID-19

niplav19 Oct 2025 21:24 UTC
24 points
1 comment4 min readEA link

De­com­pos­ing al­ign­ment to take ad­van­tage of paradigms

Christopher King4 Jun 2023 14:26 UTC
2 points
0 comments4 min readEA link

Ex­is­ten­tial risk miti­ga­tion: What I worry about when there are only bad options

MMMaas19 Dec 2022 15:30 UTC
62 points
3 comments9 min readEA link