Existential risk

An existential risk is a risk that threatens the destruction of the long-term potential of life.[1] An existential risk could threaten the extinction of humans (and other sentient beings), or it could threaten some other unrecoverable collapse or permanent failure to achieve a potential good state. Natural risks such as those posed by asteroids or supervolcanoes could be existential risks, as could anthropogenic (human-caused) risks like accidents from synthetic biology or unaligned artificial intelligence.

Estimating the probability of an existential catastrophe, whether from a particular source or overall, is difficult, but estimates do exist: Toby Ord, for instance, puts the total existential risk over the next century at roughly 1 in 6.[1]
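
To see how per-source estimates relate to an overall figure, the sketch below combines illustrative per-factor probabilities under the simplifying assumption that the risks are independent. The numbers are placeholders loosely inspired by the kinds of estimates given in The Precipice, not authoritative figures.

```python
# Illustrative sketch: combining per-factor existential risk estimates
# into a total this-century figure, assuming (strongly) that the risks
# are independent. All numbers are placeholders, not published estimates.

risk_estimates = {
    "unaligned AI": 0.10,
    "engineered pandemics": 0.03,
    "nuclear war": 0.001,
    "climate change": 0.001,
    "natural risks (asteroids, supervolcanoes, ...)": 0.0001,
}

# P(at least one catastrophe) = 1 - P(no catastrophe from any factor)
p_no_catastrophe = 1.0
for factor, p in risk_estimates.items():
    p_no_catastrophe *= 1.0 - p

total_risk = 1.0 - p_no_catastrophe
print(f"Total risk this century (independence assumed): {total_risk:.1%}")
```

Since real risks interact (one catastrophe can raise or lower the probability of others), the independence assumption mainly serves to show that the total is dominated by the largest per-factor terms.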

Some view reducing existential risks as a key moral priority, for a variety of reasons.[2] Some simply regard the current level of existential risk as unacceptably high. Others argue that existential risks are especially important because the long-run future of humanity matters a great deal.[3] Many believe that there is no intrinsic moral difference between the importance of a life today and one in a hundred years, and there may be vastly more people in the future than there are now. On these assumptions, existential risks threaten not only the beings alive right now but also the enormous number of lives yet to be lived. One objection to this argument is that people have a special responsibility to other people currently alive that they do not have to people who have not yet been born.[4] Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and poorly understood that reducing them is less cost-effective than work on other promising areas.
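
The structure of this argument, and of the cost-effectiveness objection, can be made concrete with a back-of-the-envelope expected-value calculation. All inputs in the sketch below are hypothetical placeholders rather than anyone's published estimates.

```python
# Back-of-the-envelope expected value of an existential-risk intervention.
# Every input is a hypothetical placeholder chosen to show the structure
# of the argument, not an estimate from the literature.

future_lives = 1e16      # hypothetical: expected future lives absent catastrophe
risk_reduction = 1e-6    # hypothetical: absolute risk reduction from the intervention
cost = 1e9               # hypothetical: cost of the intervention, in dollars

expected_lives_saved = risk_reduction * future_lives  # 1e10 on these inputs
cost_per_life = cost / expected_lives_saved           # $0.10 on these inputs

print(f"Expected future lives saved: {expected_lives_saved:,.0f}")
print(f"Cost per expected life saved: ${cost_per_life:,.2f}")
```

On placeholder inputs like these the intervention looks extraordinarily cost-effective, which is why the debate turns on whether such small probabilities and risk reductions can be estimated meaningfully at all.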

In The Precipice: Existential Risk and the Future of Humanity, Toby Ord offers several policy and research recommendations for handling existential risks.[5]

Further reading

Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.

Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.

Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Ord, Toby (2020) Existential risks to humanity in Pedro Conceição (ed.) The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.

Sánchez, Sebastián (2022) Timeline of existential risk, Timelines Wiki.

Related entries

civilizational collapse | criticism of longtermism and existential risk studies | dystopia | estimation of existential risks | ethics of existential risk | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risk

1. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

2. Todd, Benjamin (2017) The case for reducing existential risks, 80,000 Hours website. (Updated June 2022.)

3. Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.

4. Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, July 21 (updated December 1, 2020).

5. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, pp. 280–281.

Venn di­a­grams of ex­is­ten­tial, global, and suffer­ing catastrophes

MichaelA🔸Jul 15, 2020, 12:28 PM
81 points
7 comments7 min readEA link

“Long-Ter­mism” vs. “Ex­is­ten­tial Risk”

Scott AlexanderApr 6, 2022, 9:41 PM
525 points
81 comments3 min readEA link

The ex­pected value of ex­tinc­tion risk re­duc­tion is positive

JanBDec 9, 2018, 8:00 AM
66 points
22 comments61 min readEA link

Chart­ing the precipice: The time of per­ils and pri­ori­tiz­ing x-risk

David BernardOct 24, 2023, 4:25 PM
86 points
14 comments25 min readEA link

Ac­tu­al­ism, asym­me­try and extinction

MichaelStJulesJan 7, 2025, 4:02 PM
24 points
0 comments9 min readEA link

Is x-risk the most cost-effec­tive if we count only the next few gen­er­a­tions?

Laura DuffyOct 30, 2023, 12:43 PM
120 points
7 comments20 min readEA link
(docs.google.com)

Katja Grace: Let’s think about slow­ing down AI

peterhartreeDec 23, 2022, 12:57 AM
84 points
6 comments2 min readEA link
(worldspiritsockpuppet.substack.com)

Ex­is­ten­tial risks are not just about humanity

MichaelA🔸Apr 28, 2020, 12:09 AM
36 points
0 comments5 min readEA link

What is ex­is­ten­tial se­cu­rity?

MichaelA🔸Sep 1, 2020, 9:40 AM
34 points
1 comment6 min readEA link

Ex­is­ten­tial risk as com­mon cause

technicalitiesDec 5, 2018, 2:01 PM
49 points
22 comments5 min readEA link

A longter­mist cri­tique of “The ex­pected value of ex­tinc­tion risk re­duc­tion is pos­i­tive”

Anthony DiGiovanniJul 1, 2021, 9:01 PM
145 points
10 comments32 min readEA link

Nick Bostrom – Ex­is­ten­tial Risk Preven­tion as Global Priority

Zach Stein-PerlmanFeb 1, 2013, 5:00 PM
15 points
1 comment1 min readEA link
(www.existential-risk.org)

Ex­is­ten­tial Risk Ob­ser­va­tory: re­sults and 2022 targets

OttoJan 14, 2022, 1:52 PM
22 points
6 comments4 min readEA link

Ex­cerpts from “Do­ing EA Bet­ter” on x-risk methodology

Eevee🔹Jan 26, 2023, 1:04 AM
22 points
5 comments6 min readEA link
(forum.effectivealtruism.org)

What fac­tors al­low so­cieties to sur­vive a crisis?

FJehnApr 9, 2024, 8:05 AM
23 points
1 comment10 min readEA link
(existentialcrunch.substack.com)

Database of ex­is­ten­tial risk estimates

MichaelA🔸Apr 15, 2020, 12:43 PM
130 points
37 comments5 min readEA link

On the as­sess­ment of vol­canic erup­tions as global catas­trophic or ex­is­ten­tial risks

Mike CassidyOct 13, 2021, 2:32 PM
112 points
18 comments19 min readEA link

Ob­jec­tives of longter­mist policy making

Henrik Øberg MyhreFeb 10, 2021, 6:26 PM
54 points
7 comments22 min readEA link

Quan­tify­ing the prob­a­bil­ity of ex­is­ten­tial catas­tro­phe: A re­ply to Beard et al.

MichaelA🔸Aug 10, 2020, 5:56 AM
21 points
3 comments3 min readEA link
(gcrinstitute.org)

X-risks to all life v. to humans

RobertHarlingJun 3, 2020, 3:40 PM
78 points
33 comments4 min readEA link

How bad would hu­man ex­tinc­tion be?

arvommOct 23, 2023, 12:01 PM
132 points
25 comments18 min readEA link

The con­se­quences of large-scale blackouts

FJehnOct 21, 2024, 6:12 AM
15 points
5 comments12 min readEA link
(existentialcrunch.substack.com)

The Fu­ture Might Not Be So Great

JacyJun 30, 2022, 1:01 PM
145 points
118 comments34 min readEA link
(www.sentienceinstitute.org)

Beyond Ex­tinc­tion: Re­vis­it­ing the Ques­tion and Broad­en­ing Our View

arvommMar 17, 2025, 4:03 PM
27 points
2 comments10 min readEA link

Clar­ify­ing ex­is­ten­tial risks and ex­is­ten­tial catastrophes

MichaelA🔸Apr 24, 2020, 1:27 PM
39 points
3 comments7 min readEA link

The Im­por­tance of Un­known Ex­is­ten­tial Risks

MichaelDickensJul 23, 2020, 7:09 PM
72 points
11 comments12 min readEA link

Some con­sid­er­a­tions for differ­ent ways to re­duce x-risk

JacyFeb 4, 2016, 3:21 AM
28 points
34 comments5 min readEA link

How much food is there?

FJehnSep 2, 2024, 6:29 AM
40 points
3 comments5 min readEA link
(existentialcrunch.substack.com)

Can a pan­demic cause hu­man ex­tinc­tion? Pos­si­bly, at least on priors

Vasco Grilo🔸Jul 15, 2024, 5:07 PM
29 points
4 comments6 min readEA link

Value lock-in is hap­pen­ing *now*

Isaac KingOct 15, 2024, 1:40 AM
12 points
17 comments4 min readEA link

ALTER Is­rael—Mid-year 2022 Update

DavidmanheimJun 12, 2022, 9:22 AM
63 points
0 comments2 min readEA link

Long Reflec­tion Read­ing List

Will AldredMar 24, 2024, 4:27 PM
92 points
7 comments14 min readEA link

The Odyssean Process

Odyssean InstituteNov 24, 2023, 1:48 PM
25 points
6 comments1 min readEA link
(www.odysseaninstitute.org)

Ex­is­ten­tial risk pes­simism and the time of perils

David ThorstadAug 12, 2022, 2:42 PM
177 points
67 comments20 min readEA link

Re­silient foods: How to feed ev­ery­one, even in the worst of times

FJehnDec 19, 2024, 11:12 AM
11 points
1 comment7 min readEA link
(existentialcrunch.substack.com)

Re­duc­ing long-term risks from malev­olent actors

David_AlthausApr 29, 2020, 8:55 AM
344 points
93 comments37 min readEA link

Beyond Sim­ple Ex­is­ten­tial Risk: Sur­vival in a Com­plex In­ter­con­nected World

Gideon FutermanNov 21, 2022, 2:35 PM
84 points
67 comments21 min readEA link

[Question] Seek­ing sug­gested read­ings & videos for a new course on ‘AI and Psy­chol­ogy’

Geoffrey MillerMay 20, 2024, 5:45 PM
32 points
7 comments1 min readEA link

How bad would nu­clear win­ter caused by a US-Rus­sia nu­clear ex­change be?

Luisa_RodriguezJun 20, 2019, 1:48 AM
145 points
18 comments43 min readEA link

Con­tra Sa­gan on As­teroid Weaponiza­tion

christian.rDec 4, 2024, 5:49 PM
24 points
1 comment14 min readEA link

The trou­ble with tip­ping points: Are we steer­ing to­wards a cli­mate catas­tro­phe or a man­age­able challenge?

FJehnJun 19, 2023, 8:57 AM
24 points
18 comments8 min readEA link
(existentialcrunch.substack.com)

[Question] How Much Does New Re­search In­form Us About Ex­is­ten­tial Cli­mate Risk?

zdgroffJul 22, 2020, 11:47 PM
63 points
5 comments1 min readEA link

A pro­posed hi­er­ar­chy of longter­mist concepts

ArepoOct 30, 2022, 4:26 PM
38 points
13 comments4 min readEA link

Causal di­a­grams of the paths to ex­is­ten­tial catastrophe

MichaelA🔸Mar 1, 2020, 2:08 PM
51 points
11 comments13 min readEA link

Miti­gat­ing x-risk through modularity

Toby NewberryDec 17, 2020, 7:54 PM
103 points
6 comments14 min readEA link

Sum­mary of posts on XPT fore­casts on AI risk and timelines

Forecasting Research InstituteJul 25, 2023, 8:42 AM
28 points
5 comments4 min readEA link

Nathan A. Sears (1987-2023)

HaydnBelfieldMar 29, 2023, 4:07 PM
296 points
7 comments4 min readEA link

An­nounc­ing the Q1 2025 Long-Term Fu­ture Fund grant round

LinchDec 20, 2024, 2:17 AM
53 points
12 comments2 min readEA link

Effec­tive strate­gies for chang­ing pub­lic opinion: A liter­a­ture review

Jamie_HarrisNov 9, 2021, 2:09 PM
81 points
2 comments36 min readEA link
(www.sentienceinstitute.org)

Nav­i­gat­ing the New Real­ity in DC: An EIP Primer

IanDavidMossDec 20, 2024, 4:59 PM
20 points
1 comment13 min readEA link
(effectiveinstitutionsproject.substack.com)

The Choice Transition

Owen Cotton-BarrattNov 18, 2024, 12:32 PM
43 points
1 comment15 min readEA link
(strangecities.substack.com)

Global catas­trophic risks law ap­proved in the United States

JorgeTorresCMar 7, 2023, 2:28 PM
157 points
7 comments1 min readEA link
(riesgoscatastroficosglobales.com)

The 25 re­searchers who have pub­lished the largest num­ber of aca­demic ar­ti­cles on ex­is­ten­tial risk

FJehnAug 12, 2023, 8:57 AM
34 points
21 comments4 min readEA link
(existentialcrunch.substack.com)

En­light­en­ment Values in a Vuln­er­a­ble World

Maxwell TabarrokJul 18, 2022, 11:54 AM
66 points
18 comments31 min readEA link

Some thoughts on Toby Ord’s ex­is­ten­tial risk estimates

MichaelA🔸Apr 7, 2020, 2:19 AM
67 points
33 comments9 min readEA link

2019 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

LarksDec 19, 2019, 2:58 AM
147 points
28 comments62 min readEA link

Why I pri­ori­tize moral cir­cle ex­pan­sion over re­duc­ing ex­tinc­tion risk through ar­tifi­cial in­tel­li­gence alignment

JacyFeb 20, 2018, 6:29 PM
107 points
72 comments35 min readEA link
(www.sentienceinstitute.org)

Diver­sity In Ex­is­ten­tial Risk Stud­ies Sur­vey: SJ Beard

Gideon FutermanNov 25, 2022, 4:29 PM
2 points
0 comments1 min readEA link

How AI Takeover Might Hap­pen in Two Years

JoshcFeb 7, 2025, 11:51 PM
29 points
7 comments29 min readEA link
(x.com)

In­ter­ac­tively Vi­su­al­iz­ing X-Risk

Conor Barnes 🔶Jul 29, 2022, 4:43 PM
52 points
27 comments1 min readEA link

Why AGI sys­tems will not be fa­nat­i­cal max­imisers (un­less trained by fa­nat­i­cal hu­mans)

titotalMay 17, 2023, 11:58 AM
43 points
3 comments15 min readEA link

Early-warn­ing Fore­cast­ing Cen­ter: What it is, and why it’d be cool

LinchMar 14, 2022, 7:20 PM
62 points
8 comments11 min readEA link

AGI Catas­tro­phe and Takeover: Some Refer­ence Class-Based Priors

zdgroffMay 24, 2023, 7:14 PM
103 points
10 comments6 min readEA link

The timing of labour aimed at re­duc­ing ex­is­ten­tial risk

Toby_Ord2Jul 4, 2014, 4:08 AM
21 points
7 comments7 min readEA link

Re­think’s CURVE Se­quence—The Good and the Gaps

JackMNov 28, 2023, 1:06 AM
96 points
7 comments10 min readEA link

In favour of ex­plor­ing nag­ging doubts about x-risk

Owen Cotton-BarrattJun 25, 2024, 11:52 PM
89 points
15 comments2 min readEA link

Ex­is­ten­tial Health Care Ethics: Call for Papers

Devin M. KellisSep 25, 2024, 12:34 PM
5 points
0 comments1 min readEA link

Two tools for re­think­ing ex­is­ten­tial risk

ArepoApr 5, 2024, 9:25 PM
82 points
14 comments25 min readEA link

A non-alarmist model of nu­clear winter

Stan PinsentJul 15, 2024, 10:00 AM
22 points
6 comments4 min readEA link

In­ter­me­di­ate goals for re­duc­ing risks from nu­clear weapons: A shal­low re­view (part 1/​4)

MichaelA🔸May 1, 2023, 3:04 PM
35 points
0 comments11 min readEA link
(docs.google.com)

Can a war cause hu­man ex­tinc­tion? Once again, not on priors

Vasco Grilo🔸Jan 25, 2024, 7:56 AM
67 points
29 comments18 min readEA link

Ex­per­i­men­tal longter­mism: the­ory needs data

Jan_KulveitMar 15, 2022, 10:05 AM
186 points
9 comments4 min readEA link

ALLFED’s 2024 Highlights

JuanGarciaNov 18, 2024, 11:34 AM
44 points
0 comments22 min readEA link

Book Re­view: The Precipice

Aaron Gertler 🔸Apr 9, 2020, 9:21 PM
39 points
0 comments17 min readEA link
(slatestarcodex.com)

AI Gover­nance: Op­por­tu­nity and The­ory of Impact

Allan DafoeSep 17, 2020, 6:30 AM
262 points
19 comments12 min readEA link

Ex­is­ten­tial Risk and Eco­nomic Growth

leopoldSep 3, 2019, 1:23 PM
112 points
31 comments1 min readEA link

Prior prob­a­bil­ity of this be­ing the most im­por­tant century

Vasco Grilo🔸Jul 15, 2023, 7:18 AM
8 points
2 comments2 min readEA link

Nu­clear risk re­search ideas: Sum­mary & introduction

MichaelA🔸Apr 8, 2022, 11:17 AM
103 points
4 comments7 min readEA link

Giv­ing Now vs. Later for Ex­is­ten­tial Risk: An Ini­tial Approach

MichaelDickensAug 29, 2020, 1:04 AM
14 points
2 comments28 min readEA link

[Question] Nu­clear safety/​se­cu­rity: Why doesn’t EA pri­ori­tize it more?

RockwellAug 30, 2023, 9:43 PM
33 points
20 comments1 min readEA link

The uni­ver­sal An­thro­pocene or things we can learn from exo-civil­i­sa­tions, even if we never meet any

FJehnApr 26, 2022, 12:06 PM
11 points
0 comments8 min readEA link

My per­sonal cruxes for fo­cus­ing on ex­is­ten­tial risks /​ longter­mism /​ any­thing other than just video games

MichaelA🔸Apr 13, 2021, 5:50 AM
55 points
28 comments3 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #1 - Sum­mary of the Paper

Alex HTJun 21, 2020, 9:22 AM
64 points
7 comments9 min readEA link

In­for­ma­tion se­cu­rity ca­reers for GCR reduction

ClaireZabelJun 20, 2019, 11:56 PM
187 points
35 comments8 min readEA link

Kevin Esvelt: Miti­gat­ing catas­trophic biorisks

EA GlobalSep 3, 2020, 6:11 PM
32 points
0 comments22 min readEA link
(www.youtube.com)

The Gover­nance Prob­lem and the “Pretty Good” X-Risk

Zach Stein-PerlmanAug 28, 2021, 8:00 PM
23 points
4 comments11 min readEA link

The op­tion value ar­gu­ment doesn’t work when it’s most needed

WinstonOct 24, 2023, 7:40 PM
131 points
6 comments6 min readEA link

Mis­takes in the moral math­e­mat­ics of ex­is­ten­tial risk (Part 1: In­tro­duc­tion and cu­mu­la­tive risk) - Reflec­tive altruism

Eevee🔹Jul 3, 2023, 6:33 AM
74 points
6 comments6 min readEA link
(ineffectivealtruismblog.com)

Cru­cial ques­tions for longtermists

MichaelA🔸Jul 29, 2020, 9:39 AM
104 points
17 comments19 min readEA link

AMA: Chris­tian Ruhl (se­nior global catas­trophic risk re­searcher at Founders Pledge)

LizkaSep 26, 2023, 9:50 AM
68 points
28 comments1 min readEA link

Eight high-level un­cer­tain­ties about global catas­trophic and ex­is­ten­tial risk

SiebeRozendalNov 28, 2019, 2:47 PM
85 points
9 comments5 min readEA link

Some global catas­trophic risk estimates

TamayFeb 10, 2021, 7:32 PM
106 points
15 comments1 min readEA link

Mis­takes in the moral math­e­mat­ics of ex­is­ten­tial risk (Part 2: Ig­nor­ing back­ground risk) - Reflec­tive altruism

Eevee🔹Jul 3, 2023, 6:34 AM
84 points
7 comments6 min readEA link
(ineffectivealtruismblog.com)

“Dis­ap­point­ing Fu­tures” Might Be As Im­por­tant As Ex­is­ten­tial Risks

MichaelDickensSep 3, 2020, 1:15 AM
96 points
18 comments25 min readEA link

Im­prov­ing dis­aster shelters to in­crease the chances of re­cov­ery from a global catastrophe

Nick_BecksteadFeb 19, 2014, 10:17 PM
24 points
5 comments26 min readEA link

Ex­is­ten­tial Risk Model­ling with Con­tin­u­ous-Time Markov Chains

Radical Empath IsmamJan 23, 2023, 8:32 PM
87 points
9 comments12 min readEA link

Can a ter­ror­ist at­tack cause hu­man ex­tinc­tion? Not on priors

Vasco Grilo🔸Dec 2, 2023, 8:20 AM
43 points
9 comments15 min readEA link

Ap­ply to join SHELTER Week­end this August

Joel BeckerJun 15, 2022, 2:21 PM
108 points
19 comments2 min readEA link

Progress stud­ies vs. longter­mist EA: some differences

Max_DanielMay 31, 2021, 9:35 PM
84 points
27 comments3 min readEA link

Propos­ing the Con­di­tional AI Safety Treaty (linkpost TIME)

OttoNov 15, 2024, 1:56 PM
12 points
6 comments3 min readEA link
(time.com)

Don’t Let Other Global Catas­trophic Risks Fall Be­hind: Sup­port ORCG in 2024

JorgeTorresCNov 11, 2024, 6:27 PM
48 points
1 comment4 min readEA link

Two im­por­tant re­cent AI Talks- Ge­bru and Lazar

Gideon FutermanMar 6, 2023, 1:30 AM
−7 points
5 comments1 min readEA link

What new x- or s-risk field­build­ing or­gani­sa­tions would you like to see? An EOI form. (FBB #3)

gergoFeb 17, 2025, 12:37 PM
28 points
3 comments2 min readEA link

Read­ing Group Launch: In­tro­duc­tion to Nu­clear Is­sues, March-April 2023

IsabelFeb 3, 2023, 2:55 PM
11 points
2 comments3 min readEA link

Re­duc­ing x-risk might be ac­tively harmful

MountainPathNov 18, 2024, 2:18 PM
22 points
9 comments1 min readEA link

2024: a year of con­soli­da­tion for ORCG

JorgeTorresCDec 18, 2024, 5:47 PM
33 points
0 comments7 min readEA link
(www.orcg.info)

Differ­en­tial knowl­edge interconnection

Roman LeventovOct 12, 2024, 12:52 PM
3 points
1 comment1 min readEA link

Ex­is­ten­tial Risks Con­ven­tion: pos­si­bil­ities to act

Manfred KohlerOct 17, 2024, 5:35 PM
1 point
0 comments2 min readEA link

How much should gov­ern­ments pay to pre­vent catas­tro­phes? Longter­mism’s limited role

EJTMar 19, 2023, 4:50 PM
258 points
35 comments35 min readEA link
(philpapers.org)

Is the Far Fu­ture Ir­rele­vant for Mo­ral De­ci­sion-Mak­ing?

Tristan DOct 1, 2024, 7:42 AM
35 points
31 comments2 min readEA link
(www.sciencedirect.com)

Su­per­vol­ca­noes tail risk has been ex­ag­ger­ated?

Vasco Grilo🔸Mar 6, 2024, 8:38 AM
46 points
9 comments8 min readEA link
(journals.ametsoc.org)

The Parable of the Boy Who Cried 5% Chance of Wolf

Kat WoodsAug 15, 2022, 2:22 PM
80 points
8 comments2 min readEA link

An­nounc­ing The Most Im­por­tant Cen­tury Writ­ing Prize

michelOct 31, 2022, 9:37 PM
48 points
0 comments2 min readEA link

2021 ALLFED Highlights

Ross_TiemanNov 17, 2021, 3:24 PM
45 points
1 comment16 min readEA link

A Land­scape Anal­y­sis of In­sti­tu­tional Im­prove­ment Opportunities

IanDavidMossMar 21, 2022, 12:15 AM
97 points
25 comments30 min readEA link

X-risk Miti­ga­tion Does Ac­tu­ally Re­quire Longter­mism

𝕮𝖎𝖓𝖊𝖗𝖆Nov 13, 2022, 7:40 PM
35 points
6 comments1 min readEA link

Re­search pro­ject idea: How should EAs re­act to fun­ders pul­ling out of the nu­clear risk space?

MichaelA🔸Apr 15, 2023, 2:37 PM
12 points
0 comments3 min readEA link

Draft re­port on ex­is­ten­tial risk from power-seek­ing AI

Joe_CarlsmithApr 28, 2021, 9:41 PM
88 points
34 comments1 min readEA link

Ap­ply to the Cavendish Labs Fel­low­ship (by 4/​15)

Derik KApr 3, 2023, 11:06 PM
35 points
2 comments1 min readEA link

[Question] Where should I give to help pre­vent nu­clear war?

Luke EureNov 19, 2023, 5:05 AM
20 points
10 comments1 min readEA link

Paus­ing for what?

MountainPathOct 21, 2024, 12:18 PM
6 points
1 comment1 min readEA link

[Question] Con­crete, ex­ist­ing ex­am­ples of high-im­pact risks from AI?

freedomandutilityApr 15, 2023, 10:19 PM
9 points
1 comment1 min readEA link

X-Risk Re­searchers Sur­vey

NitaSanghaApr 24, 2023, 8:06 AM
12 points
1 comment1 min readEA link

Ap­pli­ca­tions open! UChicago Ex­is­ten­tial Risk Lab­o­ra­tory’s 2023 Sum­mer Re­search Fellowship

ZacharyRudolphApr 1, 2023, 8:55 PM
39 points
1 comment1 min readEA link

Col­lec­tive in­tel­li­gence as in­fras­truc­ture for re­duc­ing broad ex­is­ten­tial risks

vickyCYangAug 2, 2021, 6:00 AM
30 points
6 comments11 min readEA link

Re­view: What We Owe The Future

Kelsey PiperNov 21, 2022, 9:41 PM
165 points
3 comments1 min readEA link
(asteriskmag.com)

Tech­ni­cal AGI safety re­search out­side AI

richard_ngoOct 18, 2019, 3:02 PM
91 points
5 comments3 min readEA link

Guard­ing Against Pandemics

Guarding Against PandemicsSep 18, 2021, 11:15 AM
72 points
15 comments4 min readEA link

Most* small prob­a­bil­ities aren’t pas­calian

Gregory Lewis🔸Aug 7, 2022, 4:17 PM
212 points
20 comments6 min readEA link

The value of x-risk re­duc­tion

Nathan_BarnardMay 21, 2022, 7:40 PM
19 points
10 comments4 min readEA link

Risks from atom­i­cally pre­cise man­u­fac­tur­ing—Prob­lem profile

Benjamin HiltonAug 9, 2022, 1:41 PM
53 points
4 comments5 min readEA link
(80000hours.org)

In­tro­duc­ing The Long Game Pro­ject: Table­top Ex­er­cises for a Re­silient Tomorrow

Dr Dan EpsteinMay 17, 2023, 8:56 AM
48 points
7 comments5 min readEA link

Dr Alt­man or: How I Learned to Stop Wor­ry­ing and Love the Killer AI

Barak GilaMar 11, 2024, 5:01 AM
−7 points
0 comments2 min readEA link

Key points from The Dead Hand, David E. Hoffman

KitAug 9, 2019, 1:59 PM
71 points
8 comments7 min readEA link

Bioinfohazards

FinSep 17, 2019, 2:41 AM
89 points
8 comments18 min readEA link

Lo­ca­tion Model­ling for Post-Nu­clear Re­fuge Bunkers

Bleddyn MottersheadFeb 14, 2024, 7:09 AM
10 points
2 comments15 min readEA link

Trade col­lapse: Cas­cad­ing risks in our global sup­ply chains

FJehnApr 25, 2024, 7:02 AM
9 points
1 comment8 min readEA link
(existentialcrunch.substack.com)

In­tro­duc­tion to Space and Ex­is­ten­tial Risk

JordanStoneSep 23, 2023, 7:56 PM
26 points
0 comments7 min readEA link

Assess­ing Cli­mate Change’s Con­tri­bu­tion to Global Catas­trophic Risk

HaydnBelfieldFeb 19, 2021, 4:26 PM
27 points
8 comments37 min readEA link

An­i­mal Rights, The Sin­gu­lar­ity, and Astro­nom­i­cal Suffering

sapphireAug 20, 2020, 8:23 PM
51 points
0 comments3 min readEA link

Why poli­cy­mak­ers should be­ware claims of new “arms races” (Bul­letin of the Atomic Scien­tists)

christian.rJul 14, 2022, 1:38 PM
55 points
1 comment1 min readEA link
(thebulletin.org)

AI Safety Needs Great Engineers

Andy JonesNov 23, 2021, 9:03 PM
98 points
13 comments4 min readEA link

What If 99% of Hu­man­ity Van­ished? (A Hap­pier World video)

Jeroen Willems🔸Feb 16, 2023, 5:10 PM
16 points
1 comment3 min readEA link

Del­e­gated agents in prac­tice: How com­pa­nies might end up sel­l­ing AI ser­vices that act on be­half of con­sumers and coal­i­tions, and what this im­plies for safety research

RemmeltNov 26, 2020, 4:39 PM
11 points
0 comments4 min readEA link

21 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Sep 2019 up­date)

HaydnBelfieldNov 5, 2019, 2:26 PM
31 points
4 comments13 min readEA link

Fore­cast­ing Thread: Ex­is­ten­tial Risk

amandangoSep 22, 2020, 8:51 PM
24 points
4 comments2 min readEA link
(www.lesswrong.com)

Ques­tion­ing the Value of Ex­tinc­tion Risk Reduction

Red Team 8Jul 7, 2022, 4:44 AM
61 points
9 comments27 min readEA link

Long-Term Fu­ture Fund: Au­gust 2019 grant recommendations

Habryka [Deactivated]Oct 3, 2019, 6:46 PM
79 points
70 comments64 min readEA link

How will a nu­clear war end?

Kinoshita Yoshikazu (pseudonym)Jun 23, 2023, 10:50 AM
14 points
4 comments2 min readEA link

Ex­is­ten­tial risk x Crypto: An un­con­fer­ence at Zuzalu

YeshApr 11, 2023, 1:31 PM
6 points
0 comments1 min readEA link

Which World Gets Saved

trammellNov 9, 2018, 6:08 PM
155 points
27 comments3 min readEA link

A Biose­cu­rity and Biorisk Read­ing+ List

Tessa A 🔸Mar 14, 2021, 2:30 AM
135 points
13 comments12 min readEA link

Tom Moynihan on why prior gen­er­a­tions missed some of the biggest pri­ori­ties of all

80000_HoursJul 29, 2021, 4:38 PM
20 points
0 comments156 min readEA link

New US Se­nate Bill on X-Risk Miti­ga­tion [Linkpost]

Evan R. MurphyJul 4, 2022, 1:28 AM
22 points
12 comments1 min readEA link
(www.hsgac.senate.gov)

A pseudo math­e­mat­i­cal for­mu­la­tion of di­rect work choice be­tween two x-risks

Joseph BloomAug 11, 2022, 12:28 AM
7 points
0 comments4 min readEA link

On Col­lapse Risk (C-Risk)

Pawntoe4Jan 2, 2020, 5:10 AM
39 points
10 comments8 min readEA link

Great Power Conflict

Zach Stein-PerlmanSep 15, 2021, 3:00 PM
11 points
7 comments4 min readEA link

Nel­son Man­dela’s or­ga­ni­za­tion, The Elders, back­ing x risk pre­ven­tion and longtermism

krohmal5Feb 1, 2023, 6:40 AM
179 points
4 comments1 min readEA link
(theelders.org)

Good news on cli­mate change

John G. HalsteadOct 28, 2021, 2:04 PM
236 points
34 comments12 min readEA link

Call for Cruxes by Rhyme, a Longter­mist His­tory Con­sul­tancy

Lara_THMar 1, 2023, 10:20 AM
147 points
6 comments3 min readEA link

Nu­clear war is un­likely to cause hu­man extinction

Jeffrey LadishNov 7, 2020, 5:39 AM
61 points
27 comments11 min readEA link

An as­pira­tionally com­pre­hen­sive ty­pol­ogy of fu­ture locked-in scenarios

Milan Weibel🔹Apr 3, 2023, 2:11 AM
12 points
0 comments4 min readEA link

[Linkpost] Be­ware the Squir­rel by Ver­ity Harding

EarthlingSep 3, 2023, 9:04 PM
1 point
1 comment2 min readEA link
(samf.substack.com)

Which nu­clear wars should worry us most?

Luisa_RodriguezJun 16, 2019, 11:31 PM
103 points
13 comments6 min readEA link

Famine’s Role in So­cietal Collapse

FJehnOct 5, 2023, 6:19 AM
14 points
1 comment6 min readEA link
(existentialcrunch.substack.com)

My Ob­jec­tions to “We’re All Gonna Die with Eliezer Yud­kowsky”

Quintin PopeMar 21, 2023, 1:23 AM
166 points
21 comments39 min readEA link

BERI is seek­ing new trial collaborators

elizabethcooperJul 14, 2023, 5:08 PM
16 points
0 comments1 min readEA link

Nick Bostrom: An In­tro­duc­tion [early draft]

peterhartreeJul 31, 2021, 5:04 PM
38 points
0 comments19 min readEA link

An­nounc­ing “Fore­cast­ing Ex­is­ten­tial Risks: Ev­i­dence from a Long-Run Fore­cast­ing Tour­na­ment”

Forecasting Research InstituteJul 10, 2023, 5:04 PM
160 points
31 comments2 min readEA link

Sum­mary of “The Precipice” (3 of 4): Play­ing Rus­sian roulette with the future

rileyharrisAug 21, 2023, 7:55 AM
4 points
0 comments1 min readEA link
(www.millionyearview.com)

The end of the Bronze Age as an ex­am­ple of a sud­den col­lapse of civilization

FJehnOct 28, 2020, 12:55 PM
53 points
7 comments7 min readEA link

An­nounc­ing New Begin­ner-friendly Book on AI Safety and Risk

Darren McKeeNov 25, 2023, 3:57 PM
114 points
9 comments1 min readEA link

Paper Sum­mary: The Effec­tive­ness of AI Ex­is­ten­tial Risk Com­mu­ni­ca­tion to the Amer­i­can and Dutch Public

OttoMar 9, 2023, 10:40 AM
97 points
11 comments4 min readEA link

ALLFED 2020 Highlights

AronMNov 19, 2020, 10:06 PM
51 points
5 comments26 min readEA link

2023 Stan­ford Ex­is­ten­tial Risks Conference

elizabethcooperFeb 24, 2023, 5:49 PM
29 points
5 comments1 min readEA link

Dona­tion recom­men­da­tions for xrisk + ai safety

vincentweisserFeb 6, 2023, 9:25 PM
17 points
11 comments1 min readEA link

Long-Term Fu­ture Fund: April 2019 grant recommendations

Habryka [Deactivated]Apr 23, 2019, 7:00 AM
142 points
242 comments47 min readEA link

An­nounc­ing AXRP, the AI X-risk Re­search Podcast

DanielFilanDec 23, 2020, 8:10 PM
32 points
1 comment1 min readEA link

9/​26 is Petrov Day

LizkaSep 25, 2022, 11:14 PM
77 points
10 comments2 min readEA link
(www.lesswrong.com)

The Epistemic Challenge to Longter­mism (Tarsney, 2020)

MichaelA🔸Apr 4, 2021, 3:09 AM
79 points
27 comments2 min readEA link
(globalprioritiesinstitute.org)

Will the Treaty on the Pro­hi­bi­tion of Nu­clear Weapons af­fect nu­clear de­pro­lifer­a­tion through le­gal chan­nels?

Luisa_RodriguezDec 6, 2019, 10:38 AM
100 points
5 comments32 min readEA link

‘Are We Doomed?’ Memos

Miranda_ZhangMay 19, 2021, 1:51 PM
27 points
0 comments15 min readEA link

In­tro­duc­ing The Non­lin­ear Fund: AI Safety re­search, in­cu­ba­tion, and funding

Kat WoodsMar 18, 2021, 2:07 PM
71 points
32 comments5 min readEA link

[Link post] How plau­si­ble are AI Takeover sce­nar­ios?

SammyDMartinSep 27, 2021, 1:03 PM
26 points
0 comments1 min readEA link

[Question] Is some kind of min­i­mally-in­va­sive mass surveillance re­quired for catas­trophic risk pre­ven­tion?

Chris LeongJul 1, 2020, 11:32 PM
26 points
6 comments1 min readEA link

A se­lec­tion of cross-cut­ting re­sults from the XPT

Forecasting Research InstituteSep 26, 2023, 11:50 PM
18 points
1 comment9 min readEA link

Coun­ter­fac­tual catastrophes

FJehnNov 20, 2024, 7:12 PM
14 points
1 comment8 min readEA link
(existentialcrunch.substack.com)

Quan­tum, China, & Tech bifur­ca­tion; Why it Matters

Elias X. HuberNov 20, 2024, 3:28 PM
5 points
1 comment9 min readEA link

Miti­gat­ing Geo­mag­netic Storm and EMP Risks to the Elec­tri­cal Grid (Shal­low Dive)

DavidmanheimNov 26, 2024, 8:00 AM
9 points
1 comment1 min readEA link

Open Phil is hiring a leader for all our Global Catas­trophic Risks work

Alexander_BergerNov 15, 2024, 8:18 PM
90 points
2 comments1 min readEA link

Would US and Rus­sian nu­clear forces sur­vive a first strike?

Luisa_RodriguezJun 18, 2019, 12:28 AM
85 points
4 comments24 min readEA link

Bot­tle­necks and Solu­tions for the X-Risk Ecosystem

FlorentBerthetOct 8, 2018, 12:47 PM
53 points
12 comments8 min readEA link

Help me find the crux be­tween EA/​XR and Progress Studies

jasoncrawfordJun 2, 2021, 6:47 PM
119 points
37 comments3 min readEA link

Sim­plify EA Pitches to “Holy Shit, X-Risk”

Neel NandaFeb 11, 2022, 1:57 AM
185 points
78 comments11 min readEA link
(www.neelnanda.io)

Crit­i­cal Re­view of ‘The Precipice’: A Re­assess­ment of the Risks of AI and Pandemics

James FodorMay 11, 2020, 11:11 AM
111 points
32 comments26 min readEA link

Cos­mic AI safety

Magnus VindingDec 6, 2024, 10:32 PM
23 points
5 comments6 min readEA link

Democratis­ing Risk—or how EA deals with critics

CarlaZoeCDec 28, 2021, 3:05 PM
273 points
311 comments4 min readEA link

Dis­in­for­ma­tion as a GCR Threat Mul­ti­plier and Ev­i­dence Based Response

Ari96Jan 24, 2024, 11:19 AM
2 points
0 comments8 min readEA link

Video and Tran­script of Pre­sen­ta­tion on Ex­is­ten­tial Risk from Power-Seek­ing AI

Joe_CarlsmithMay 8, 2022, 3:52 AM
97 points
7 comments30 min readEA link

We should ex­pect to worry more about spec­u­la­tive risks

bgarfinkelMay 29, 2022, 9:08 PM
120 points
14 comments3 min readEA link

Ma­jor UN re­port dis­cusses ex­is­ten­tial risk and fu­ture gen­er­a­tions (sum­mary)

finmSep 17, 2021, 3:51 PM
320 points
5 comments12 min readEA link

Chain­ing the evil ge­nie: why “outer” AI safety is prob­a­bly easy

titotalAug 30, 2022, 1:55 PM
40 points
12 comments10 min readEA link

Desta­bi­liza­tion of the United States: The top X-fac­tor EA ne­glects?

Yelnats T.J.Jul 15, 2024, 2:54 AM
188 points
29 comments39 min readEA link

S-Risks: Fates Worse Than Ex­tinc­tion

A.G.G. LiuMay 4, 2024, 3:30 PM
104 points
9 comments6 min readEA link
(www.lesswrong.com)

AI Risk is like Ter­mi­na­tor; Stop Say­ing it’s Not

skluugMar 8, 2022, 7:17 PM
191 points
43 comments10 min readEA link
(skluug.substack.com)

Se­cureBio—Notes from SoGive

SoGiveMay 6, 2024, 9:15 PM
4 points
3 comments3 min readEA link

[Question] What do you make of the dooms­day ar­gu­ment?

niklasMar 19, 2021, 6:30 AM
14 points
8 comments1 min readEA link

How many peo­ple would be kil­led as a di­rect re­sult of a US-Rus­sia nu­clear ex­change?

Luisa_RodriguezJun 30, 2019, 3:00 AM
97 points
17 comments52 min readEA link

A New X-Risk Fac­tor: Brain-Com­puter Interfaces

JackAug 10, 2020, 10:24 AM
76 points
12 comments42 min readEA link

Edge of Ex­is­tence (2022)

Hugo WongApr 23, 2024, 6:39 PM
1 point
0 comments1 min readEA link
(www.documentaryarea.com)

Defend­ing against hy­po­thet­i­cal moon life dur­ing Apollo 11

eukaryoteJan 7, 2024, 11:59 PM
67 points
3 comments32 min readEA link
(eukaryotewritesblog.com)

[Question] (Where) Does an­i­mal x-risk fit?

Stephen RobcraftDec 21, 2023, 11:04 AM
21 points
8 comments1 min readEA link

Pod­cast: In­ter­view se­ries fea­tur­ing Dr. Peter Park

Jacob-HaimesMar 26, 2024, 12:35 AM
1 point
0 comments2 min readEA link
(into-ai-safety.github.io)

How I learned to stop wor­ry­ing and love X-risk

MoneroMar 11, 2024, 3:58 AM
11 points
1 comment1 min readEA link

[Question] Pro­jects tack­ling nu­clear risk?

SanjayMay 29, 2020, 10:41 PM
29 points
3 comments1 min readEA link

The best places to weather global catastrophes

FJehnMar 4, 2024, 7:57 AM
31 points
9 comments7 min readEA link
(existentialcrunch.substack.com)

BERI’s 2024 Goals and Predictions

elizabethcooperJan 12, 2024, 10:15 PM
9 points
0 comments1 min readEA link
(existence.org)

Tort Law Can Play an Im­por­tant Role in Miti­gat­ing AI Risk

Gabriel WeilFeb 12, 2024, 5:11 PM
99 points
6 comments5 min readEA link

AMA: Toby Ord, au­thor of “The Precipice” and co-founder of the EA movement

Toby_OrdMar 17, 2020, 2:39 AM
68 points
82 comments1 min readEA link

Sum­mary: Tiny Prob­a­bil­ities and the Value of the Far Fu­ture (Pe­tra Koso­nen)

Noah Varley🔸Feb 17, 2024, 2:11 PM
7 points
1 comment4 min readEA link

Not all x-risk is the same: im­pli­ca­tions of non-hu­man-descendants

NikolaDec 18, 2021, 9:22 PM
38 points
4 comments5 min readEA link

[Question] What am I miss­ing re. open-source LLM’s?

another-anon-do-gooderDec 4, 2023, 4:48 AM
1 point
2 comments1 min readEA link

Am­bi­guity aver­sion and re­duc­tion of X-risks: A mod­el­ling situation

Benedikt SchmidtSep 13, 2021, 7:16 AM
29 points
6 comments5 min readEA link

Sum­mary: Mis­takes in the Mo­ral Math­e­mat­ics of Ex­is­ten­tial Risk (David Thorstad)

Noah Varley🔸Apr 10, 2024, 2:21 PM
62 points
23 comments4 min readEA link

Nu­clear Fine-Tun­ing: How Many Wor­lds Have Been De­stroyed?

EmberAug 17, 2022, 1:13 PM
18 points
28 comments23 min readEA link

A Dou­ble Fea­ture on The Extropians

Maxwell TabarrokJun 3, 2023, 6:29 PM
47 points
3 comments1 min readEA link

Miti­gat­ing Eth­i­cal Con­cerns and Risks in the US Ap­proach to Au­tonomous Weapons Sys­tems through Effec­tive Altruism

VeeJun 11, 2023, 10:37 AM
5 points
2 comments4 min readEA link

Long list of AI ques­tions

NunoSempereDec 6, 2023, 11:12 AM
124 points
14 comments86 min readEA link

‘The Precipice’ Book Review

Matt GoodmanJul 27, 2020, 10:10 PM
14 points
1 comment4 min readEA link

How the Ukraine con­flict may in­fluence spend­ing on longter­mist pro­jects

Frank_RMar 16, 2022, 8:15 AM
23 points
3 comments2 min readEA link

En­gag­ing UK Cen­tre-Right Types in Ex­is­ten­tial Risk

Max_ThiloDec 4, 2023, 9:26 AM
17 points
0 comments1 min readEA link

Sen­tience In­sti­tute 2021 End of Year Summary

AliNov 26, 2021, 2:40 PM
66 points
5 comments6 min readEA link
(www.sentienceinstitute.org)

Assess­ing the Danger­ous­ness of Malev­olent Ac­tors in AGI Gover­nance: A Pre­limi­nary Exploration

Callum HinchcliffeOct 14, 2023, 9:18 PM
28 points
4 comments9 min readEA link

[Linkpost] OpenAI lead­ers call for reg­u­la­tion of “su­per­in­tel­li­gence” to re­duce ex­is­ten­tial risk.

Lowe LundinMay 25, 2023, 2:14 PM
5 points
0 comments1 min readEA link

Bear Brau­moel­ler has passed away

Stephen ClareMay 5, 2023, 2:06 PM
153 points
4 comments1 min readEA link

What is the like­li­hood that civ­i­liza­tional col­lapse would di­rectly lead to hu­man ex­tinc­tion (within decades)?

Luisa_RodriguezDec 24, 2020, 10:10 PM
296 points
37 comments50 min readEA link

The Re­think Pri­ori­ties Ex­is­ten­tial Se­cu­rity Team’s Strat­egy for 2023

Ben SnodinMay 8, 2023, 8:08 AM
92 points
3 comments16 min readEA link

Pop­u­la­tion After a Catastrophe

Stan PinsentOct 2, 2023, 4:06 PM
33 points
12 comments14 min readEA link

Could Ukraine re­take Crimea?

mhint199May 1, 2023, 1:06 AM
6 points
3 comments4 min readEA link

AGI ris­ing: why we are in a new era of acute risk and in­creas­ing pub­lic aware­ness, and what to do now

Greg_Colbourn ⏸️ May 2, 2023, 10:17 AM
68 points
35 comments13 min readEA link

Op­ti­mal Allo­ca­tion of Spend­ing on Ex­is­ten­tial Risk Re­duc­tion over an In­finite Time Hori­zon (in a too sim­plis­tic model)

Yassin AlayaAug 12, 2021, 8:14 PM
13 points
4 comments1 min readEA link

In­tro­duc­ing the Ex­is­ten­tial Risks In­tro­duc­tory Course (ERIC)

nandiniAug 19, 2022, 3:57 PM
57 points
14 comments7 min readEA link

Cos­mic’s Mug­ger : Should we re­ally de­lay cos­mic ex­pan­sion ?

Lysandre TerrisseJun 30, 2022, 6:41 AM
10 points
1 comment4 min readEA link

Ob­sta­cles to the U.S. for Sup­port­ing Ver­ifi­ca­tions in the BWC, and Po­ten­tial Solu­tions.

Garrett EhingerApr 14, 2023, 2:48 AM
27 points
2 comments16 min readEA link

Are you re­ally in a race? The Cau­tion­ary Tales of Szilárd and Ellsberg

HaydnBelfieldMay 19, 2022, 8:42 AM
487 points
44 comments18 min readEA link

Cli­mate anoma­lies and so­cietal collapse

FJehnFeb 8, 2024, 9:49 AM
13 points
6 comments10 min readEA link
(existentialcrunch.substack.com)

[Question] What would you ask a poli­cy­maker about ex­is­ten­tial risks?

James Nicholas BryantJul 6, 2021, 11:53 PM
24 points
2 comments1 min readEA link

EA is too fo­cused on the Man­hat­tan Project

trevor1Sep 5, 2022, 2:00 AM
17 points
0 comments1 min readEA link

Long-Term Fu­ture Fund AMA

HelenDec 19, 2018, 4:10 AM
39 points
30 comments1 min readEA link

Model­ling Great Power con­flict as an ex­is­ten­tial risk factor

Stephen ClareFeb 3, 2022, 11:41 AM
122 points
22 comments19 min readEA link

Rea­sons to have hope

Jordan Pieters 🔸Apr 20, 2023, 10:19 AM
53 points
4 comments1 min readEA link

Cli­mate Change & Longter­mism: new book-length report

John G. HalsteadAug 26, 2022, 9:13 AM
319 points
160 comments13 min readEA link

An­drew Sny­der Beat­tie: Biotech­nol­ogy and ex­is­ten­tial risk

EA GlobalNov 3, 2017, 7:43 AM
11 points
0 comments1 min readEA link
(www.youtube.com)

[Fu­ture Perfect] How to be a good ancestor

PabloJul 2, 2021, 1:17 PM
41 points
3 comments2 min readEA link
(www.vox.com)

Split­ting the timeline as an ex­tinc­tion risk intervention

NunoSempereFeb 6, 2022, 7:59 PM
14 points
27 comments4 min readEA link

Im­por­tant, ac­tion­able re­search ques­tions for the most im­por­tant century

Holden KarnofskyFeb 24, 2022, 4:34 PM
298 points
13 comments19 min readEA link

[Question] Is trans­for­ma­tive AI the biggest ex­is­ten­tial risk? Why or why not?

Eevee🔹Mar 5, 2022, 3:54 AM
9 points
10 comments1 min readEA link

Man­i­fund x AI Worldviews

AustinMar 31, 2023, 3:32 PM
32 points
2 comments2 min readEA link
(manifund.org)

Man­i­fund: What we’re fund­ing (weeks 2-4)

AustinAug 4, 2023, 4:00 PM
65 points
6 comments5 min readEA link
(manifund.substack.com)

Rus­sian x-risks newslet­ter, sum­mer 2019

avturchinSep 7, 2019, 9:55 AM
23 points
1 comment4 min readEA link

Nu­clear brinks­man­ship is not a good AI x-risk strategy

titotalMar 30, 2023, 10:07 PM
19 points
8 comments5 min readEA link

Be­ing at peace with Doom

Johannes C. MayerApr 9, 2023, 3:01 PM
15 points
7 comments4 min readEA link
(www.lesswrong.com)

ProMED, plat­form which alerted the world to Covid, might col­lapse—can EA donors fund it?

freedomandutilityAug 4, 2023, 4:42 PM
41 points
4 comments1 min readEA link

2020 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

LarksDec 21, 2020, 3:25 PM
155 points
16 comments68 min readEA link

[Question] How would you define “ex­is­ten­tial risk?”

LinchNov 29, 2021, 5:17 AM
12 points
4 comments1 min readEA link

My at­tempt at ex­plain­ing the case for AI risk in a straight­for­ward way

JulianHazellMar 25, 2023, 4:32 PM
25 points
7 comments18 min readEA link
(muddyclothes.substack.com)

Suc­ces­sif: Join our AI pro­gram to help miti­gate the catas­trophic risks of AI

ClaireBOct 25, 2023, 4:51 PM
15 points
0 comments5 min readEA link

“Effec­tive Altru­ism, Longter­mism, and the Prob­lem of Ar­bi­trary Power” by Gwilym David Blunt

WobblyPandaPandaNov 12, 2023, 1:21 AM
22 points
2 comments1 min readEA link
(www.thephilosopher1923.org)

The Top AI Safety Bets for 2023: GiveWiki’s Lat­est Recommendations

Dawn DrescherNov 11, 2023, 9:04 AM
11 points
4 comments8 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port April—Septem­ber 2019

HaydnBelfieldSep 30, 2019, 7:20 PM
14 points
1 comment16 min readEA link

Mea­sur­ing AI-Driven Risk with Stock Prices (Su­sana Cam­pos-Mart­ins)

Global Priorities InstituteDec 12, 2024, 2:22 PM
10 points
1 comment4 min readEA link
(globalprioritiesinstitute.org)

New Cause Area: Pro­gram­matic Mettā

Milan GriffesApr 1, 2021, 12:54 PM
4 points
1 comment2 min readEA link

5 home­grown EA pro­jects, seek­ing small donors

AustinOct 28, 2024, 11:24 PM
50 points
1 comment2 min readEA link

Longter­mism Fund: Au­gust 2023 Grants Report

Michael Townsend🔸Aug 20, 2023, 5:34 AM
81 points
3 comments5 min readEA link

Ries­gos Catas­trófi­cos Globales needs funding

Jaime SevillaAug 1, 2023, 4:26 PM
98 points
1 comment3 min readEA link

[Linkpost] Prospect Magaz­ine—How to save hu­man­ity from extinction

jackvaSep 26, 2023, 7:16 PM
32 points
2 comments1 min readEA link
(www.prospectmagazine.co.uk)

The Precipice: a risky re­view by a non-EA

Fernando Moreno 🔸Aug 8, 2020, 2:40 PM
14 points
1 comment18 min readEA link

AMA: Andy We­ber (U.S. As­sis­tant Sec­re­tary of Defense from 2009-2014)

LizkaSep 26, 2023, 9:40 AM
132 points
49 comments1 min readEA link

[Question] Why isn’t there a char­ity eval­u­a­tor for longter­mist pro­jects?

Eevee🔹Jul 29, 2023, 4:30 PM
106 points
44 comments1 min readEA link

State­ment on Plu­ral­ism in Ex­is­ten­tial Risk Stud­ies

Gideon FutermanAug 16, 2023, 2:29 PM
29 points
46 comments7 min readEA link

Thoughts on yes­ter­day’s UN Se­cu­rity Coun­cil meet­ing on AI

Greg_Colbourn ⏸️ Jul 19, 2023, 4:46 PM
31 points
2 comments1 min readEA link

Con­cepts of ex­is­ten­tial catas­tro­phe (Hilary Greaves)

Global Priorities InstituteNov 9, 2023, 5:42 PM
41 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

Man­i­fund: what we’re fund­ing (week 1)

AustinJul 15, 2023, 12:28 AM
43 points
11 comments3 min readEA link
(manifund.substack.com)

Hu­man sur­vival is a policy choice

Peter WildefordJun 3, 2022, 6:53 PM
27 points
2 comments6 min readEA link
(www.pasteurscube.com)

“Aligned with who?” Re­sults of sur­vey­ing 1,000 US par­ti­ci­pants on AI values

Holly MorganMar 21, 2023, 10:07 PM
41 points
0 comments2 min readEA link
(www.lesswrong.com)

Five Years of Re­think Pri­ori­ties: Im­pact, Fu­ture Plans, Fund­ing Needs (July 2023)

Rethink PrioritiesJul 18, 2023, 3:59 PM
110 points
3 comments16 min readEA link

Defin­ing Meta Ex­is­ten­tial Risk

rhys_lindmarkJul 9, 2019, 6:16 PM
13 points
3 comments4 min readEA link

An­nounc­ing the EA Archive

Aaron BergmanJul 6, 2023, 1:49 PM
70 points
18 comments2 min readEA link

An­nounc­ing the Ex­is­ten­tial In­foSec Forum

calebpJul 7, 2023, 9:08 PM
90 points
1 comment2 min readEA link

U.S. Has De­stroyed the Last of Its Once-Vast Chem­i­cal Weapons Arsenal

JMonty🔸Jul 18, 2023, 1:47 AM
19 points
2 comments1 min readEA link
(www.nytimes.com)

A re­sponse to Michael Plant’s re­view of What We Owe The Future

JackMOct 4, 2023, 11:40 PM
61 points
14 comments10 min readEA link

The GiveWiki’s Top Picks in AI Safety for the Giv­ing Sea­son of 2023

Dawn DrescherDec 7, 2023, 9:23 AM
26 points
0 comments3 min readEA link
(impactmarkets.substack.com)

[Question] Will the vast ma­jor­ity of tech­nolog­i­cal progress hap­pen in the longterm fu­ture?

Vasco Grilo🔸Jul 8, 2023, 8:40 AM
8 points
0 comments2 min readEA link

How Re­think Pri­ori­ties’ Re­search could in­form your grantmaking

kierangreig🔸Oct 4, 2023, 6:24 PM
59 points
0 comments2 min readEA link

Notes on nukes, IR, and AI from “Arse­nals of Folly” (and other books)

tlevinSep 4, 2023, 7:02 PM
21 points
2 comments6 min readEA link

Great power con­flict—prob­lem pro­file (sum­mary and high­lights)

Stephen ClareJul 7, 2023, 2:40 PM
110 points
6 comments5 min readEA link
(80000hours.org)

Juan B. Gar­cía Martínez on tack­ling many causes at once and his jour­ney into EA

Amber DawnJun 30, 2023, 1:48 PM
92 points
3 comments8 min readEA link
(contemplatonist.substack.com)

Start­ing the sec­ond Green Revolution

freedomandutilityJun 29, 2023, 12:23 PM
30 points
3 comments1 min readEA link

We are fight­ing a shared bat­tle (a call for a differ­ent ap­proach to AI Strat­egy)

Gideon FutermanMar 16, 2023, 2:37 PM
59 points
11 comments15 min readEA link

[Question] How can we se­cure more re­search po­si­tions at our uni­ver­si­ties for x-risk re­searchers?

Neil CrawfordSep 6, 2022, 2:41 PM
3 points
2 comments1 min readEA link

Ap­ply to Spring 2024 policy in­tern­ships (we can help)

ESOct 4, 2023, 2:45 PM
26 points
2 comments1 min readEA link

The most im­por­tant cli­mate change uncertainty

cwaJul 26, 2022, 3:15 PM
144 points
28 comments13 min readEA link

[Paper] In­ter­ven­tions that May Prevent or Mol­lify Su­per­vol­canic Eruptions

Denkenberger🔸Jan 15, 2018, 9:46 PM
23 points
8 comments1 min readEA link

An­nounc­ing the 2023 CLR Sum­mer Re­search Fellowship

stefan.torgesMar 17, 2023, 12:11 PM
81 points
0 comments3 min readEA link

An­nounc­ing Man­i­fund Regrants

AustinJul 5, 2023, 7:42 PM
217 points
51 comments4 min readEA link
(manifund.org)

Civ­i­liza­tion Re-Emerg­ing After a Catas­trophic Collapse

MichaelA🔸Jun 27, 2020, 3:22 AM
32 points
18 comments2 min readEA link
(www.youtube.com)

AI Tools for Ex­is­ten­tial Security

LizkaMar 14, 2025, 6:37 PM
43 points
6 comments11 min readEA link
(www.forethought.org)

Mo­ral er­ror as an ex­is­ten­tial risk

William_MacAskillMar 17, 2025, 4:22 PM
75 points
3 comments11 min readEA link

New book on s-risks

Tobias_BaumannOct 26, 2022, 12:04 PM
293 points
27 comments1 min readEA link

The catas­trophic pri­macy of re­ac­tivity over proac­tivity in gov­ern­men­tal risk as­sess­ment: brief UK case study

JuanGarciaSep 27, 2021, 3:53 PM
56 points
0 comments5 min readEA link

“Can We Sur­vive Tech­nol­ogy?” by John von Neumann

Eli RoseMar 13, 2023, 2:26 AM
51 points
0 comments1 min readEA link
(geosci.uchicago.edu)

Sen­tinel’s Global Risks Weekly Roundup #11/​2025. Trump in­vokes Alien Ene­mies Act, Chi­nese in­va­sion barges de­ployed in ex­er­cise.

NunoSempereMar 17, 2025, 7:37 PM
40 points
0 comments6 min readEA link
(blog.sentinel-team.org)

“Is this risk ac­tu­ally ex­is­ten­tial?” may be less im­por­tant than we think

Miquel Banchs-Piqué (prev. mikbp)Mar 3, 2023, 10:18 PM
8 points
8 comments2 min readEA link

Risks from Atom­i­cally Pre­cise Manufacturing

MichaelA🔸Aug 25, 2020, 9:53 AM
29 points
4 comments2 min readEA link
(www.openphilanthropy.org)

Fu­ture benefits of miti­gat­ing food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo🔸Mar 4, 2023, 4:22 PM
20 points
0 comments28 min readEA link

Mo­ral plu­ral­ism and longter­mism | Sunyshore

Eevee🔹Apr 17, 2021, 12:14 AM
26 points
0 comments5 min readEA link
(sunyshore.substack.com)

In­tro­duc­ing the new Ries­gos Catas­trófi­cos Globales team

Jaime SevillaMar 3, 2023, 11:04 PM
74 points
3 comments5 min readEA link
(riesgoscatastroficosglobales.com)

Risks from so­lar flares?

freedomandutilityMar 7, 2023, 11:12 AM
20 points
6 comments1 min readEA link

Maybe longter­mism isn’t for everyone

Eevee🔹Feb 10, 2023, 4:48 PM
39 points
17 comments1 min readEA link

Effec­tive al­tru­ists are already in­sti­tu­tion­al­ists and are do­ing far more than un­work­able longter­mism—A re­sponse to “On the Differ­ences be­tween Eco­mod­ernism and Effec­tive Altru­ism”

jackvaFeb 21, 2023, 6:08 PM
78 points
3 comments12 min readEA link

Tech­nolog­i­cal de­vel­op­ments that could in­crease risks from nu­clear weapons: A shal­low review

MichaelA🔸Feb 9, 2023, 3:41 PM
79 points
3 comments5 min readEA link
(bit.ly)

Pro­posal: Create A New Longter­mism Organization

Brian LuiFeb 7, 2023, 5:59 AM
25 points
37 comments6 min readEA link

PHILANTHROPY AND NUCLEAR RISK REDUCTION

ELNFeb 10, 2023, 10:48 AM
22 points
5 comments4 min readEA link

What does Putin’s sus­pen­sion of a nu­clear treaty to­day mean for x-risk from nu­clear weapons?

freedomandutilityFeb 21, 2023, 4:46 PM
37 points
2 comments1 min readEA link

How can we re­duce s-risks?

Tobias_BaumannJan 29, 2021, 3:46 PM
42 points
3 comments1 min readEA link
(centerforreducingsuffering.org)

APPG on Fu­ture Gen­er­a­tions im­pact re­port – Rais­ing the pro­file of fu­ture gen­er­a­tion in the UK Parliament

weeatquinceAug 12, 2020, 2:24 PM
87 points
2 comments17 min readEA link

Con­ver­sa­tion with Holden Karnofsky, Nick Beck­stead, and Eliezer Yud­kowsky on the “long-run” per­spec­tive on effec­tive altruism

Nick_BecksteadAug 18, 2014, 4:30 AM
11 points
7 comments6 min readEA link

FLI FAQ on the re­jected grant pro­posal controversy

TegmarkJan 19, 2023, 5:31 PM
331 points
132 comments1 min readEA link

Non-util­i­tar­ian effec­tive altruism

keir bradwellJan 29, 2023, 6:07 AM
42 points
10 comments17 min readEA link
(keirbradwell.substack.com)

Un­jour­nal’s 1st eval is up: Re­silient foods pa­per (Denken­berger et al) & AMA ~48 hours

david_reinsteinFeb 6, 2023, 7:18 PM
77 points
10 comments3 min readEA link
(sciety.org)

Some more pro­jects I’d like to see

finmFeb 25, 2023, 10:22 PM
67 points
13 comments24 min readEA link
(finmoorhouse.com)

More than Earth War­riors: The Di­verse Roles of Geo­scien­tists in Effec­tive Altruism

Christopher ChanAug 31, 2023, 6:30 AM
56 points
5 comments16 min readEA link

One Hun­dred Opinions on Nu­clear War (Ladish, 2019)

Will AldredDec 29, 2022, 8:23 PM
12 points
0 comments3 min readEA link
(jeffreyladish.com)

Re­think Pri­ori­ties: Seek­ing Ex­pres­sions of In­ter­est for Spe­cial Pro­jects Next Year

kierangreig🔸Nov 29, 2023, 1:44 PM
57 points
0 comments5 min readEA link

Re­duc­ing the neart­erm risk of hu­man ex­tinc­tion is not as­tro­nom­i­cally cost-effec­tive?

Vasco Grilo🔸Jun 9, 2024, 8:02 AM
20 points
37 comments8 min readEA link

EA needs more humor

SWKDec 1, 2022, 5:30 AM
35 points
14 comments5 min readEA link

Ex­is­ten­tial Risk: More to explore

EA HandbookJan 1, 2021, 10:15 AM
2 points
0 comments1 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Six Month Re­port: Novem­ber 2018 - April 2019

HaydnBelfieldMay 1, 2019, 3:34 PM
10 points
16 comments15 min readEA link

Tyler Cowen on effec­tive al­tru­ism (De­cem­ber 2022)

peterhartreeJan 13, 2023, 9:39 AM
76 points
11 comments20 min readEA link
(youtu.be)

Database of orgs rele­vant to longter­mist/​x-risk work

MichaelA🔸Nov 19, 2021, 8:50 AM
104 points
65 comments4 min readEA link

Teruji Thomas, ‘The Asym­me­try, Uncer­tainty, and the Long Term’

PabloNov 5, 2019, 8:24 PM
43 points
6 comments1 min readEA link
(globalprioritiesinstitute.org)

Sav­ing lives near the precipice

MikhailSaminJul 29, 2022, 3:08 PM
18 points
10 comments3 min readEA link

In­tro­duc­ing the Si­mon In­sti­tute for Longterm Gover­nance (SI)

maximeMar 29, 2021, 6:10 PM
116 points
23 comments11 min readEA link

Should We Pri­ori­tize Long-Term Ex­is­ten­tial Risk?

MichaelDickensAug 20, 2020, 2:23 AM
28 points
17 comments3 min readEA link

ALLFED 2019 An­nual Re­port and Fundrais­ing Appeal

AronMNov 23, 2019, 2:05 AM
42 points
12 comments21 min readEA link

Warn­ing Shots Prob­a­bly Wouldn’t Change The Pic­ture Much

So8resOct 6, 2022, 5:15 AM
93 points
20 comments2 min readEA link

80,000 Hours ca­reer re­view: In­for­ma­tion se­cu­rity in high-im­pact areas

80000_HoursJan 16, 2023, 12:45 PM
56 points
10 comments11 min readEA link
(80000hours.org)

Case stud­ies of self-gov­er­nance to re­duce tech­nol­ogy risk

jiaApr 6, 2021, 8:49 AM
55 points
6 comments7 min readEA link

What Re­think Pri­ori­ties Gen­eral Longter­mism Team Did in 2022, and Up­dates in Light of the Cur­rent Situation

LinchDec 14, 2022, 1:37 PM
162 points
9 comments19 min readEA link

Geo­eng­ineer­ing to re­duce global catas­trophic risk?

Niklas LehmannMay 29, 2022, 3:50 PM
7 points
3 comments10 min readEA link

AMA: To­bias Bau­mann, Cen­ter for Re­duc­ing Suffering

Tobias_BaumannSep 6, 2020, 10:45 AM
48 points
45 comments1 min readEA link

Linkpost for var­i­ous re­cent es­says on suffer­ing-fo­cused ethics, pri­ori­ties, and more

Magnus VindingSep 28, 2022, 8:58 AM
87 points
0 comments5 min readEA link
(centerforreducingsuffering.org)

Kurzge­sagt—The Last Hu­man (Longter­mist video)

LizkaJun 28, 2022, 8:16 PM
150 points
17 comments1 min readEA link
(www.youtube.com)

A Sim­ple Model of AGI De­ploy­ment Risk

djbinderJul 9, 2021, 9:44 AM
30 points
0 comments5 min readEA link

Does cli­mate change de­serve more at­ten­tion within EA?

BenApr 17, 2019, 6:50 AM
152 points
65 comments15 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port Oc­to­ber 2019 - Jan­uary 2020

HaydnBelfieldApr 8, 2020, 1:28 PM
8 points
0 comments17 min readEA link

[Question] Why does (any par­tic­u­lar) AI safety work re­duce s-risks more than it in­creases them?

MichaelStJulesOct 3, 2021, 4:55 PM
48 points
19 comments1 min readEA link

Global Devel­op­ment → re­duced ex-risk/​long-ter­mism. (Ini­tial draft/​ques­tion)

ArnoAug 13, 2022, 4:29 PM
3 points
3 comments1 min readEA link

Ad­dress­ing Global Poverty as a Strat­egy to Im­prove the Long-Term Future

bshumwayAug 7, 2020, 6:27 AM
40 points
18 comments16 min readEA link

Rus­sian x-risks newslet­ter, fall 2019

avturchinDec 3, 2019, 5:01 PM
27 points
2 comments3 min readEA link

Case study: Re­duc­ing catas­trophic risk from in­side the US bureaucracy

Tom_GreenJun 2, 2022, 4:07 AM
41 points
2 comments16 min readEA link

Re­place Neglectedness

Indra Gesink 🔸Jan 16, 2023, 5:42 PM
52 points
4 comments4 min readEA link

“Safety Cul­ture for AI” is im­por­tant, but isn’t go­ing to be easy

DavidmanheimJun 26, 2023, 11:27 AM
53 points
0 comments2 min readEA link
(papers.ssrn.com)

Me­diocre AI safety as ex­is­ten­tial risk

technicalitiesMar 16, 2022, 11:50 AM
52 points
12 comments3 min readEA link

Why mak­ing as­ter­oid deflec­tion tech might be bad

MichaelDelloMay 20, 2020, 11:01 PM
27 points
10 comments6 min readEA link

An­nounc­ing the Nu­clear Risk Fore­cast­ing Tournament

MichaelA🔸Jun 16, 2021, 4:12 PM
38 points
0 comments2 min readEA link

Global Pri­ori­ties In­sti­tute: Re­search Agenda

Aaron Gertler 🔸Jan 20, 2021, 8:09 PM
22 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

A Cri­tique of The Precipice: Chap­ter 6 - The Risk Land­scape [Red Team Challenge]

Sarah WeilerJun 26, 2022, 10:59 AM
57 points
2 comments21 min readEA link

Should marginal longter­mist dona­tions sup­port fun­da­men­tal or in­ter­ven­tion re­search?

MichaelA🔸Nov 30, 2020, 1:10 AM
43 points
4 comments15 min readEA link

On fu­ture peo­ple, look­ing back at 21st cen­tury longtermism

Joe_CarlsmithMar 22, 2021, 8:21 AM
102 points
13 comments12 min readEA link

Sur­viv­ing Global Catas­tro­phe in Nu­clear Sub­marines as Refuges

turchinApr 5, 2017, 8:06 AM
14 points
4 comments1 min readEA link

Assess­ing global catas­trophic biolog­i­cal risks (Crys­tal Wat­son)

EA GlobalJun 8, 2018, 7:15 AM
9 points
0 comments9 min readEA link
(www.youtube.com)

Marc Lip­sitch: Prevent­ing catas­trophic risks by miti­gat­ing sub­catas­trophic ones

EA GlobalJun 2, 2017, 8:48 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

[Question] What are the best ar­ti­cles/​blogs on the psy­chol­ogy of ex­is­ten­tial risk?

Geoffrey MillerDec 16, 2020, 6:05 PM
24 points
7 comments1 min readEA link

“Tech com­pany sin­gu­lar­i­ties”, and steer­ing them to re­duce x-risk

Andrew CritchMay 13, 2022, 5:26 PM
51 points
5 comments4 min readEA link

In­tro­duc­ing the Ex­is­ten­tial Risk Observatory

OttoAug 12, 2021, 3:51 PM
39 points
0 comments5 min readEA link

.01% Fund—Ideation and Proposal

LinchMar 1, 2022, 6:25 PM
69 points
23 comments5 min readEA link

[Linkpost] Don’t Look Up—a Net­flix com­edy about as­ter­oid risk and re­al­is­tic so­cietal re­ac­tions (Dec. 24th)

LinchNov 18, 2021, 9:40 PM
63 points
16 comments1 min readEA link
(www.youtube.com)

Free to at­tend: Cam­bridge Con­fer­ence on Catas­trophic Risk (19-21 April)

HaydnBelfieldMar 21, 2022, 1:23 PM
19 points
2 comments1 min readEA link

Seth Baum: Rec­on­cil­ing in­ter­na­tional security

EA GlobalJun 8, 2018, 7:15 AM
9 points
0 comments15 min readEA link
(www.youtube.com)

Age-Weighted Voting

William_MacAskillJul 12, 2019, 3:21 PM
73 points
40 comments6 min readEA link

Com­mon-sense cases where “hy­po­thet­i­cal fu­ture peo­ple” matter

tlevinAug 12, 2022, 2:05 PM
107 points
21 comments4 min readEA link

How x-risk pro­jects are differ­ent from startups

Jan_KulveitApr 5, 2019, 7:35 AM
67 points
9 comments1 min readEA link

Cause Pri­ori­ti­za­tion in Light of In­spira­tional Disasters

stecasJun 7, 2020, 7:52 PM
2 points
15 comments3 min readEA link

Per­sonal thoughts on ca­reers in AI policy and strategy

carrickflynnSep 27, 2017, 4:52 PM
56 points
28 comments18 min readEA link

Up­date on civ­i­liza­tional col­lapse research

Jeffrey LadishFeb 10, 2020, 11:40 PM
56 points
7 comments3 min readEA link

Com­pe­ti­tion for “For­tified Es­says” on nu­clear risk

MichaelA🔸Nov 17, 2021, 8:55 PM
35 points
0 comments3 min readEA link
(www.metaculus.com)

Fu­ture Mat­ters #4: AI timelines, AGI risk, and ex­is­ten­tial risk from cli­mate change

PabloAug 8, 2022, 11:00 AM
59 points
0 comments17 min readEA link

Hauke Hille­brandt: In­ter­na­tional agree­ments to spend per­centage of GDP on global pub­lic goods

EA GlobalNov 21, 2020, 8:12 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

Luisa Ro­driguez: The like­li­hood and sever­ity of a US-Rus­sia nu­clear exchange

EA GlobalOct 18, 2019, 6:05 PM
11 points
0 comments1 min readEA link
(www.youtube.com)

US Ci­ti­zens: Tar­geted poli­ti­cal con­tri­bu­tions are prob­a­bly the best pas­sive dona­tion op­por­tu­ni­ties for miti­gat­ing ex­is­ten­tial risk

Jeffrey LadishMay 5, 2022, 11:04 PM
51 points
20 comments5 min readEA link

Read­ing the ethi­cists 2: Hunt­ing for AI al­ign­ment papers

Charlie SteinerJun 6, 2022, 3:53 PM
9 points
0 comments1 min readEA link
(www.lesswrong.com)

A case for strat­egy re­search: what it is and why we need more of it

SiebeRozendalJun 20, 2019, 8:18 PM
70 points
8 comments20 min readEA link

Prevent­ing hu­man extinction

Peter SingerAug 19, 2013, 9:07 PM
25 points
6 comments5 min readEA link

Im­prov­ing the fu­ture by in­fluenc­ing ac­tors’ benev­olence, in­tel­li­gence, and power

MichaelA🔸Jul 20, 2020, 10:00 AM
76 points
15 comments17 min readEA link

Solv­ing al­ign­ment isn’t enough for a flour­ish­ing future

micFeb 2, 2024, 6:22 PM
27 points
0 comments22 min readEA link
(papers.ssrn.com)

In­tel­lec­tual Diver­sity in AI Safety

KRJul 22, 2020, 7:07 PM
21 points
8 comments3 min readEA link

We should say more than “x-risk is high”

OllieBaseDec 16, 2022, 10:09 PM
52 points
12 comments4 min readEA link

Par­ti­ci­pate in the Hy­brid Fore­cast­ing-Per­sua­sion Tour­na­ment (on X-risk top­ics)

JhrosenbergApr 25, 2022, 10:13 PM
53 points
4 comments2 min readEA link

My highly per­sonal skep­ti­cism brain­dump on ex­is­ten­tial risk from ar­tifi­cial in­tel­li­gence.

NunoSempereJan 23, 2023, 8:08 PM
436 points
116 comments14 min readEA link
(nunosempere.com)

AGI x-risk timelines: 10% chance (by year X) es­ti­mates should be the head­line, not 50%.

Greg_Colbourn ⏸️ Mar 1, 2022, 12:02 PM
69 points
22 comments2 min readEA link

EAGxVir­tual 2020 light­ning talks

EA GlobalJan 25, 2021, 3:32 PM
13 points
1 comment33 min readEA link
(www.youtube.com)

Thoughts on “A case against strong longter­mism” (Mas­rani)

MichaelA🔸May 3, 2021, 2:22 PM
39 points
33 comments2 min readEA link

Google Maps nuke-mode

AndreFerrettiJan 31, 2023, 9:37 PM
11 points
6 comments1 min readEA link

War Between the US and China: A case study for epistemic challenges around China-re­lated catas­trophic risk

Jordan_SchneiderAug 12, 2022, 2:19 AM
76 points
17 comments43 min readEA link

Na­ture: Nu­clear war be­tween two na­tions could spark global famine

TynerAug 15, 2022, 8:55 PM
15 points
1 comment1 min readEA link
(www.nature.com)

Con­cern­ing the Re­cent 2019-Novel Coron­avirus Outbreak

Matthew_BarnettJan 27, 2020, 5:47 AM
144 points
142 comments3 min readEA link

A Sur­vey of the Po­ten­tial Long-term Im­pacts of AI

Sam ClarkeJul 18, 2022, 9:48 AM
63 points
2 comments27 min readEA link

What is it like do­ing AI safety work?

Kat WoodsFeb 21, 2023, 7:24 PM
99 points
2 comments10 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk: Six Month Re­port May-Oc­to­ber 2018

HaydnBelfieldNov 30, 2018, 8:32 PM
26 points
2 comments17 min readEA link

CSER Spe­cial Is­sue: ‘Fu­tures of Re­search in Catas­trophic and Ex­is­ten­tial Risk’

HaydnBelfieldOct 2, 2018, 5:18 PM
9 points
1 comment1 min readEA link

Rus­sian x-risks newslet­ter win­ter 2019-2020

avturchinMar 1, 2020, 12:51 PM
10 points
4 comments2 min readEA link

Longter­mist (es­pe­cially x-risk) ter­minol­ogy has bi­as­ing assumptions

ArepoOct 30, 2022, 4:26 PM
70 points
13 comments7 min readEA link

In­ter­na­tional Crim­i­nal Law and the Fu­ture of Hu­man­ity: A The­ory of the Crime of Omnicide

philosophytorresMar 22, 2021, 12:19 PM
−3 points
1 comment1 min readEA link

Com­mu­nity Build­ing for Grad­u­ate Stu­dents: A Tar­geted Approach

Neil CrawfordMar 29, 2022, 7:47 PM
13 points
0 comments3 min readEA link

Ex­tinc­tion risk re­duc­tion and moral cir­cle ex­pan­sion: Spec­u­lat­ing sus­pi­cious convergence

MichaelA🔸Aug 4, 2020, 11:38 AM
12 points
4 comments6 min readEA link

AI Could Defeat All Of Us Combined

Holden KarnofskyJun 10, 2022, 11:25 PM
143 points
14 comments17 min readEA link

How likely is a nu­clear ex­change be­tween the US and Rus­sia?

Luisa_RodriguezJun 20, 2019, 1:49 AM
80 points
13 comments14 min readEA link

In­ter­na­tional Co­op­er­a­tion Against Ex­is­ten­tial Risks: In­sights from In­ter­na­tional Re­la­tions Theory

Jenny_XiaoJan 11, 2021, 7:10 AM
41 points
7 comments6 min readEA link

“Holy Shit, X-risk” talk

michelAug 15, 2022, 5:04 AM
13 points
2 comments9 min readEA link

GCRI Open Call for Ad­visees and Collaborators

McKenna_FitzgeraldMay 20, 2021, 10:07 PM
13 points
0 comments4 min readEA link

[Cross­post] Why Un­con­trol­lable AI Looks More Likely Than Ever

OttoMar 8, 2023, 3:33 PM
49 points
6 comments4 min readEA link
(time.com)

Com­par­a­tive Bias

Joey🔸Nov 5, 2014, 5:57 AM
7 points
5 comments1 min readEA link

New in­fo­graphic based on “The Precipice”. any feed­back?

michael.andreggJan 14, 2021, 7:29 AM
50 points
4 comments1 min readEA link

Ex­is­ten­tial Choices Sym­po­sium with Will MacAskill and other spe­cial guests (3-5pm GMT Mon­day)

Toby Tremlett🔹Mar 14, 2025, 1:50 PM
69 points
154 comments2 min readEA link

Dis­cus­sion Thread: Ex­is­ten­tial Choices De­bate Week

Toby Tremlett🔹Mar 14, 2025, 5:20 PM
40 points
156 comments1 min readEA link

Com­mon Points of Ad­vice for Stu­dents and Early-Ca­reer Pro­fes­sion­als In­ter­ested in Global Catas­trophic Risk

SethBaumNov 16, 2021, 8:51 PM
60 points
5 comments15 min readEA link

FLI AI Align­ment pod­cast: Evan Hub­inger on In­ner Align­ment, Outer Align­ment, and Pro­pos­als for Build­ing Safe Ad­vanced AI

evhubJul 1, 2020, 8:59 PM
13 points
2 comments1 min readEA link
(futureoflife.org)

Launch of FERSTS Retreat

Theo KJun 17, 2022, 11:53 AM
26 points
0 comments2 min readEA link

[Question] Is it pos­si­ble to have a high level of hu­man het­ero­gene­ity and low chance of ex­is­ten­tial risks?

ekkaMay 24, 2022, 9:55 PM
4 points
0 comments1 min readEA link

EA Re­search Around Min­eral Re­source Exhaustion

haywyerJun 3, 2022, 12:59 AM
2 points
0 comments1 min readEA link

Ex­is­ten­tial risk and the fu­ture of hu­man­ity (Toby Ord)

EA GlobalMar 21, 2020, 6:05 PM
10 points
1 comment14 min readEA link
(www.youtube.com)

[Notes] Steven Pinker and Yu­val Noah Harari in conversation

BenFeb 9, 2020, 12:49 PM
29 points
2 comments7 min readEA link

Some AI re­search ar­eas and their rele­vance to ex­is­ten­tial safety

Andrew CritchDec 15, 2020, 12:15 PM
12 points
1 comment56 min readEA link
(alignmentforum.org)

My cur­rent thoughts on MIRI’s “highly re­li­able agent de­sign” work

Daniel_DeweyJul 7, 2017, 1:17 AM
60 points
59 comments19 min readEA link

Matt Lev­ine on the Arche­gos failure

Kelsey PiperJul 29, 2021, 7:36 PM
141 points
5 comments4 min readEA link

An­nounc­ing ERA: a spin-off from CERI

nandiniDec 13, 2022, 8:58 PM
55 points
7 comments3 min readEA link

Why I am prob­a­bly not a longtermist

Denise_MelchinSep 23, 2021, 5:24 PM
257 points
49 comments8 min readEA link

In­ter­view Thomas Moynihan: “The dis­cov­ery of ex­tinc­tion is a philo­soph­i­cal cen­tre­piece of the mod­ern age”

felix.hMar 6, 2021, 11:51 AM
15 points
0 comments18 min readEA link

Still no strong ev­i­dence that LLMs in­crease bioter­ror­ism risk

freedomandutilityNov 2, 2023, 9:23 PM
58 points
9 comments1 min readEA link

De­con­fus­ing Pauses: Long Term Mo­ra­to­rium vs Slow­ing AI

Gideon FutermanAug 4, 2024, 11:32 AM
17 points
3 comments5 min readEA link

Bon­nie Jenk­ins: Fireside chat

EA GlobalJul 22, 2020, 3:59 PM
18 points
0 comments25 min readEA link
(www.youtube.com)

[Question] Are there su­perfore­casts for ex­is­ten­tial risk?

Alex HTJul 7, 2020, 7:39 AM
24 points
13 comments1 min readEA link

“Nu­clear risk re­search, fore­cast­ing, & im­pact” [pre­sen­ta­tion]

MichaelA🔸Oct 21, 2021, 10:54 AM
20 points
0 comments1 min readEA link
(www.youtube.com)

Notes on Apollo re­port on biodefense

LinchJul 23, 2022, 9:38 PM
69 points
1 comment12 min readEA link
(biodefensecommission.org)

[Question] What would you say gives you a feel­ing of ex­is­ten­tial hope, and what can we do to in­spire more of it?

elteerkersJan 26, 2022, 1:46 PM
18 points
4 comments1 min readEA link

Why s-risks are the worst ex­is­ten­tial risks, and how to pre­vent them

Max_DanielJun 2, 2017, 8:48 AM
10 points
1 comment22 min readEA link
(www.youtube.com)

Sir Gavin and the green sky

technicalitiesDec 17, 2022, 11:28 PM
50 points
0 comments1 min readEA link

Jenny Xiao: Dual moral obli­ga­tions and in­ter­na­tional co­op­er­a­tion against global catas­trophic risks

EA GlobalNov 21, 2020, 8:12 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

[Question] What are the best re­sources on com­par­ing x-risk pre­ven­tion to im­prov­ing the value of the fu­ture in other ways?

LHAJun 26, 2022, 3:22 AM
8 points
3 comments1 min readEA link

Shelly Ka­gan—read­ings for Ethics and the Fu­ture sem­i­nar (spring 2021)

jamesJun 29, 2021, 9:59 AM
91 points
7 comments5 min readEA link
(docs.google.com)

[Question] What is the im­pact of the Nu­clear Ban Treaty?

DCNov 29, 2020, 12:26 AM
22 points
3 comments2 min readEA link

Max Teg­mark: Effec­tive al­tru­ism, ex­is­ten­tial risk, and ex­is­ten­tial hope

EA GlobalJun 2, 2017, 8:48 AM
11 points
0 comments1 min readEA link
(www.youtube.com)

Shap­ing Hu­man­ity’s Longterm Trajectory

Toby_OrdJul 18, 2023, 10:09 AM
173 points
57 comments2 min readEA link
(files.tobyord.com)

A (Very) Short His­tory of the Col­lapse of Civ­i­liza­tions, and Why it Matters

DavidmanheimAug 30, 2020, 7:49 AM
53 points
16 comments2 min readEA link

Three pillars for avoid­ing AGI catas­tro­phe: Tech­ni­cal al­ign­ment, de­ploy­ment de­ci­sions, and co­or­di­na­tion

LintzAAug 3, 2022, 9:24 PM
93 points
4 comments11 min readEA link

[Question] Strongest real-world ex­am­ples sup­port­ing AI risk claims?

rosehadsharSep 5, 2023, 3:11 PM
52 points
9 comments1 min readEA link

Jaan Tal­linn: Fireside chat (2018)

EA GlobalJun 8, 2018, 7:15 AM
9 points
0 comments12 min readEA link
(www.youtube.com)

[Question] How to find *re­li­able* ways to im­prove the fu­ture?

SjlverAug 18, 2022, 12:47 PM
53 points
35 comments2 min readEA link

Pri­ori­tiz­ing x-risks may re­quire car­ing about fu­ture people

eliflandAug 14, 2022, 12:55 AM
182 points
38 comments6 min readEA link
(www.foxy-scout.com)

In­tent al­ign­ment should not be the goal for AGI x-risk reduction

johnjnayOct 26, 2022, 1:24 AM
7 points
1 comment1 min readEA link

AI X-Risk: In­te­grat­ing on the Shoulders of Giants

TD_PilditchNov 1, 2022, 4:07 PM
34 points
0 comments47 min readEA link

Risks from Asteroids

finmFeb 11, 2022, 9:01 PM
44 points
9 comments8 min readEA link
(www.finmoorhouse.com)

3 sug­ges­tions about jar­gon in EA

MichaelA🔸Jul 5, 2020, 3:37 AM
131 points
18 comments5 min readEA link

Talk­ing With a Biose­cu­rity Pro­fes­sional (Quick Notes)

DirectedEvolutionApr 10, 2021, 4:23 AM
45 points
0 comments2 min readEA link

Launch­ing the EAF Fund

stefan.torgesNov 28, 2018, 5:13 PM
60 points
14 comments4 min readEA link

[Question] What ac­tions would ob­vi­ously de­crease x-risk?

Eli RoseOct 6, 2019, 9:00 PM
22 points
28 comments1 min readEA link

Cor­po­rate Global Catas­trophic Risks (C-GCRs)

Hauke HillebrandtJun 30, 2019, 4:53 PM
63 points
17 comments10 min readEA link

13 ideas for new Ex­is­ten­tial Risk Movies & TV Shows – what are your ideas?

HaydnBelfieldApr 12, 2022, 11:47 AM
81 points
15 comments4 min readEA link

Why I ex­pect suc­cess­ful (nar­row) alignment

Tobias_BaumannDec 29, 2018, 3:46 PM
18 points
10 comments1 min readEA link
(s-risks.org)

The Precipice—Sum­mary/​Review

NikolaOct 11, 2022, 12:06 AM
10 points
0 comments5 min readEA link

New Pod­cast: X-Risk Upskill

Anthony FlemingAug 27, 2022, 9:19 PM
12 points
4 comments1 min readEA link

Carl Ro­bichaud: Fac­ing the risk of nu­clear war in the 21st century

EA GlobalJul 15, 2020, 5:17 PM
16 points
0 comments12 min readEA link
(www.youtube.com)

Coun­ter­mea­sures & sub­sti­tu­tion effects in biosecurity

ASBDec 16, 2021, 9:40 PM
87 points
6 comments3 min readEA link

Towards a longter­mist frame­work for eval­u­at­ing democ­racy-re­lated interventions

Tom Barnes🔸Jul 28, 2021, 1:23 PM
96 points
5 comments30 min readEA link

The per­son-af­fect­ing value of ex­is­ten­tial risk reduction

Gregory Lewis🔸Apr 13, 2018, 1:44 AM
65 points
33 comments4 min readEA link

In­crease in fu­ture po­ten­tial due to miti­gat­ing food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo🔸Mar 28, 2023, 7:43 AM
12 points
2 comments8 min readEA link

Stan­ford Ex­is­ten­tial Risk Con­fer­ence Feb. 26/​27

kuhanjFeb 11, 2022, 12:56 AM
28 points
0 comments1 min readEA link

What suc­cess looks like

mariushobbhahnJun 28, 2022, 2:30 PM
112 points
20 comments19 min readEA link

The Pug­wash Con­fer­ences and the Anti-Bal­lis­tic Mis­sile Treaty as a case study of Track II diplomacy

rani_martinSep 16, 2022, 10:42 AM
82 points
5 comments27 min readEA link

My ar­ti­cle in The Na­tion — Cal­ifor­nia’s AI Safety Bill Is a Mask-Off Mo­ment for the Industry

GarrisonAug 15, 2024, 7:25 PM
134 points
0 comments1 min readEA link
(www.thenation.com)

In­tro­duc­ing the Si­mon In­sti­tute for Longterm Gover­nance (SI)

maximeMar 29, 2021, 6:10 PM
116 points
23 comments11 min readEA link

The last era of hu­man mistakes

Owen Cotton-BarrattJul 24, 2024, 9:56 AM
23 points
4 comments7 min readEA link
(strangecities.substack.com)

Beyond Max­ipok — good re­flec­tive gov­er­nance as a tar­get for action

Owen Cotton-BarrattMar 15, 2024, 10:22 PM
43 points
2 comments7 min readEA link

Thoughts on “The Case for Strong Longter­mism” (Greaves & MacAskill)

MichaelA🔸May 2, 2021, 6:00 PM
30 points
21 comments2 min readEA link

Hiring en­g­ineers and re­searchers to help al­ign GPT-3

Paul_ChristianoOct 1, 2020, 6:52 PM
107 points
19 comments3 min readEA link

Car­ing about excellence

Owen Cotton-BarrattJul 22, 2024, 2:24 PM
16 points
2 comments6 min readEA link

Hu­man­ity’s vast fu­ture and its im­pli­ca­tions for cause prioritization

Eevee🔹Jul 26, 2022, 5:04 AM
38 points
3 comments5 min readEA link
(sunyshore.substack.com)

Plan­ning ‘re­sis­tance’ to illiber­al­ism and authoritarianism

david_reinsteinJun 16, 2024, 5:21 PM
29 points
2 comments2 min readEA link
(www.nytimes.com)

The so­cial dis­in­cen­tives of warn­ing about un­likely risks

Lucius CaviolaJun 17, 2024, 11:20 AM
107 points
2 comments9 min readEA link
(outpaced.substack.com)

The Case for An­i­mal-In­clu­sive Longtermism

Eevee🔹Feb 17, 2024, 12:07 AM
66 points
7 comments30 min readEA link
(brill.com)

...but is in­creas­ing the value of fu­tures tractable?

DavidmanheimMar 19, 2025, 8:49 AM
45 points
21 comments1 min readEA link

In­cu­bat­ing AI x-risk pro­jects: some per­sonal reflections

Ben SnodinDec 19, 2023, 5:03 PM
84 points
10 comments9 min readEA link

New Open Philan­thropy Grant­mak­ing Pro­gram: Forecasting

Open PhilanthropyFeb 19, 2024, 11:27 PM
92 points
58 comments1 min readEA link
(www.openphilanthropy.org)

“Pivotal ques­tions”: an Un­jour­nal trial ini­ti­a­tive

david_reinsteinJul 21, 2024, 4:57 PM
48 points
2 comments7 min readEA link

MIT hiring: Cli­matic effects of limited nu­clear wars and “avert­ing ar­maged­don”

christian.rMar 15, 2024, 3:14 PM
16 points
0 comments2 min readEA link

Case study: Traits of con­trib­u­tors to a sig­nifi­cant policy suc­cess

Tom_GreenMar 29, 2024, 12:24 AM
37 points
1 comment38 min readEA link

Longter­mists are per­ceived as power-seeking

OllieBaseJun 20, 2023, 8:39 AM
133 points
43 comments2 min readEA link

What is the ex­pected effect of poverty alle­vi­a­tion efforts on ex­is­ten­tial risk?

WilliamKielyOct 2, 2015, 8:43 PM
13 points
25 comments1 min readEA link

[Question] What would it look like for AIS to no longer be ne­glected?

RockwellJun 16, 2023, 3:59 PM
100 points
15 comments1 min readEA link

UN Sec­re­tary-Gen­eral recog­nises ex­is­ten­tial threat from AI

Greg_Colbourn ⏸️ Jun 15, 2023, 5:03 PM
58 points
1 comment1 min readEA link

Will re­leas­ing the weights of large lan­guage mod­els grant wide­spread ac­cess to pan­demic agents?

Jeff Kaufman 🔸Oct 30, 2023, 5:42 PM
56 points
18 comments1 min readEA link
(arxiv.org)

“Don’t Look Up” and the cin­ema of ex­is­ten­tial risk | Slow Boring

Eevee🔹Jan 5, 2022, 4:28 AM
24 points
0 comments1 min readEA link
(www.slowboring.com)

Why Yud­kowsky is wrong about “co­va­lently bonded equiv­a­lents of biol­ogy”

titotalDec 6, 2023, 2:09 PM
29 points
20 comments16 min readEA link
(open.substack.com)

Thoughts on “The Offense-Defense Balance Rarely Changes”

Cullen 🔸Feb 12, 2024, 3:26 AM
42 points
4 comments5 min readEA link

AI take­off and nu­clear war

Owen Cotton-BarrattJun 11, 2024, 7:33 PM
72 points
5 comments11 min readEA link
(strangecities.substack.com)

UK gov­ern­ment to host first global sum­mit on AI Safety

DavidNashJun 8, 2023, 1:24 PM
78 points
1 comment5 min readEA link
(www.gov.uk)

X-risk Agnosticism

Richard Y Chappell🔸Jun 8, 2023, 3:02 PM
34 points
1 comment5 min readEA link
(rychappell.substack.com)

A note of cau­tion about re­cent AI risk coverage

Sean_o_hJun 7, 2023, 5:05 PM
283 points
29 comments3 min readEA link

Why some peo­ple dis­agree with the CAIS state­ment on AI

David_MossAug 15, 2023, 1:39 PM
144 points
15 comments16 min readEA link

[Linkpost] Given Ex­tinc­tion Wor­ries, Why Don’t AI Re­searchers Quit? Well, Sev­eral Reasons

Daniel_EthJun 6, 2023, 7:31 AM
25 points
6 comments1 min readEA link
(medium.com)

Pod­cast In­ter­view with David Thorstad on Ex­is­ten­tial Risk, The Time of Per­ils, and Billion­aire Philanthropy

Nick_AnyosJun 4, 2023, 8:52 AM
38 points
0 comments1 min readEA link
(critiquesofea.podbean.com)

Span­ish Trans­la­tion of “The Precipice” by Toby Ord (Unoffi­cial)

davidfrivaJun 6, 2023, 1:11 AM
14 points
0 comments1 min readEA link
(drive.google.com)

An­nounce­ment: You can now listen to the “AI Safety Fun­da­men­tals” courses

peterhartreeJun 9, 2023, 4:32 PM
101 points
8 comments1 min readEA link

Some thoughts on “AI could defeat all of us com­bined”

Milan GriffesJun 2, 2023, 3:03 PM
23 points
0 comments4 min readEA link

Open Philan­thropy is hiring for mul­ti­ple roles across our Global Catas­trophic Risks teams

Open PhilanthropySep 29, 2023, 11:24 PM
177 points
6 comments3 min readEA link

TED talk on Moloch and AI

LivBoereeNov 15, 2023, 7:28 PM
72 points
7 comments1 min readEA link

Com­plex­ity of value but not dis­value im­plies more fo­cus on s-risk. Mo­ral un­cer­tainty and prefer­ence util­i­tar­i­anism also do.

ChiFeb 13, 2024, 10:24 PM
95 points
7 comments2 min readEA link

The bul­ls­eye frame­work: My case against AI doom

titotalMay 30, 2023, 11:52 AM
71 points
15 comments17 min readEA link

EA read­ing list: longter­mism and ex­is­ten­tial risks

richard_ngoAug 3, 2020, 9:52 AM
35 points
3 comments1 min readEA link

Govern­ments Might Pre­fer Bring­ing Re­sources Back to the So­lar Sys­tem Rather than Space Set­tle­ment in Order to Main­tain Con­trol, Given that Govern­ing In­ter­stel­lar Set­tle­ments Looks Al­most Im­pos­si­ble

David Mathers🔸May 29, 2023, 11:16 AM
36 points
4 comments5 min readEA link

State­ment on AI Ex­tinc­tion—Signed by AGI Labs, Top Aca­demics, and Many Other Notable Figures

Center for AI SafetyMay 30, 2023, 9:06 AM
427 points
28 comments1 min readEA link
(www.safe.ai)

Red-team­ing ex­is­ten­tial risk from AI

Zed TararNov 30, 2023, 2:35 PM
30 points
16 comments6 min readEA link

Notes on “Bioter­ror and Biowar­fare” (2006)

MichaelA🔸Mar 1, 2021, 9:42 AM
29 points
6 comments4 min readEA link

A pro­posed ad­just­ment to the as­tro­nom­i­cal waste argument

Nick_BecksteadMay 27, 2013, 4:00 AM
43 points
0 comments12 min readEA link

Has Rus­sia’s In­va­sion of Ukraine Changed Your Mind?

JoelMcGuireMay 27, 2023, 6:35 PM
61 points
14 comments6 min readEA link

Epistemics (Part 2: Ex­am­ples) | Reflec­tive Altruism

Eevee🔹May 19, 2023, 9:28 PM
34 points
0 comments2 min readEA link
(ineffectivealtruismblog.com)

[Linkpost] “Gover­nance of su­per­in­tel­li­gence” by OpenAI

Daniel_EthMay 22, 2023, 8:15 PM
51 points
6 comments2 min readEA link
(openai.com)

[Link post] Michael Niel­sen’s “Notes on Ex­is­ten­tial Risk from Ar­tifi­cial Su­per­in­tel­li­gence”

Joel BeckerSep 19, 2023, 1:31 PM
38 points
1 comment6 min readEA link
(michaelnotebook.com)

Cul­ture and Pro­gram­ming Ret­ro­spec­tive: ERA Fel­low­ship 2023

Gideon FutermanSep 28, 2023, 4:45 PM
16 points
0 comments10 min readEA link

16 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Nov & Dec 2019 up­date)

HaydnBelfieldJan 15, 2020, 12:07 PM
21 points
0 comments8 min readEA link

New re­port on the state of AI safety in China

Geoffrey MillerOct 27, 2023, 8:20 PM
22 points
0 comments3 min readEA link
(concordia-consulting.com)

The Leeroy Jenk­ins prin­ci­ple: How faulty AI could guaran­tee “warn­ing shots”

titotalJan 14, 2024, 3:03 PM
55 points
2 comments21 min readEA link
(titotal.substack.com)

Ap­ply to lead a pro­ject dur­ing the next vir­tual AI Safety Camp

Linda LinseforsSep 13, 2023, 1:29 PM
16 points
0 comments1 min readEA link
(aisafety.camp)

An ex­haus­tive list of cos­mic threats

JordanStoneDec 4, 2023, 5:59 PM
76 points
19 comments7 min readEA link

Astro­nom­i­cal Waste: The Op­por­tu­nity Cost of De­layed Tech­nolog­i­cal Devel­op­ment—Nick Bostrom (2003)

jamesJun 10, 2021, 9:21 PM
10 points
0 comments8 min readEA link
(www.nickbostrom.com)

How Eng­ineers can Con­tribute to Civil­i­sa­tion Resilience

Jessica WenMay 3, 2023, 2:22 PM
41 points
3 comments8 min readEA link

New CSER Direc­tor: Prof Matthew Connelly

HaydnBelfieldMay 17, 2023, 8:38 AM
36 points
0 comments1 min readEA link

“Di­a­mon­doid bac­te­ria” nanobots: deadly threat or dead-end? A nan­otech in­ves­ti­ga­tion

titotalSep 29, 2023, 2:01 PM
102 points
33 comments20 min readEA link
(titotal.substack.com)

If Con­trac­tu­al­ism, Then AMF

Bob FischerOct 13, 2023, 6:03 PM
62 points
54 comments24 min readEA link

[April Fools’ Day] In­tro­duc­ing Open As­teroid Impact

LinchApr 1, 2024, 8:14 AM
286 points
13 comments1 min readEA link
(openasteroidimpact.org)

Sum­mary of “The Precipice” (1 of 4): As­teroids, vol­ca­noes and ex­plod­ing stars

rileyharrisAug 7, 2023, 3:57 AM
9 points
0 comments3 min readEA link
(www.millionyearview.com)

Refram­ing the bur­den of proof: Com­pa­nies should prove that mod­els are safe (rather than ex­pect­ing au­di­tors to prove that mod­els are dan­ger­ous)

AkashApr 25, 2023, 6:49 PM
35 points
1 comment1 min readEA link

Dis­cus­sion about AI Safety fund­ing (FB tran­script)

AkashApr 30, 2023, 7:05 PM
104 points
10 comments6 min readEA link

Causes and Uncer­tainty: Re­think­ing Value in Expectation

Bob FischerOct 11, 2023, 9:15 AM
220 points
29 comments15 min readEA link

On pre­sent­ing the case for AI risk

Aryeh EnglanderMar 8, 2022, 9:37 PM
114 points
12 comments4 min readEA link

How we could stum­ble into AI catastrophe

Holden KarnofskyJan 16, 2023, 2:52 PM
83 points
0 comments31 min readEA link
(www.cold-takes.com)

Amesh Adalja: Pan­demic pathogens

EA GlobalJun 8, 2018, 7:15 AM
11 points
1 comment20 min readEA link
(www.youtube.com)

Differ­en­tial tech­nolog­i­cal de­vel­op­ment

jamesJun 25, 2020, 10:54 AM
37 points
7 comments5 min readEA link

Pres­i­dent Trump as a Global Catas­trophic Risk

HaydnBelfieldNov 18, 2016, 6:02 PM
26 points
16 comments27 min readEA link

How likely is World War III?

Stephen ClareFeb 15, 2022, 3:09 PM
122 points
21 comments22 min readEA link

Stan­ford Ex­is­ten­tial Risks Conference

Jordan Pieters 🔸Apr 21, 2023, 8:32 PM
6 points
0 comments1 min readEA link

[Linkpost] ‘The God­father of A.I.’ Leaves Google and Warns of Danger Ahead

imp4rtial 🔸May 1, 2023, 7:54 PM
43 points
3 comments3 min readEA link
(www.nytimes.com)

The case for re­duc­ing ex­is­ten­tial risk

Benjamin_ToddOct 1, 2017, 8:44 AM
20 points
3 comments1 min readEA link
(80000hours.org)

What we tried

Jan_KulveitMar 21, 2022, 3:26 PM
71 points
8 comments9 min readEA link

Scru­ti­niz­ing AI Risk (80K, #81) - v. quick summary

BenJul 23, 2020, 7:02 PM
10 points
1 comment3 min readEA link

An ap­peal to peo­ple who are smarter than me: please help me clar­ify my think­ing about AI

bethhwAug 5, 2023, 4:38 PM
42 points
21 comments3 min readEA link

‘AI Emer­gency Eject Cri­te­ria’ Survey

tcelferactApr 19, 2023, 9:55 PM
5 points
4 comments1 min readEA link

Merger of Deep­Mind and Google Brain

Greg_Colbourn ⏸️ Apr 20, 2023, 8:16 PM
11 points
12 comments1 min readEA link
(blog.google)

Ap­pli­ca­tions Open: Pivotal 2025 Q3 Re­search Fellowship

Tobias HäberliMar 18, 2025, 1:25 PM
20 points
0 comments2 min readEA link

kpurens’s Quick takes

kpurensApr 11, 2023, 2:10 PM
9 points
2 comments2 min readEA link

The Case for Strong Longtermism

Global Priorities InstituteSep 3, 2019, 1:17 AM
14 points
1 comment3 min readEA link
(globalprioritiesinstitute.org)

Re­search pro­ject idea: Neart­er­mist cost-effec­tive­ness anal­y­sis of nu­clear risk reduction

MichaelA🔸Apr 15, 2023, 2:46 PM
12 points
0 comments3 min readEA link

[Question] Who looked into ex­treme nu­clear melt­downs?

RemmeltSep 1, 2024, 9:38 PM
4 points
12 comments1 min readEA link

Re­search pro­ject idea: Direct and in­di­rect effects of nu­clear fallout

MichaelA🔸Apr 15, 2023, 2:48 PM
12 points
0 comments2 min readEA link

[Question] If your AGI x-risk es­ti­mates are low, what sce­nar­ios make up the bulk of your ex­pec­ta­tions for an OK out­come?

Greg_Colbourn ⏸️ Apr 21, 2023, 11:15 AM
62 points
55 comments1 min readEA link

P(doom|AGI) is high: why the de­fault out­come of AGI is doom

Greg_Colbourn ⏸️ May 2, 2023, 10:40 AM
13 points
28 comments3 min readEA link

Is Bit­coin Danger­ous?

postlibertarianDec 19, 2021, 7:35 PM
14 points
7 comments9 min readEA link

Re­search pro­ject idea: How bad would the worst plau­si­ble nu­clear con­flict sce­nar­ios be?

MichaelA🔸Apr 15, 2023, 2:50 PM
16 points
0 comments3 min readEA link

Toby Ord: Fireside Chat and Q&A

EA GlobalJul 21, 2020, 4:23 PM
14 points
0 comments26 min readEA link
(www.youtube.com)

[Question] Model­ing hu­man­ity’s ro­bust­ness to GCRs?

QubitSwarm99Jun 9, 2022, 5:20 PM
7 points
1 comment2 min readEA link

Com­mon ground for longtermists

Tobias_BaumannJul 29, 2020, 10:26 AM
83 points
8 comments4 min readEA link

Re­search pro­ject idea: Im­pact as­sess­ment of nu­clear-risk-re­lated orgs, pro­grammes, move­ments, etc.

MichaelA🔸Apr 15, 2023, 2:39 PM
13 points
0 comments3 min readEA link

“Ex­is­ten­tial Risk” is badly named and leads to nar­row fo­cus on as­tro­nom­i­cal waste

freedomandutilityAug 22, 2022, 8:25 PM
39 points
2 comments2 min readEA link

[Question] Is an in­crease in at­ten­tion to the idea that ‘suffer­ing is bad’ likely to in­crease ex­is­ten­tial risk?

dotsamJun 30, 2021, 7:41 PM
2 points
6 comments1 min readEA link

Model­ling the odds of re­cov­ery from civ­i­liza­tional collapse

MichaelA🔸Sep 17, 2020, 11:58 AM
41 points
10 comments2 min readEA link

Mike Hue­mer on The Case for Tyranny

Chris LeongJul 16, 2020, 9:57 AM
24 points
5 comments1 min readEA link
(fakenous.net)

The Pen­tagon claims China will likely have 1,500 nu­clear war­heads by 2035

Will AldredDec 12, 2022, 6:12 PM
34 points
3 comments2 min readEA link
(media.defense.gov)

Re­search pro­ject idea: Nu­clear EMPs

MichaelA🔸Apr 15, 2023, 2:43 PM
18 points
1 comment3 min readEA link

Ap­ply to the new Open Philan­thropy Tech­nol­ogy Policy Fel­low­ship!

lukeprogJul 20, 2021, 6:41 PM
78 points
6 comments4 min readEA link

In­creased Availa­bil­ity and Willing­ness for De­ploy­ment of Re­sources for Effec­tive Altru­ism and Long-Termism

Evan_GaensbauerDec 29, 2021, 8:20 PM
46 points
1 comment2 min readEA link

8 pos­si­ble high-level goals for work on nu­clear risk

MichaelA🔸Mar 29, 2022, 6:30 AM
47 points
4 comments16 min readEA link

Re­search pro­ject idea: Cli­mate, agri­cul­tural, and famine effects of nu­clear conflict

MichaelA🔸Apr 15, 2023, 2:35 PM
17 points
2 comments4 min readEA link

Re­search pro­ject idea: Tech­nolog­i­cal de­vel­op­ments that could in­crease risks from nu­clear weapons

MichaelA🔸Apr 15, 2023, 2:28 PM
17 points
0 comments7 min readEA link

A Pin and a Bal­loon: An­thropic Frag­ility In­creases Chances of Ru­n­away Global Warm­ing

turchinSep 11, 2022, 10:22 AM
33 points
25 comments52 min readEA link

My thoughts on nan­otech­nol­ogy strat­egy re­search as an EA cause area

Ben SnodinMay 2, 2022, 9:41 AM
137 points
17 comments33 min readEA link

Re­search pro­ject idea: In­ter­me­di­ate goals for nu­clear risk reduction

MichaelA🔸Apr 15, 2023, 2:25 PM
24 points
0 comments5 min readEA link

Seek­ing EA ex­perts in­ter­ested in the evolu­tion­ary psy­chol­ogy of ex­is­ten­tial risks

Geoffrey MillerOct 23, 2019, 6:19 PM
22 points
1 comment1 min readEA link

Nick Beck­stead: Fireside chat (2020)

EA GlobalNov 21, 2020, 8:12 AM
7 points
0 comments1 min readEA link
(www.youtube.com)

Re­vis­it­ing “Why Global Poverty”

Jeff Kaufman 🔸Jun 1, 2022, 8:20 PM
66 points
0 comments3 min readEA link
(www.jefftk.com)

[linkpost] Peter Singer: The Hinge of History

micJan 16, 2022, 1:25 AM
38 points
8 comments3 min readEA link

Re­port: Food Se­cu­rity in Ar­gentina in the event of an Abrupt Sun­light Re­duc­tion Sce­nario (ASRS)

JorgeTorresCApr 27, 2023, 9:00 PM
66 points
3 comments3 min readEA link
(riesgoscatastroficosglobales.com)

More global warm­ing might be good to miti­gate the food shocks caused by abrupt sun­light re­duc­tion scenarios

Vasco Grilo🔸Apr 29, 2023, 8:24 AM
46 points
39 comments13 min readEA link

Nu­clear risk, its po­ten­tial long-term im­pacts, & do­ing re­search on that: An in­tro­duc­tory talk

MichaelA🔸Apr 10, 2023, 3:26 PM
50 points
2 comments3 min readEA link

The Precipice: In­tro­duc­tion and Chap­ter One

Toby_OrdJan 2, 2021, 7:13 AM
23 points
0 comments1 min readEA link

Lord Martin Rees: an appreciation

HaydnBelfieldOct 24, 2022, 4:11 PM
188 points
19 comments5 min readEA link

Longterm cost-effec­tive­ness of Founders Pledge’s Cli­mate Change Fund

Vasco Grilo🔸Sep 14, 2022, 3:11 PM
36 points
9 comments6 min readEA link

Planned Up­dates to U.S. Reg­u­la­tory Anal­y­sis Meth­ods are Likely Rele­vant to EAs

MHR🔸Apr 7, 2023, 12:36 AM
163 points
6 comments4 min readEA link

CEEALAR: 2024 Update

CEEALARJul 19, 2024, 11:14 AM
116 points
7 comments4 min readEA link

Ge­orge Church, Kevin Esvelt, & Nathan Labenz: Open un­til dan­ger­ous — gene drive and the case for re­form­ing research

EA GlobalJun 2, 2017, 8:48 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

Re­search pro­ject idea: Overview of nu­clear-risk-re­lated pro­jects and stakeholders

MichaelA🔸Apr 15, 2023, 2:40 PM
12 points
0 comments2 min readEA link

Eng­ineered plant pan­demics and so­cietal col­lapse risk

freedomandutilityAug 4, 2023, 5:06 PM
13 points
2 comments1 min readEA link

Cli­mate Change Overview: CERI Sum­mer Re­search Fellowship

hb574Mar 17, 2022, 11:04 AM
33 points
0 comments4 min readEA link

Mor­tal­ity, ex­is­ten­tial risk, and uni­ver­sal ba­sic income

Max GhenisNov 30, 2021, 8:28 AM
14 points
5 comments22 min readEA link

Ques­tions for Reflec­tion on Gaza

gbNov 20, 2023, 6:01 AM
15 points
18 comments2 min readEA link

[Cross-post] A nu­clear war fore­cast is not a coin flip

David JohnstonMar 15, 2022, 4:01 AM
29 points
12 comments3 min readEA link

A list of good heuris­tics that the case for AI X-risk fails

Aaron Gertler 🔸Jul 16, 2020, 9:56 AM
25 points
9 comments2 min readEA link
(www.alignmentforum.org)

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #2 - A Crit­i­cal Look at Model Conclusions

Ben SnodinAug 18, 2020, 10:25 AM
58 points
11 comments17 min readEA link

Hinges and crises

Jan_KulveitMar 17, 2022, 1:43 PM
72 points
6 comments3 min readEA link

Longter­mists Should Work on AI—There is No “AI Neu­tral” Sce­nario

simeon_cAug 7, 2022, 4:43 PM
42 points
62 comments6 min readEA link

A dis­en­tan­gle­ment pro­ject for the nu­clear se­cu­rity cause area

Sarah WeilerJun 3, 2022, 5:29 AM
16 points
0 comments7 min readEA link

[Pod­cast] Tom Moynihan on why prior gen­er­a­tions missed some of the biggest pri­ori­ties of all

Eevee🔹Jun 25, 2021, 3:39 PM
12 points
0 comments1 min readEA link
(80000hours.org)

[Question] Help me un­der­stand this ex­pected value calculation

AndreaSROct 14, 2021, 6:23 AM
15 points
8 comments1 min readEA link

[Question] What’s the GiveDirectly of longter­mism & ex­is­ten­tial risk?

Nathan YoungNov 15, 2021, 11:55 PM
28 points
25 comments1 min readEA link

Pos­si­ble mis­con­cep­tions about (strong) longtermism

JackMMar 9, 2021, 5:58 PM
90 points
43 comments19 min readEA link

A book re­view for “An­i­mal Weapons” and cross-ap­ply­ing the les­sons to x-risk.

Habeeb AbdulMay 30, 2023, 8:24 AM
6 points
0 comments1 min readEA link
(www.super-linear.org)

In­ter­ven­tion Pro­file: Bal­lot Initiatives

Jason SchukraftJan 13, 2020, 3:41 PM
117 points
5 comments42 min readEA link

Should we be spend­ing no less on al­ter­nate foods than AI now?

Denkenberger🔸Oct 29, 2017, 11:28 PM
38 points
9 comments16 min readEA link

Differ­en­tial tech­nol­ogy de­vel­op­ment: preprint on the concept

Hamish_HobbsSep 12, 2022, 1:52 PM
65 points
0 comments2 min readEA link

The Precipice: a risky re­view by a non-EA

Fernando Moreno 🔸Aug 8, 2020, 2:40 PM
14 points
1 comment18 min readEA link

Ap­pli­ca­tions open: Sup­port for tal­ent work­ing on in­de­pen­dent learn­ing, re­search or en­trepreneurial pro­jects fo­cused on re­duc­ing global catas­trophic risks

CEEALARFeb 9, 2024, 1:04 PM
63 points
1 comment2 min readEA link

Re­search pro­ject idea: Pol­ling or mes­sage test­ing re­lated to nu­clear risk re­duc­tion and rele­vant goals/​interventions

MichaelA🔸Apr 15, 2023, 2:44 PM
16 points
1 comment3 min readEA link

What Ques­tions Should We Ask Speak­ers at the Stan­ford Ex­is­ten­tial Risks Con­fer­ence?

kuhanjApr 10, 2021, 12:51 AM
21 points
2 comments2 min readEA link

Nu­clear brinks­man­ship is not a good AI x-risk strategy

titotalMar 30, 2023, 10:07 PM
19 points
8 comments5 min readEA link

Man­i­fund x AI Worldviews

AustinMar 31, 2023, 3:32 PM
32 points
2 comments2 min readEA link
(manifund.org)

“Effec­tive Altru­ism, Longter­mism, and the Prob­lem of Ar­bi­trary Power” by Gwilym David Blunt

WobblyPandaPandaNov 12, 2023, 1:21 AM
22 points
2 comments1 min readEA link
(www.thephilosopher1923.org)

Part 2: AI Safety Move­ment Builders should help the com­mu­nity to op­ti­mise three fac­tors: con­trib­u­tors, con­tri­bu­tions and coordination

PeterSlatteryDec 15, 2022, 10:48 PM
34 points
0 comments6 min readEA link

My at­tempt at ex­plain­ing the case for AI risk in a straight­for­ward way

JulianHazellMar 25, 2023, 4:32 PM
25 points
7 comments18 min readEA link
(muddyclothes.substack.com)

Man­i­fund: What we’re fund­ing (weeks 2-4)

AustinAug 4, 2023, 4:00 PM
65 points
6 comments5 min readEA link
(manifund.substack.com)

Mea­sur­ing AI-Driven Risk with Stock Prices (Su­sana Cam­pos-Mart­ins)

Global Priorities InstituteDec 12, 2024, 2:22 PM
10 points
1 comment4 min readEA link
(globalprioritiesinstitute.org)

The Top AI Safety Bets for 2023: GiveWiki’s Lat­est Recommendations

Dawn DrescherNov 11, 2023, 9:04 AM
11 points
4 comments8 min readEA link

Mone­tary and so­cial in­cen­tives in longter­mist careers

Vaidehi Agarwalla 🔸Sep 23, 2023, 9:03 PM
140 points
5 comments6 min readEA link

[Linkpost] Prospect Magaz­ine—How to save hu­man­ity from extinction

jackvaSep 26, 2023, 7:16 PM
32 points
2 comments1 min readEA link
(www.prospectmagazine.co.uk)

Fu­ture peo­ple might not ex­ist

Indra Gesink 🔸Nov 30, 2022, 7:17 PM
18 points
0 comments4 min readEA link

[Question] What are the stan­dard terms used to de­scribe risks in risk man­age­ment?

Eevee🔹Mar 5, 2022, 4:07 AM
11 points
2 comments1 min readEA link

Ries­gos Catas­trófi­cos Globales needs funding

Jaime SevillaAug 1, 2023, 4:26 PM
98 points
1 comment3 min readEA link

Longter­mism Fund: Au­gust 2023 Grants Report

Michael Townsend🔸Aug 20, 2023, 5:34 AM
81 points
3 comments5 min readEA link

Suc­ces­sif: Join our AI pro­gram to help miti­gate the catas­trophic risks of AI

ClaireBOct 25, 2023, 4:51 PM
15 points
0 comments5 min readEA link

Ten ar­gu­ments that AI is an ex­is­ten­tial risk

Katja_GraceAug 14, 2024, 9:51 PM
30 points
0 comments7 min readEA link

Be­ing at peace with Doom

Johannes C. MayerApr 9, 2023, 3:01 PM
15 points
7 comments4 min readEA link
(www.lesswrong.com)

AMA: Andy We­ber (U.S. As­sis­tant Sec­re­tary of Defense from 2009-2014)

LizkaSep 26, 2023, 9:40 AM
132 points
49 comments1 min readEA link

Nu­clear war tail risk has been ex­ag­ger­ated?

Vasco Grilo🔸Feb 25, 2024, 9:14 AM
48 points
22 comments28 min readEA link

State­ment on Plu­ral­ism in Ex­is­ten­tial Risk Stud­ies

Gideon FutermanAug 16, 2023, 2:29 PM
29 points
46 comments7 min readEA link

Cost-Effec­tive­ness of Foods for Global Catas­tro­phes: Even Bet­ter than Be­fore?

Denkenberger🔸Nov 19, 2018, 9:57 PM
29 points
5 comments10 min readEA link

[Question] Why isn’t there a char­ity eval­u­a­tor for longter­mist pro­jects?

Eevee🔹Jul 29, 2023, 4:30 PM
106 points
44 comments1 min readEA link

When “hu­man-level” is the wrong thresh­old for AI

Ben Millwood🔸Jun 22, 2024, 2:34 PM
38 points
3 comments7 min readEA link

Man­i­fund: what we’re fund­ing (week 1)

AustinJul 15, 2023, 12:28 AM
43 points
11 comments3 min readEA link
(manifund.substack.com)

How I Formed My Own Views About AI Safety

Neel NandaFeb 27, 2022, 6:52 PM
134 points
12 comments14 min readEA link
(www.neelnanda.io)

[Link] GCRI’s Seth Baum re­views The Precipice

Aryeh EnglanderJun 6, 2022, 7:33 PM
21 points
0 comments1 min readEA link

AI strat­egy given the need for good reflection

Owen Cotton-BarrattMar 18, 2024, 12:48 AM
40 points
1 comment5 min readEA link

U.S. Has De­stroyed the Last of Its Once-Vast Chem­i­cal Weapons Arsenal

JMonty🔸Jul 18, 2023, 1:47 AM
19 points
2 comments1 min readEA link
(www.nytimes.com)

Un­der­stand­ing prob­lems with U.S.-China hotlines

christian.rJun 24, 2024, 1:39 PM
11 points
0 comments1 min readEA link
(thebulletin.org)

Five Years of Re­think Pri­ori­ties: Im­pact, Fu­ture Plans, Fund­ing Needs (July 2023)

Rethink PrioritiesJul 18, 2023, 3:59 PM
110 points
3 comments16 min readEA link

We’re (sur­pris­ingly) more pos­i­tive about tack­ling bio risks: out­comes of a survey

SanjayAug 25, 2020, 9:14 AM
58 points
5 comments11 min readEA link

[Question] MSc in Risk and Disaster Science? (UCL) - Does this fit the EA path?

yazanasadMay 25, 2021, 3:33 AM
10 points
6 comments1 min readEA link

Con­cepts of ex­is­ten­tial catas­tro­phe (Hilary Greaves)

Global Priorities InstituteNov 9, 2023, 5:42 PM
41 points
0 comments2 min readEA link
(globalprioritiesinstitute.org)

[Link post] Co­or­di­na­tion challenges for pre­vent­ing AI conflict

stefan.torgesMar 9, 2021, 9:39 AM
58 points
0 comments1 min readEA link
(longtermrisk.org)

An­nounc­ing the Ex­is­ten­tial In­foSec Forum

calebpJul 7, 2023, 9:08 PM
90 points
1 comment2 min readEA link

AIS Nether­lands is look­ing for a Found­ing Ex­ec­u­tive Direc­tor (EOI form)

gergoMar 19, 2025, 9:24 AM
35 points
4 comments4 min readEA link

“Aligned with who?” Re­sults of sur­vey­ing 1,000 US par­ti­ci­pants on AI values

Holly MorganMar 21, 2023, 10:07 PM
41 points
0 comments2 min readEA link
(www.lesswrong.com)

Guardrails vs Goal-di­rect­ed­ness in AI Alignment

freedomandutilityDec 30, 2023, 12:58 PM
13 points
2 comments1 min readEA link

Thoughts on yes­ter­day’s UN Se­cu­rity Coun­cil meet­ing on AI

Greg_Colbourn ⏸️ Jul 19, 2023, 4:46 PM
31 points
2 comments1 min readEA link

So­nia Ben Oua­grham-Gorm­ley on Bar­ri­ers to Bioweapons

Vasco Grilo🔸Feb 15, 2024, 5:58 PM
21 points
0 comments1 min readEA link
(hearthisidea.com)

AMA: The new Open Philan­thropy Tech­nol­ogy Policy Fellowship

lukeprogJul 26, 2021, 3:11 PM
38 points
14 comments1 min readEA link

Catas­trophic rec­t­an­gles—vi­su­al­is­ing catas­trophic risks

Rémi TAug 22, 2021, 9:27 PM
33 points
3 comments5 min readEA link

Great power con­flict—prob­lem pro­file (sum­mary and high­lights)

Stephen ClareJul 7, 2023, 2:40 PM
110 points
6 comments5 min readEA link
(80000hours.org)

Toby Ord: Q&A (2020)

EA GlobalJun 13, 2020, 8:17 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

Rea­sons for op­ti­mism about mea­sur­ing malev­olence to tackle x- and s-risks

Jamie_HarrisApr 2, 2024, 10:26 AM
85 points
12 comments8 min readEA link

How bad could a war get?

Stephen ClareNov 4, 2022, 9:25 AM
130 points
11 comments9 min readEA link

[Question] Will the vast ma­jor­ity of tech­nolog­i­cal progress hap­pen in the longterm fu­ture?

Vasco Grilo🔸Jul 8, 2023, 8:40 AM
8 points
0 comments2 min readEA link

An­nounc­ing the Pivotal Re­search Fel­low­ship – Ap­ply Now!

Tobias HäberliApr 3, 2024, 5:30 PM
51 points
5 comments2 min readEA link

An­nounc­ing Man­i­fund Regrants

AustinJul 5, 2023, 7:42 PM
217 points
51 comments4 min readEA link
(manifund.org)

On the Differ­ences Between Eco­mod­ernism and Effec­tive Altruism

PeterSlatteryDec 6, 2022, 1:21 AM
38 points
3 comments1 min readEA link
(thebreakthrough.org)

19 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan, Feb & Mar 2020 up­date)

HaydnBelfieldApr 8, 2020, 1:19 PM
13 points
0 comments12 min readEA link

Re­think­ing longter­mism and global development

Eevee🔹Sep 2, 2022, 5:28 AM
10 points
2 comments8 min readEA link
(sunyshore.substack.com)

A Gen­tle In­tro­duc­tion to Risk Frame­works Beyond Forecasting

pending_survivalApr 11, 2024, 9:15 AM
81 points
4 comments27 min readEA link

The GiveWiki’s Top Picks in AI Safety for the Giv­ing Sea­son of 2023

Dawn DrescherDec 7, 2023, 9:23 AM
26 points
0 comments3 min readEA link
(impactmarkets.substack.com)

World fed­er­al­ism and EA

Eevee🔹Jul 14, 2021, 5:53 AM
47 points
4 comments1 min readEA link

Notes on nukes, IR, and AI from “Arse­nals of Folly” (and other books)

tlevinSep 4, 2023, 7:02 PM
21 points
2 comments6 min readEA link

How Re­think Pri­ori­ties’ Re­search could in­form your grantmaking

kierangreig🔸Oct 4, 2023, 6:24 PM
59 points
0 comments2 min readEA link

A re­sponse to Michael Plant’s re­view of What We Owe The Future

JackMOct 4, 2023, 11:40 PM
61 points
14 comments10 min readEA link

An­nounc­ing the EA Archive

Aaron BergmanJul 6, 2023, 1:49 PM
70 points
18 comments2 min readEA link

We are fight­ing a shared bat­tle (a call for a differ­ent ap­proach to AI Strat­egy)

Gideon FutermanMar 16, 2023, 2:37 PM
59 points
11 comments15 min readEA link

What Re­think Pri­ori­ties Gen­eral Longter­mism Team Did in 2022, and Up­dates in Light of the Cur­rent Situation

LinchDec 14, 2022, 1:37 PM
162 points
9 comments19 min readEA link

Start­ing the sec­ond Green Revolution

freedomandutilityJun 29, 2023, 12:23 PM
30 points
3 comments1 min readEA link

Vi­talik: Cryp­toe­co­nomics and X-Risk Re­searchers Should Listen to Each Other More

Emerson SpartzNov 21, 2021, 6:50 PM
56 points
3 comments5 min readEA link

Juan B. Gar­cía Martínez on tack­ling many causes at once and his jour­ney into EA

Amber DawnJun 30, 2023, 1:48 PM
92 points
3 comments8 min readEA link
(contemplatonist.substack.com)

Jaan Tal­linn: Fireside chat (2020)

EA GlobalNov 21, 2020, 8:12 AM
7 points
0 comments1 min readEA link
(www.youtube.com)

Up­dates on the EA catas­trophic risk land­scape

Benjamin_ToddMay 6, 2024, 4:52 AM
194 points
46 comments2 min readEA link

Ap­ply to Spring 2024 policy in­tern­ships (we can help)

ESOct 4, 2023, 2:45 PM
26 points
2 comments1 min readEA link

Int’l agree­ments to spend % of GDP on global pub­lic goods

Hauke HillebrandtNov 22, 2020, 10:33 AM
18 points
1 comment1 min readEA link

AGI risk: analo­gies & arguments

technicalitiesMar 23, 2021, 1:18 PM
31 points
3 comments8 min readEA link
(www.gleech.org)

Tips for Ad­vanc­ing GCR and Food Re­silience Policy

Stan PinsentSep 6, 2024, 11:38 AM
18 points
0 comments4 min readEA link

Re­sponse to Re­cent Crit­i­cisms of Longtermism

abDec 13, 2021, 1:36 PM
249 points
31 comments28 min readEA link

[Question] Is there ev­i­dence that recom­mender sys­tems are chang­ing users’ prefer­ences?

zdgroffApr 12, 2021, 7:11 PM
60 points
15 comments1 min readEA link

MIT Fu­tureTech are hiring for a Tech­ni­cal As­so­ci­ate role

PeterSlatterySep 9, 2024, 8:14 PM
9 points
6 comments3 min readEA link

Some EA Fo­rum Posts I’d like to write

LinchFeb 23, 2021, 5:27 AM
100 points
10 comments5 min readEA link

A ty­pol­ogy of s-risks

Tobias_BaumannDec 21, 2018, 6:23 PM
26 points
1 comment1 min readEA link
(s-risks.org)

Teruji Thomas, ‘The Asym­me­try, Uncer­tainty, and the Long Term’

PabloNov 5, 2019, 8:24 PM
43 points
6 comments1 min readEA link
(globalprioritiesinstitute.org)

More than Earth War­riors: The Di­verse Roles of Geo­scien­tists in Effec­tive Altruism

Christopher ChanAug 31, 2023, 6:30 AM
56 points
5 comments16 min readEA link

Notes on “The Poli­tics of Cri­sis Man­age­ment” (Boin et al., 2016)

imp4rtial 🔸Jan 30, 2022, 10:51 PM
31 points
1 comment17 min readEA link

Mo­ral er­ror as an ex­is­ten­tial risk

William_MacAskillMar 17, 2025, 4:22 PM
75 points
3 comments11 min readEA link

Fund biose­cu­rity officers at universities

freedomandutilityOct 31, 2022, 11:49 AM
13 points
3 comments1 min readEA link

Pre­sen­ta­tion—The Un­jour­nal: Bridg­ing the gap be­tween EA and academia

david_reinsteinJan 22, 2024, 7:49 PM
14 points
2 comments4 min readEA link
(www.youtube.com)

My Cur­rent Claims and Cruxes on LLM Fore­cast­ing & Epistemics

Ozzie GooenJun 26, 2024, 12:40 AM
46 points
7 comments24 min readEA link

Sen­tinel’s Global Risks Weekly Roundup #11/​2025. Trump in­vokes Alien Ene­mies Act, Chi­nese in­va­sion barges de­ployed in ex­er­cise.

NunoSempereMar 17, 2025, 7:37 PM
40 points
0 comments6 min readEA link
(blog.sentinel-team.org)

“Safety Cul­ture for AI” is im­por­tant, but isn’t go­ing to be easy

DavidmanheimJun 26, 2023, 11:27 AM
53 points
0 comments2 min readEA link
(papers.ssrn.com)

An­nounc­ing the 2023 CLR Sum­mer Re­search Fellowship

stefan.torgesMar 17, 2023, 12:11 PM
81 points
0 comments3 min readEA link

ProMED, plat­form which alerted the world to Covid, might col­lapse—can EA donors fund it?

freedomandutilityAug 4, 2023, 4:42 PM
41 points
4 comments1 min readEA link

Select examples of adverse selection in longtermist grantmaking – Linch, Aug 23, 2023 (201 points, 32 comments, 4 min read)
Dispelling the Anthropic Shadow (Teruji Thomas) – Global Priorities Institute, Oct 16, 2024 (11 points, 1 comment, 3 min read) (globalprioritiesinstitute.org)
[Question] Is there anything like “green bonds” for x-risk mitigation? – Ramiro, Jun 30, 2020 (21 points, 1 comment, 1 min read)
Open Letter Against Reckless Nuclear Escalation and Use – Vasco Grilo🔸, Nov 3, 2022 (10 points, 2 comments, 1 min read) (futureoflife.org)
[Question] Are social media algorithms an existential risk? – Barry Grimes, Sep 15, 2020 (24 points, 13 comments, 1 min read)
Reflect on Your Career Aptitudes (Exercise) – Akash, Apr 10, 2022 (16 points, 1 comment, 2 min read)
The Charlemagne Effect: The Longtermist Case For Neartermism – Reed Shafer-Ray, Jul 25, 2022 (15 points, 7 comments, 29 min read)
Economic inequality and the long-term future – Global Priorities Institute, Apr 30, 2021 (11 points, 0 comments, 4 min read) (globalprioritiesinstitute.org)
You won’t solve alignment without agent foundations – MikhailSamin, Nov 6, 2022 (14 points, 0 comments, 1 min read)
Longtermist Implications of the Existence Neutrality Hypothesis – Maxime Riché 🔸, Mar 20, 2025 (19 points, 0 comments, 21 min read)
Could realistic depictions of catastrophic AI risks effectively reduce said risks? – Matthew Barber, Aug 17, 2022 (26 points, 11 comments, 2 min read)
AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death – titotal, Sep 22, 2022 (49 points, 11 comments, 15 min read)
Superforecasting Long-Term Risks and Climate Change – LuisEUrtubey, Aug 19, 2022 (48 points, 0 comments, 2 min read)
Birth rates and civilisation doom loop – deus777, Nov 18, 2022 (−40 points, 1 comment, 2 min read)
[Question] How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles? – Jordan Arel, Jul 27, 2022 (29 points, 6 comments, 1 min read)
[Question] What’s the likelihood of irrecoverable civilizational collapse if 90% of the population dies? – simeon_c, Aug 7, 2022 (21 points, 3 comments, 1 min read)
[Video] How having Fast Fourier Transforms sooner could have helped with Nuclear Disarmament—Veritasium – mako yass, Nov 3, 2022 (12 points, 1 comment, 1 min read) (www.youtube.com)
Current Estimates for Likelihood of X-Risk? – rhys_lindmark, Aug 6, 2018 (24 points, 23 comments, 1 min read)
Japan AI Alignment Conference – ChrisScammell, Mar 10, 2023 (17 points, 2 comments, 1 min read) (www.conjecture.dev)
Anthropic: Core Views on AI Safety: When, Why, What, and How – jonmenaster, Mar 9, 2023 (107 points, 6 comments, 22 min read) (www.anthropic.com)
My notes on: A Very Rational End of the World | Thomas Moynihan – Vasco Grilo🔸, Jun 20, 2022 (13 points, 1 comment, 5 min read)
A Roundtable for Safe AI (RSAI)? – Lara_TH, Mar 9, 2023 (9 points, 0 comments, 4 min read)
Fake Meat and Real Talk 1 - Are We All Gonna Die? Yudkowsky and the Dangers of AI (Please RSVP) – David N, Mar 8, 2023 (11 points, 2 comments, 1 min read)
We can’t put numbers on everything and trying to do so weakens our collective epistemics – ConcernedEAs, Mar 8, 2023 (9 points, 0 comments, 11 min read)
[Question] A bill to massively expand NSF to tech domains. What’s the relevance for x-risk? – EdoArad, Jul 12, 2020 (22 points, 4 comments, 1 min read)
Civilization Recovery Kits – Soof Golan, Sep 21, 2022 (25 points, 9 comments, 2 min read)
My Cause Selection: Dave Denkenberger – Denkenberger🔸, Aug 16, 2015 (13 points, 7 comments, 3 min read)
New Artificial Intelligence quiz: can you beat ChatGPT? – AndreFerretti, Mar 3, 2023 (29 points, 3 comments, 1 min read)
Who will be in charge once alignment is achieved? – trurl, Dec 16, 2022 (8 points, 2 comments, 1 min read)
AGI as a Black Swan Event – Stephen McAleese, Dec 4, 2022 (5 points, 2 comments, 7 min read) (www.lesswrong.com)
Advice on communicating in and around the biosecurity policy community – ES, Mar 2, 2023 (225 points, 27 comments, 6 min read)
Send funds to earthquake survivors in Turkey via GiveDirectly – GiveDirectly, Mar 2, 2023 (38 points, 1 comment, 3 min read)
[Question] What are some sources related to big-picture AI strategy? – Jacob Watts🔸, Mar 2, 2023 (9 points, 4 comments, 1 min read)
[Question] How worried should I be about a childless Disneyland? – Will Bradshaw, Oct 28, 2019 (31 points, 8 comments, 1 min read)
Safe Stasis Fallacy – Davidmanheim, Feb 5, 2024 (23 points, 4 comments, 1 min read)
Seeking input on a list of AI books for broader audience – Darren McKee, Feb 27, 2023 (49 points, 14 comments, 5 min read)
Apply to the Stanford Existential Risks Conference! (April 17-18) – kuhanj, Mar 26, 2021 (26 points, 2 comments, 1 min read)
Insects raised for food and feed — global scale, practices, and policy – abrahamrowe, Jun 29, 2020 (95 points, 13 comments, 29 min read)
Climate change, geoengineering, and existential risk – John G. Halstead, Mar 20, 2018 (20 points, 8 comments, 1 min read)
What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? – somethoughts, Mar 7, 2021 (0 points, 11 comments, 3 min read)
[Question] AI Ethical Committee – eaaicommittee, Mar 1, 2022 (8 points, 0 comments, 1 min read)
The NPT: Learning from a Longtermist Success [Links!] – DannyBressler, May 20, 2021 (66 points, 6 comments, 2 min read)
Summary of Deep Time Reckoning by Vincent Ialenti – vinegar10@gmail.com, Oct 31, 2022 (10 points, 1 comment, 10 min read)
Beyond Astronomical Waste – Wei Dai, Dec 27, 2018 (25 points, 2 comments, 1 min read) (www.lesswrong.com)
The Human Condition: A Crucial Component of Existential Risk Calculations – Phil Tanny, Aug 28, 2022 (−10 points, 5 comments, 1 min read)
Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast – Jhrosenberg, Mar 26, 2022 (129 points, 13 comments, 18 min read)
EA, Psychology & AI Safety Research – Sam Ellis, May 26, 2022 (28 points, 3 comments, 6 min read)
[Notes] Could climate change make Earth uninhabitable for humans? – Ben, Jan 14, 2020 (40 points, 7 comments, 14 min read)
Assessing SERI/CHERI/CERI summer program impact by surveying fellows – L Rudolf L, Sep 26, 2022 (102 points, 11 comments, 15 min read)
Shallow Report on Nuclear War (Arsenal Limitation) – Joel Tan🔸, Feb 21, 2023 (44 points, 13 comments, 29 min read)
Interview with Roman Yampolskiy about AGI on The Reality Check – Darren McKee, Feb 18, 2023 (27 points, 0 comments, 1 min read) (www.trcpodcast.com)
[Podcast] Simon Beard on Parfit, Climate Change, and Existential Risk – finm, Jan 28, 2021 (11 points, 0 comments, 1 min read) (hearthisidea.com)
Pandemic prevention in German parties’ federal election platforms – tilboy, Sep 19, 2021 (17 points, 2 comments, 6 min read)
Ok Doomer! SRM and Catastrophic Risk Podcast – Gideon Futerman, Aug 20, 2022 (10 points, 4 comments, 1 min read) (open.spotify.com)
DeepMind’s generalist AI, Gato: A non-technical explainer – frances_lorenz, May 16, 2022 (128 points, 13 comments, 6 min read)
Forethought: A new AI macrostrategy group – Amrit Sidhu-Brar 🔸, Mar 11, 2025 (166 points, 6 comments, 3 min read)
The Importance of AI Alignment, explained in 5 points – Daniel_Eth, Feb 11, 2023 (50 points, 4 comments, 13 min read)
Speedrun: Demonstrate the ability to rapidly scale food production in the case of nuclear winter – Buhl, Feb 13, 2023 (39 points, 2 comments, 16 min read)
High risk, low reward: A challenge to the astronomical value of existential risk mitigation (David Thorstad) – Global Priorities Institute, Jul 4, 2023 (32 points, 3 comments, 3 min read) (globalprioritiesinstitute.org)
EA on nuclear war and expertise – bean, Aug 28, 2022 (154 points, 17 comments, 4 min read)
[Question] What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk? – Ben Snodin, Feb 11, 2023 (118 points, 18 comments, 1 min read)
‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting – Froolow, Oct 18, 2022 (111 points, 63 comments, 39 min read)
AI Safety Endgame Stories – IvanVendrov, Sep 28, 2022 (31 points, 1 comment, 1 min read)
Volcanic winters have happened before—should we prepare for the next one? – Stan Pinsent, Aug 7, 2024 (18 points, 1 comment, 3 min read)
Announcing the Legal Priorities Project Writing Competition: Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks – Mackenzie, Jun 7, 2022 (104 points, 8 comments, 9 min read)
Launching The Collective Intelligence Project: Whitepaper and Pilots – jasmine_wang, Feb 6, 2023 (38 points, 8 comments, 2 min read) (cip.org)
Economist: “What’s the worst that could happen”. A positive, sharable but vague article on Existential Risk – Nathan Young, Jul 8, 2020 (12 points, 3 comments, 2 min read)
Overreacting to current events can be very costly – Kelsey Piper, Oct 4, 2022 (281 points, 68 comments, 4 min read)
What Is The Most Effective Way To Look At Existential Risk? – Phil Tanny, Aug 26, 2022 (−2 points, 2 comments, 2 min read)
OpenAI board received letter warning of powerful AI – JordanStone, Nov 23, 2023 (26 points, 2 comments, 1 min read) (www.reuters.com)
An entire category of risks is undervalued by EA [Summary of previous forum post] – Richard R, Sep 5, 2022 (79 points, 5 comments, 5 min read)
Four reasons I find AI safety emotionally compelling – Kat Woods, Jun 28, 2022 (32 points, 5 comments, 4 min read)
[Podcast] Thomas Moynihan on the History of Existential Risk – finm, Mar 22, 2021 (26 points, 2 comments, 1 min read) (hearthisidea.com)
Lessons from Running Stanford EA and SERI – kuhanj, Aug 20, 2021 (267 points, 26 comments, 23 min read)
The Existential Risk Alliance is hiring multiple Cause Area Leads – Rethink Priorities, Feb 2, 2023 (20 points, 0 comments, 4 min read) (careers.rethinkpriorities.org)
A full syllabus on longtermism – jtm, Mar 5, 2021 (110 points, 13 comments, 8 min read)
Criticism of the main framework in AI alignment – Michele Campolo, Aug 31, 2022 (42 points, 4 comments, 7 min read)
Prometheus Unleashed: Making sense of information hazards – basil.icious, Feb 15, 2023 (0 points, 0 comments, 4 min read) (basil08.github.io)
Impact Academy is hiring an AI Governance Lead—more information, upcoming Q&A and $500 bounty – Lowe Lundin, Aug 29, 2023 (9 points, 1 comment, 1 min read)
‘Force multipliers’ for EA research – Craig Drayton, Jun 18, 2022 (18 points, 7 comments, 4 min read)
[Linkpost] Human-narrated audio version of “Is Power-Seeking AI an Existential Risk?” – Joe_Carlsmith, Jan 31, 2023 (9 points, 0 comments, 1 min read)
Causal Network Model III: Findings – Alex_Barry, Nov 22, 2017 (7 points, 3 comments, 9 min read)
There are no people to be effectively altruistic for on a dead planet: EA funding of projects without conducting Environmental Impact Assessments (EIAs), Health and Safety Assessments (HSAs) and Life Cycle Assessments (LCAs) = catastrophe – Deborah W.A. Foulkes, May 26, 2022 (12 points, 22 comments, 8 min read)
Scalable longtermist projects: Speedrun series – Introduction – Buhl, Feb 7, 2023 (63 points, 2 comments, 5 min read)
Space colonization and the closed material economy – Arturo Macias, Feb 2, 2023 (2 points, 0 comments, 2 min read)
Legal Priorities Research: A Research Agenda – jonasschuett, Jan 6, 2021 (58 points, 4 comments, 1 min read)
Why Billionaires Will Not Survive an AGI Extinction Event – funnyfranco, Mar 13, 2025 (1 point, 0 comments, 14 min read)
Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens? – Jam Kraprayoon, Feb 3, 2023 (111 points, 3 comments, 10 min read)
How Roodman’s GWP model translates to TAI timelines – kokotajlod, Nov 16, 2020 (22 points, 0 comments, 2 min read)
RESILIENCER Workshop Report on Solar Radiation Modification Research and Existential Risk Released – Gideon Futerman, Feb 3, 2023 (24 points, 0 comments, 3 min read)
I No Longer Feel Comfortable in EA – disgruntled_ea, Feb 5, 2023 (2 points, 29 comments, 1 min read)
Critiques of prominent AI safety labs: Redwood Research – Omega, Mar 31, 2023 (339 points, 91 comments, 20 min read)
What Are The Biggest Threats To Humanity? (A Happier World video) – Jeroen Willems🔸, Jan 31, 2023 (17 points, 1 comment, 15 min read)
Post-Mortem: McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss – McGill EA x Law, Jan 31, 2023 (11 points, 0 comments, 4 min read)
Technology is Power: Raising Awareness Of Technological Risks – Marc Wong, Feb 9, 2023 (3 points, 0 comments, 2 min read)
#213 – AI causing a “century in a decade” — and how we’re completely unprepared (Will MacAskill on The 80,000 Hours Podcast) – 80000_Hours, Mar 11, 2025 (24 points, 0 comments, 22 min read)
[Question] Why are we not talking more about the metacrisis perspective on existential risk? – Alexander Herwix 🔸, Jan 29, 2023 (52 points, 44 comments, 1 min read)
Vacuum Decay: Expert Survey Results – Jess_Riedel, Mar 13, 2025 (68 points, 3 comments, 13 min read)
Speculative scenarios for climate-caused existential catastrophes – vincentzh, Jan 27, 2023 (26 points, 2 comments, 4 min read)
FYI there is a German institute studying sociological aspects of existential risk – Max Görlitz, Feb 12, 2023 (77 points, 10 comments, 1 min read)
How to Take Over the Universe (in Three Easy Steps) – Writer, Oct 18, 2022 (14 points, 2 comments, 12 min read) (youtu.be)
Philanthropy to the Right of Boom [Founders Pledge] – christian.r, Feb 14, 2023 (83 points, 11 comments, 20 min read)
Biosecurity newsletters you should subscribe to – Swan 🔸, Jan 29, 2023 (104 points, 14 comments, 1 min read)
“How to Escape from the Simulation”—Seeds of Science call for reviewers – rogersbacon1, Jan 26, 2023 (7 points, 0 comments, 1 min read)
[Question] Huh. Bing thing got me real anxious about AI. Resources to help with that please? – Arvin, Feb 15, 2023 (2 points, 7 comments, 1 min read)
Select Challenges with Criticism & Evaluation Around EA – Ozzie Gooen, Feb 10, 2023 (111 points, 5 comments, 6 min read) (quri.substack.com)
Summit on Existential Security 2023 – Amy Labenz, Jan 27, 2023 (120 points, 6 comments, 2 min read)
Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration – Peter Rautenbach, Nov 18, 2022 (60 points, 3 comments, 50 min read)
Highest priority threat: infinite torture – KArax, Jan 26, 2023 (−39 points, 1 comment, 9 min read)
Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge] – christian.r, Jan 24, 2023 (83 points, 10 comments, 26 min read) (docs.google.com)
Preserving our heritage: Building a movement and a knowledge ark for current and future generations – rnk8, Nov 30, 2023 (−9 points, 0 comments, 12 min read)
Military support in a global catastrophe – Tom Gardiner, Jan 24, 2023 (37 points, 0 comments, 3 min read)
[Question] Can we estimate the expected value of human’s future life (in 500 years)? – jackchang110, Feb 25, 2023 (5 points, 5 comments, 1 min read)
New popular science book on x-risks: “End Times” – Hauke Hillebrandt, Oct 1, 2019 (17 points, 2 comments, 2 min read)
[Question] Which is more important for reducing s-risks, research on AI sentience or animal welfare? – jackchang110, Feb 25, 2023 (9 points, 0 comments, 1 min read)
[Question] I’m interviewing Bear Braumoeller about ‘Only The Dead: The Persistence of War in the Modern Age’. What should I ask? – Robert_Wiblin, Aug 19, 2022 (12 points, 2 comments, 1 min read)
Jaime Yassif: Reducing global catastrophic biological risks – EA Global, Oct 25, 2020 (8 points, 0 comments, 1 min read) (www.youtube.com)
The Next Pandemic Could Be Worse, What Can We Do? (A Happier World video) – Jeroen Willems🔸, Dec 21, 2020 (37 points, 6 comments, 1 min read)
French 2D explainer videos on longtermism (English subtitles) – Gaetan_Selle 🔷, Feb 27, 2023 (20 points, 0 comments, 1 min read)
Don’t Be Comforted by Failed Apocalypses – ColdButtonIssues, May 17, 2022 (20 points, 13 comments, 1 min read)
Lecture Videos from Cambridge Conference on Catastrophic Risk – HaydnBelfield, Apr 23, 2019 (15 points, 3 comments, 1 min read)
ChatGPT not so clever or not so artificial as hyped to be? – Haris Shekeris, Mar 2, 2023 (−7 points, 2 comments, 1 min read)
Existential Risk of Misaligned Intelligence Augmentation (Particularly Using High-Bandwidth BCI Implants) – Damian Gorski, Jan 24, 2023 (1 point, 0 comments, 9 min read)
Resilience Via Fragmented Power – steve6320, Jul 14, 2022 (2 points, 0 comments, 6 min read)
A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation) – Isaac Heron, Feb 5, 2024 (53 points, 4 comments, 16 min read) (acrobat.adobe.com)
Joscha Bach on Synthetic Intelligence [annotated] – Roman Leventov, Mar 2, 2023 (8 points, 0 comments, 9 min read) (www.jimruttshow.com)
Distillation of The Offense-Defense Balance of Scientific Knowledge – Arjun Yadav, Aug 12, 2022 (17 points, 0 comments, 2 min read)
New report on how much computational power it takes to match the human brain (Open Philanthropy) – Aaron Gertler 🔸, Sep 15, 2020 (45 points, 1 comment, 18 min read) (www.openphilanthropy.org)
[Question] Recent paper on climate tipping points – jackva, Mar 2, 2023 (22 points, 7 comments, 1 min read)
Help me to understand AI alignment! – britomart, Jan 18, 2023 (3 points, 12 comments, 1 min read)
The Journal of Dangerous Ideas – rogersbacon1, Feb 3, 2024 (−26 points, 1 comment, 5 min read) (www.secretorum.life)
Jan Kirchner on AI Alignment – birtes, Jan 17, 2023 (5 points, 0 comments, 1 min read)
[Question] Mathematical models of Ethics – Victor-SB, Mar 8, 2023 (6 points, 1 comment, 1 min read)
EA relevant Foresight Institute Workshops in 2023: WBE & AI safety, Cryptography & AI safety, XHope, Space, and Atomically Precise Manufacturing – elteerkers, Jan 16, 2023 (20 points, 1 comment, 3 min read)
How would you estimate the value of delaying AGI by 1 day, in marginal donations to GiveWell? – AnonymousTurtle, Dec 16, 2022 (30 points, 19 comments, 2 min read)
[Question] Donating against Short Term AI risks – Jan-Willem, Nov 16, 2020 (6 points, 10 comments, 1 min read)
What is a time series forecasting tool? – Jack Kevin, Jan 12, 2023 (−5 points, 0 comments, 1 min read)
McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss – McGill EA x Law, Jan 10, 2023 (3 points, 0 comments, 1 min read)
Taylor Swift’s “long story short” Is Actually About Effective Altruism and Longtermism (PARODY) – shepardspie, Jul 23, 2021 (34 points, 12 comments, 7 min read)
How to make climate activists care for other existential risks – ExponentialDragon, Mar 12, 2023 (22 points, 7 comments, 2 min read)
[Creative writing contest] The sorcerer in chains – Swimmer, Oct 30, 2021 (17 points, 0 comments, 31 min read)
ea.domains—Domains Free to a Good Home – plex, Jan 12, 2023 (48 points, 8 comments, 4 min read)
Two positions at Non-Trivial: Enable young people to tackle the world’s most pressing problems – Peter McIntyre, Oct 17, 2023 (24 points, 4 comments, 5 min read) (www.non-trivial.org)
[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously? – Jordan Arel, Jan 10, 2023 (11 points, 8 comments, 1 min read)
Overview of the Pathogen Biosurveillance Landscape – Brianna Gopaul, Jan 9, 2023 (54 points, 4 comments, 20 min read)
The Silent War: AGI-on-AGI Warfare and What It Means For Us – funnyfranco, Mar 15, 2025 (4 points, 0 comments, 22 min read)
Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization – Maxime Riché 🔸, Mar 17, 2025 (17 points, 0 comments, 12 min read)
Nines of safety: Terence Tao’s proposed unit of measurement of risk – anson, Dec 12, 2021 (41 points, 17 comments, 4 min read)
Technological Bottlenecks for PCR, LAMP, and Metagenomics Sequencing – Ziyue Zeng, Jan 9, 2023 (39 points, 0 comments, 17 min read)
Public Cognitive Dissonance About Existential Risk Is Terrifying – Evan_Gaensbauer, Aug 22, 2023 (20 points, 2 comments, 4 min read)
Lying is Cowardice, not Strategy – Connor Leahy, Oct 25, 2023 (−5 points, 15 comments, 5 min read) (cognition.cafe)
[Linkpost] Shorter version of report on existential risk from power-seeking AI – Joe_Carlsmith, Mar 22, 2023 (49 points, 1 comment, 1 min read)
Announcing the Swiss Existential Risk Initiative (CHERI) 2023 Research Fellowship – Tobias Häberli, Mar 27, 2023 (32 points, 0 comments, 2 min read)
[Question] What longtermist projects would you like to see implemented? – Buhl, Mar 28, 2023 (55 points, 6 comments, 1 min read)
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky – jacquesthibs, Mar 29, 2023 (212 points, 75 comments, 3 min read) (time.com)
Longtermism and shorttermism can disagree on nuclear war to stop advanced AI – David Johnston, Mar 30, 2023 (2 points, 0 comments, 1 min read)
Arguments in support of efforts to reduce existential risk (in Japanese) – EA Japan, Aug 4, 2023 (4 points, 0 comments, 2 min read)
[Question] Can AI safely exist at all? – Hayven Frienby, Nov 27, 2023 (6 points, 7 comments, 2 min read)
Announcing EA Virtual Programs Pilot Biosecurity Book Club – JMonty🔸, Sep 27, 2023 (24 points, 0 comments, 1 min read)
De-emphasise alignment, emphasise restraint – EuanMcLean, Feb 4, 2025 (19 points, 2 comments, 7 min read)
Pessimism about AI Safety – Max_He-Ho, Apr 2, 2023 (5 points, 0 comments, 25 min read) (www.lesswrong.com)
Stuxnet, not Skynet: Humanity’s disempowerment by AI – Roko, Apr 4, 2023 (11 points, 0 comments, 7 min read)
Pillars to Convergence – Phlobton, Apr 1, 2023 (1 point, 0 comments, 8 min read)
Preliminary investigations on if STEM and EA communities could benefit from more overlap – elteerkers, Apr 11, 2023 (31 points, 17 comments, 8 min read)
Food security and catastrophic famine risks—Managing complexity and climate – Michael Hinge, Apr 5, 2023 (26 points, 0 comments, 23 min read)
[Question] Should we publish arguments for the preservation of humanity? – Jeremy, Apr 7, 2023 (8 points, 4 comments, 1 min read)
Updates from Campaign for AI Safety – Jolyn Khoo, Sep 27, 2023 (16 points, 0 comments, 2 min read) (www.campaignforaisafety.org)
A website you can share with Christians to get them on board with regulating AI – JonCefalu, Apr 8, 2023 (−4 points, 8 comments, 1 min read) (jesus-the-antichrist.com)
“Guardianes de Derecho” Podcast: Highlighting the role of law in Managing Global Catastrophic Risks to LatAm law students – Alba del Valle Moreno Salazar, Aug 13, 2024 (7 points, 0 comments, 9 min read)
A New Model for Compute Center Verification – Damin Curtis🔹, Oct 10, 2023 (21 points, 2 comments, 5 min read)
[Question] Who here knows?: Cryptography [Answered] – No longer EA-affiliated, Sep 9, 2023 (6 points, 3 comments, 1 min read)
Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation – soroushjp, Nov 7, 2023 (10 points, 0 comments, 2 min read) (arxiv.org)
[Question] Curious if GWWC takes into account existential risk probabilities in calculating impact of recurring donors. – Phib, Apr 10, 2023 (14 points, 4 comments, 1 min read)
Hiring Retrospective: ERA Fellowship 2023 – OscarD🔸, Aug 5, 2023 (62 points, 16 comments, 6 min read)
Measuring artificial intelligence on human benchmarks is naive – Ward A, Apr 11, 2023 (9 points, 2 comments, 1 min read)
Artificial Intelligence as exit strategy from the age of acute existential risk – Arturo Macias, Apr 12, 2023 (11 points, 11 comments, 7 min read)
Why it is important to reduce existential risk (in Italian) – EA Italy, Jan 12, 2023 (1 point, 0 comments, 2 min read)
[Optional] Further reading on “Our Final Century” (in Italian) – EA Italy, Jan 12, 2023 (1 point, 0 comments, 2 min read)
[Optional] “Crucial Considerations and Wise Philanthropy”, by Nick Bostrom (in Italian) – EA Italy, Jan 12, 2023 (1 point, 0 comments, 1 min read) (altruismoefficace.it)
[US] NTIA: AI Accountability Policy Request for Comment – Kyle J. Lucchese, Apr 13, 2023 (47 points, 4 comments, 1 min read) (ntia.gov)
Technical Report on Mirror Bacteria: Feasibility and Risks – Aaron Gertler 🔸, Dec 12, 2024 (244 points, 18 comments, 1 min read) (purl.stanford.edu)
Open-source LLMs may prove Bostrom’s vulnerable world hypothesis – Roope Ahvenharju, Apr 14, 2023 (14 points, 2 comments, 1 min read)
“Risk Awareness Moments” (Rams): A concept for thinking about AI governance interventions – oeg, Apr 14, 2023 (53 points, 0 comments, 9 min read)
Loop-mediated isothermal amplification (LAMP) for pandemic pathogen diagnostics: How it differs from PCR and why it isn’t more widely used – Julia Niggemeyer, Sep 2, 2024 (21 points, 0 comments, 13 min read)
Prospects for AI safety agreements between countries – oeg, Apr 14, 2023 (104 points, 3 comments, 22 min read)
World and Mind in Artificial Intelligence: arguments against the AI pause – Arturo Macias, Apr 18, 2023 (6 points, 3 comments, 5 min read)
Conflicting Effects of Existential Risk Mitigation Interventions – Pete Rowlett, May 10, 2023 (10 points, 0 comments, 8 min read)
Introducing the Mental Health Roadmap Series – Emily, Apr 11, 2023 (18 points, 2 comments, 2 min read)
Interventions to Reduce Risk for Pathogen Spillover – JMonty🔸, Apr 22, 2023 (13 points, 0 comments, 3 min read) (wwwnc.cdc.gov)
What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism? – rime, Apr 18, 2023 (44 points, 0 comments, 2 min read)
[Optional] All possible conclusions about humanity’s future are incredible (in Italian) – EA Italy, Jan 17, 2023 (1 point, 0 comments, 8 min read)
[Optional] Why I’m probably not a longtermist (in Italian) – EA Italy, Jan 17, 2023 (1 point, 0 comments, 8 min read)
AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media – Oliver Z, Apr 18, 2023 (56 points, 1 comment, 4 min read) (newsletter.safe.ai)
[Question] “We Are the Weather” Reviews – JonC, Apr 18, 2023 (3 points, 2 comments, 1 min read)
Why suffering risks are the worst existential risks and how we can prevent them (in Italian) – EA Italy, Jan 17, 2023 (1 point, 0 comments, 1 min read)
OPEC for a slow AGI takeoff – vyrax, Apr 21, 2023 (4 points, 0 comments, 3 min read)
Investigative Journalists are more effective altruists than most – AlanGreenspan, Sep 27, 2023 (2 points, 8 comments, 1 min read)
Notes on “the hot mess theory of AI misalignment” – JakubK, Apr 21, 2023 (44 points, 3 comments, 1 min read)
AI Progress: The Game Show – Alex Arnett, Apr 21, 2023 (3 points, 0 comments, 2 min read)
Silly idea to enhance List representation accuracy – Phib, Apr 24, 2023 (7 points, 4 comments, 2 min read)
Why aren’t we looking at the stars? – CMDR Dantae, Apr 24, 2023 (5 points, 4 comments, 2 min read)
A Counterargument to the Argument of Astronomical Waste – Markus Bredberg, Apr 24, 2023 (13 points, 0 comments, 4 min read)
AI Safety Newsletter #3: AI policy proposals and a new challenger approaches – Oliver Z, Apr 25, 2023 (35 points, 1 comment, 4 min read) (newsletter.safe.ai)
Briefly how I’ve updated since ChatGPT – rime, Apr 25, 2023 (29 points, 8 comments, 2 min read) (www.lesswrong.com)
Proposals for the AI Regulatory Sandbox in Spain – Guillem Bas, Apr 27, 2023 (55 points, 2 comments, 11 min read) (riesgoscatastroficosglobales.com)
[Question] How come there isn’t that much focus in EA on research into whether/when AIs are likely to be sentient? – callum, Apr 27, 2023 (83 points, 23 comments, 1 min read)
The AI guide I’m sending my grandparents – James Martin, Apr 27, 2023 (41 points, 3 comments, 30 min read)
Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism” – Jens Aslaug 🔸, Sep 12, 2023 (51 points, 20 comments, 17 min read)
UK Prime Minister Rishi Sunak’s Speech on AI – Tobias Häberli, Oct 26, 2023 (112 points, 6 comments, 8 min read) (www.gov.uk)
Risk and Resilience in the Face of Global Catastrophe: A Closer Look at New Zealand’s Food Security [link(s)post] – Matt Boyd, Apr 27, 2023 (21 points, 0 comments, 1 min read)
Call for submissions: Choice of Futures survey questions – c.trout, Apr 30, 2023 (11 points, 0 comments, 1 min read)
New Nuclear Security Syllabus + Summer Course – Maya D, May 1, 2023 (45 points, 5 comments, 1 min read)
My current take on existential AI risk [FB post] – Aryeh Englander, May 1, 2023 (10 points, 0 comments, 3 min read)
Retrospective on recent activity of Riesgos Catastróficos Globales – Jaime Sevilla, May 1, 2023 (45 points, 0 comments, 5 min read)
Simulating a possible alignment solution in GPT2-medium using Archetypal Transfer Learning – Miguel, May 2, 2023 (4 points, 0 comments, 18 min read)
AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks – Center for AI Safety, May 2, 2023 (35 points, 2 comments, 5 min read) (newsletter.safe.ai)
Updates from Campaign for AI Safety – Jolyn Khoo, Aug 7, 2023 (32 points, 2 comments, 2 min read) (www.campaignforaisafety.org)
Metaculus Launches Space Technology & Climate Forecasting Initiative – christian, Oct 11, 2023 (11 points, 1 comment, 1 min read) (www.metaculus.com)
Some for-profit AI alignment org ideas – Eric Ho, Dec 14, 2023 (33 points, 1 comment, 9 min read)
Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds – JakubK, May 2, 2023 (15 points, 0 comments, 1 min read) (nyupress.org)
The Hidden Complexity of Wishes—The Animation – Writer, Sep 27, 2023 (7 points, 0 comments, 1 min read) (youtu.be)
The Ethical Basilisk Thought Experiment – Kyrtin, Aug 23, 2023 (1 point, 6 comments, 1 min read)
Announcing the Confido app: bringing forecasting to everyone – Blanka, May 16, 2023 (104 points, 2 comments, 9 min read)
Most Leading AI Experts Believe That Advanced AI Could Be Extremely Dangerous to Humanity – jai, May 4, 2023 (31 points, 1 comment, 1 min read) (laneless.substack.com)
An Update On The Campaign For AI Safety Dot Org – yanni kyriacos, May 5, 2023 (26 points, 4 comments, 1 min read)
Summary: High risk, low reward: A challenge to the astronomical value of existential risk mitigation – Global Priorities Institute, Sep 12, 2023 (70 points, 20 comments, 5 min read) (globalprioritiesinstitute.org)
Security Among The Stars—a detailed appraisal of space settlement and existential risk – Christopher Lankhof, Nov 13, 2023 (27 points, 9 comments, 2 min read)
Introducing the AI Objectives Institute’s Research: Differential Paths toward Safe and Beneficial AI – cmck, May 5, 2023 (43 points, 1 comment, 8 min read)
Alignment, Goals, & The Gut-Head Gap: A Review of Ngo et al. – Violet Hour, May 11, 2023 (26 points, 0 comments, 13 min read)
Graphical Representations of Paul Christiano’s Doom Model – Nathan Young, May 7, 2023 (48 points, 2 comments, 1 min read)
Open call: AI Act Standard for Dev. Phase Risk Assessment – miller-max, Dec 8, 2023 (5 points, 1 comment, 1 min read)
Why “just make an agent which cares only about binary rewards” doesn’t work. – Lysandre Terrisse, May 9, 2023 (4 points, 1 comment, 3 min read)
Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure – Otto, May 8, 2023 (28 points, 5 comments, 6 min read)
US public opinion of AI policy and risk – Jamie E, May 12, 2023 (111 points, 7 comments, 15 min read)
Towards a Global Nose to Sniff (and Snuff) Out Future Pandemics – Akash Kulgod, May 10, 2023 (39 points, 1 comment, 3 min read)
Chilean AIS Hackathon Retrospective – Agustín Covarrubias 🔸, May 9, 2023 (67 points, 0 comments, 5 min read)
You don’t need to be a genius to be in AI safety research – Claire Short, May 10, 2023 (28 points, 4 comments, 6 min read)
How The EthiSizer Almost Broke ‘Story’ – Velikovsky_of_Newcastle, May 8, 2023 (1 point, 0 comments, 5 min read)
Can AI solve climate change? – Vivian, May 13, 2023 (2 points, 2 comments, 1 min read)
AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models – Center for AI Safety, May 9, 2023 (60 points, 0 comments, 4 min read) (newsletter.safe.ai)
OpenAI’s new Preparedness team is hiring – leopold, Oct 26, 2023 (85 points, 13 comments, 1 min read)
AI-Risk in the State of the European Union Address – Sam Bogerd, Sep 13, 2023 (25 points, 0 comments, 3 min read) (state-of-the-union.ec.europa.eu)
AI Existential Safety Fellowships – mmfli, Oct 27, 2023 (15 points, 1 comment, 1 min read)
Existential Hope and Existential Risk: Exploring the value of optimistic approaches to shaping the long-term future – Vilhelm Skoglund, Oct 27, 2023 (36 points, 3 comments, 24 min read)
[Question] Intellectual property of AI and existential risk in general? – WillPearson, Jun 11, 2024 (3 points, 3 comments, 1 min read)
Best practices for risk communication from the academic literature – Existential Risk Communication Project, Aug 12, 2024 (9 points, 3 comments, 23 min read)
Two Reasons For Restarting the Testing of Nuclear Weapons – niplav, Aug 8, 2023 (17 points, 2 comments, 5 min read)
Imposing a Lifestyle: A New Argument for Antinatalism – Oldphan, Aug 23, 2023 (10 points, 1 comment, 1 min read) (www.cambridge.org)
[Question] Can increasing Trust amongst humans be considered our greatest priority? – Firas Najjar, Aug 24, 2023 (4 points, 4 comments, 1 min read)
Safety-concerned EAs should prioritize AI governance over alignment – sammyboiz, Jun 11, 2024 (59 points, 20 comments, 1 min read)
Fundamentals of Global Priorities Research in Economics Syllabus – poliboni, Aug 8, 2023 (74 points, 1 comment, 8 min read)
Tyler Cowen’s challenge to develop an ‘actual mathematical model’ for AI X-Risk – Joe Brenton, May 16, 2023 (20 points, 4 comments, 1 min read)
A model-based approach to AI Existential Risk – SammyDMartin, Aug 25, 2023 (17 points, 0 comments, 1 min read) (www.lesswrong.com)
Microdooms averted by working on AI Safety – Nikola, Sep 17, 2023 (39 points, 6 comments, 3 min read) (www.lesswrong.com)
Existential Cybersecurity Risks & AI (A Research Agenda) – Madhav Malhotra, Sep 20, 2023 (7 points, 0 comments, 8 min read)
Radical Longtermism and the Seduction of Endless Growth: A Critique of William MacAskill’s ‘What We Owe the Future’ – Alexander Herwix 🔸, Sep 14, 2023 (−13 points, 15 comments, 1 min read) (perspecteeva.substack.com)
[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety – Otto, Aug 24, 2023 (14 points, 2 comments, 5 min read)
AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety – Center for AI Safety, Aug 8, 2023 (12 points, 0 comments, 5 min read) (newsletter.safe.ai)
Jan Kulveit’s Corrigibility Thoughts Distilled – brook, Aug 25, 2023 (16 points, 0 comments, 5 min read) (www.lesswrong.com)
How Engineers can Contribute to Reducing the Risks from Nuclear War – Jessica Wen, Oct 12, 2023 (33 points, 4 comments, 22 min read) (www.highimpactengineers.org)
[Question] Why was Stanislav Petrov not awarded the Nobel Peace Prize? – Miquel Banchs-Piqué (prev. mikbp), Oct 12, 2023 (4 points, 2 comments, 1 min read)
At Our World in Data we’re hiring our first Communications & Outreach Manager – Charlie Giattino, Oct 13, 2023 (25 points, 0 comments, 1 min read) (ourworldindata.org)
U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments – DannyBressler, May 18, 2023 (79 points, 6 comments, 6 min read)
Labor Participation is a High-Priority AI Alignment Risk – alx, Aug 12, 2024 (16 points, 3 comments, 16 min read)
Case study: Safety standards on California utilities to prevent wildfires – Coby Joseph, Dec 6, 2023 (7 points, 1 comment, 26 min read)
G7 Summit—Cooperation on AI Policy – Leonard_Barrett, May 19, 2023 (22 points, 2 comments, 1 min read) (www.japantimes.co.jp)
Anki deck for learning the main AI safety orgs, projects, and programs – Bryce Robertson, Sep 29, 2023 (17 points, 5 comments, 1 min read)
Announcing a new organization: Epistea – Epistea, May 22, 2023 (49 points, 2 comments, 2 min read)
Nuclear winter—Reviewing the evidence, the complexities, and my conclusions – Michael Hinge, Aug 25, 2023 (148 points, 26 comments, 36 min read)
Ilya: The AI scientist shaping the world – David Varga, Nov 20, 2023 (6 points, 1 comment, 4 min read)
Is fear productive when communicating AI x-risk? [Study results] – Johanna Roniger, Jan 22, 2024 (73 points, 10 comments, 5 min read)
Effective Altruism Florida’s AI Expert Panel—Recording and Slides Available – Sam_E_24, May 19, 2023 (2 points, 0 comments, 1 min read)
Can Quantum Computation be used to mitigate existential risk? – Angus LaFemina, Sep 18, 2023 (10 points, 3 comments, 10 min read)
Managing the contribution of Solar Radiation Modification (SRM) and Climate Change to Global Catastrophic Risk (GCR) - Workshop Report – Gideon Futerman, Dec 8, 2023 (12 points, 0 comments, 5 min read)
Announcing the Prague Fall Season 2023 and the Epistea Residency Program – Epistea, May 22, 2023 (88 points, 2 comments, 4 min read)
Israeli Prime Minister, Musk and Tegmark on AI Safety – Michaël Trazzi, Sep 18, 2023 (23 points, 13 comments, 1 min read) (twitter.com)
Former Israeli Prime Minister Speaks About AI X-Risk – Yonatan Cale, May 20, 2023 (73 points, 6 comments, 1 min read)
Announcing the Prague community space: Fixed Point – Epistea, May 22, 2023 (69 points, 2 comments, 3 min read)
X-risk discussion in a college commencement speech – SWK, May 22, 2023 (37 points, 6 comments, 1 min read)
4 types of AGI selection, and how to constrain them – Remmelt, Aug 9, 2023 (7 points, 0 comments, 3 min read)
Is AI Safety dropping the ball on privacy? – markov, Sep 19, 2023 (10 points, 0 comments, 7 min read)
Podcast on Oppenheimer and Nuclear Security with Carl Robichaud – Garrison, Aug 9, 2023 (23 points, 0 comments, 2 min read) (bit.ly)
New s-risks audiobook available now – Alistair Webster, May 24, 2023 (87 points, 3 comments, 1 min read) (centerforreducingsuffering.org)
Diagram with Commentary for AGI as an X-Risk – Jared Leibowich, May 24, 2023 (20 points, 4 comments, 8 min read)
Will AI end everything? A guide to guessing | EAG Bay Area 23 – Katja_Grace, May 25, 2023 (74 points, 1 comment, 21 min read)
Ukraine War support and targeted sanctions – Arturo Macias, Dec 11, 2023 (−7 points, 1 comment, 2 min read)
[Question] I’m interviewing Carl Shulman — what should I ask him? – Robert_Wiblin, Dec 8, 2023 (53 points, 16 comments, 1 min read)
[Linkpost] Longtermists Are Pushing a New Cold War With China – Radical Empath Ismam, May 27, 2023 (37 points, 16 comments, 1 min read) (jacobin.com)
Diminishing Returns in Machine Learning Part 1: Hardware Development and the Physical Frontier – Brian Chau, May 27, 2023 (16 points, 3 comments, 12 min read) (www.fromthenew.world)
Language Agents Reduce the Risk of Existential Catastrophe – cdkg, May 29, 2023 (29 points, 6 comments, 26 min read)
Summary: Existential risk from power-seeking AI by Joseph Carlsmith – rileyharris, Oct 28, 2023 (11 points, 0 comments, 6 min read) (www.millionyearview.com)
The US plans to spend $1.5 Trillion upgrading its Nuclear Missiles!! – Denis, Nov 15, 2023 (9 points, 9 comments, 2 min read)
Biological superintelligence: a solution to AI safety – Yarrow, Dec 4, 2023 (0 points, 6 comments, 1 min read)
From voluntary to mandatory, are the ESG disclosure frameworks still fertile ground for unrealised EA career pathways? – A 2023 update on ESG potential impact – Christopher Chan, Jun 4, 2023 (21 points, 5 comments, 11 min read)
AI Safety Newsletter #8: Rogue AIs, how to screen for AI risks, and grants for research on democratic governance of AI – Center for AI Safety, May 30, 2023 (16 points, 3 comments, 6 min read) (newsletter.safe.ai)
Status Quo Engines—AI essay – Ilana_Goldowitz_Jimenez, May 28, 2023 (1 point, 0 comments, 15 min read)
Implications of AGI on Subjective Human Experience – Erica S., May 30, 2023 (2 points, 0 comments, 19 min read) (docs.google.com)
Boomerang—protocol to dissolve some commitment races – Filip Sondej, May 30, 2023 (20 points, 0 comments, 8 min read) (www.lesswrong.com)
Rethink Priorities’ 2023 Summary, 2024 Strategy, and Funding Gaps – kierangreig🔸, Nov 15, 2023 (86 points, 7 comments, 3 min read)
[Question] Is anyone working on safe selection pressure for digital minds? – WillPearson, Dec 12, 2023 (10 points, 9 comments, 1 min read)
EA is underestimating intelligence agencies and this is dangerous – trevor1, Aug 26, 2023 (28 points, 4 comments, 10 min read)
My Journey Towards Effective Altruism: Embracing Our Cosmic Responsibility – JordanStone, Nov 15, 2023 (12 points, 0 comments, 2 min read)
Beyond Humans: Why All Sentient Beings Matter in Existential Risk – Teun van der Weij, May 31, 2023 (12 points, 0 comments, 13 min read)
A survey of concrete risks derived from Artificial Intelligence – Guillem Bas, Jun 8, 2023 (36 points, 2 comments, 6 min read) (riesgoscatastroficosglobales.com)
Update from Campaign for AI Safety – Nik Samoylov, Jun 1, 2023 (22 points, 0 comments, 2 min read) (www.campaignforaisafety.org)
Safe AI and moral AI – William D'Alessandro, Jun 1, 2023 (3 points, 0 comments, 11 min read)
Prior X%—<1%: A quantified ‘epistemic status’ of your prediction. – tcelferact, Jun 2, 2023 (11 points, 1 comment, 1 min read)
Intrinsic limitations of GPT-4 and other large language models, and why I’m not (very) worried about GPT-n – James Fodor, Jun 3, 2023 (28 points, 3 comments, 11 min read)
The Precipice (To read: Chapter 2) – Jesse Rothman, Feb 1, 2022 (13 points, 2 comments, 16 min read) (www.youtube.com)
Input sought on next steps for the XPT (also, we’re hiring!) – Forecasting Research Institute, Sep 29, 2023 (34 points, 3 comments, 5 min read)
Decomposing alignment to take advantage of paradigms – Christopher King, Jun 4, 2023 (2 points, 0 comments, 4 min read)
Lessons from the past for our global civilization – FJehn, Aug 10, 2023 (4 points, 0 comments, 7 min read) (existentialcrunch.substack.com)
Funding for work that builds capacity to address risks from transformative AI – GCR Capacity Building team (Open Phil), Aug 13, 2024 (40 points, 1 comment, 5 min read)
Moral Spillover in Human-AI Interaction – Katerina Manoli, Jun 5, 2023 (17 points, 1 comment, 13 min read)
Your Chance to Save Lives. Today. – LiaH, Oct 13, 2023 (−6 points, 7 comments, 2 min read)
[Question] Would a much-improved understanding of regime transitions have a net positive impact? – Michael Latowicki, Jun 5, 2023 (18 points, 8 comments, 1 min read)
Uncertainty about the future does not imply that AGI will go well – Lauro Langosco, Jun 5, 2023 (8 points, 11 comments, 7 min read) (www.alignmentforum.org)
Why microplastics should matter to EAs – BiancaCojocaru, Dec 4, 2023 (4 points, 2 comments, 3 min read)
Simulating the end of the world: Exploring the current state of societal dynamics modeling – FJehn, Nov 15, 2023 (11 points, 2 comments, 10 min read) (existentialcrunch.substack.com)
EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report – t46, Jun 6, 2023 (42 points, 5 comments, 8 min read)
AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level? – Center for AI Safety, Jun 6, 2023 (12 points, 2 comments, 7 min read) (newsletter.safe.ai)
Tim Cook was asked about extinction risks from AI – Saul Munn, Jun 6, 2023 (8 points, 1 comment, 1 min read)
Mapping out collapse research – FJehn, Jun 7, 2023 (18 points, 2 comments, 11 min read) (existentialcrunch.substack.com)
Funding for programs and events on global catastrophic risk, effective altruism, and other topics – GCR Capacity Building team (Open Phil), Aug 13, 2024 (46 points, 0 comments, 2 min read)
The Offense-Defense Balance Rarely Changes – Maxwell Tabarrok, Dec 9, 2023 (81 points, 16 comments, 3 min read) (maximumprogress.substack.com)
Article Summary: Current and Near-Term AI as a Potential Existential Risk Factor – AndreFerretti, Jun 7, 2023 (12 points, 1 comment, 1 min read) (dl.acm.org)
Automated Parliaments — A Solution to Decision Uncertainty and Misalignment in Language Models – Shak Ragoler, Oct 2, 2023 (8 points, 0 comments, 17 min read)
Examining pathways through which narrow AI systems might increase the likelihood of nuclear war – oeg, Jun 14, 2023 (8 points, 2 comments, 2 min read)
ERA Fellowship Alumni Stories – MvK🔸, Oct 1, 2023 (18 points, 1 comment, 8 min read)
Understanding how hard alignment is may be the most important research direction right now – Aron, Jun 7, 2023 (26 points, 3 comments, 6 min read) (coordinationishard.substack.com)
[Question] What is MIRI currently doing? – Roko, Dec 14, 2024 (9 points, 2 comments, 1 min read)
ERA’s Theory of Change – nandini, Aug 10, 2023 (28 points, 1 comment, 13 min read)
Wild Animal Welfare Scenarios for AI Doom – utilistrutil, Jun 8, 2023 (53 points, 2 comments, 3 min read)
Careless talk on US-China AI competition? (and criticism of CAIS coverage) – Oliver Sourbut, Sep 20, 2023 (52 points, 19 comments, 1 min read) (www.oliversourbut.net)
Engaging with AI in a Personal Way – Spyder Rex, Dec 4, 2023 (−9 points, 0 comments, 1 min read)
Chapter 4 of The Precipice in poem form – Lauriander, Nov 29, 2023 (20 points, 3 comments, 1 min read)
The convergent dynamic we missed – Remmelt, Dec 12, 2023 (2 points, 0 comments, 3 min read)
Bryan Caplan on pacifism – Vasco Grilo🔸, Dec 9, 2023 (10 points, 4 comments, 7 min read) (www.econlib.org)
If you are too stressed, walk away from the front lines – Neil Warren, Jun 12, 2023 (7 points, 2 comments, 4 min read)
There is only one goal or drive—only self-perpetuation counts – freest one, Jun 13, 2023 (2 points, 4 comments, 8 min read)
Seeking Input to AI Safety Book for non-technical audience – Darren McKee, Aug 10, 2023 (11 points, 4 comments, 1 min read)
[Question] What’s the exact way you predict probability of AI extinction? – jackchang110, Jun 13, 2023 (18 points, 7 comments, 1 min read)
Raising the voices that actually count – Kim Holder, Jun 13, 2023 (2 points, 3 comments, 2 min read)
More coordinated civil society action on reducing nuclear risk – Sarah Weiler, Dec 13, 2023 (8 points, 1 comment, 8 min read)
Summary of Eliezer Yudkowsky’s “Cognitive Biases Potentially Affecting Judgment of Global Risks” – Damin Curtis🔹, Nov 7, 2023 (5 points, 2 comments, 6 min read)
Report: Artificial Intelligence Risk Management in Spain – JorgeTorresC, Jun 15, 2023 (22 points, 0 comments, 3 min read) (riesgoscatastroficosglobales.com)
LLMs won’t lead to AGI—Francois Chollet – tobycrisford 🔸, Jun 11, 2024 (37 points, 23 comments, 1 min read) (www.youtube.com)
EU AI Act passed vote, and x-risk was a main topic – Ariel, Jun 15, 2023 (43 points, 2 comments, 1 min read) (www.euractiv.com)
Introducing Collective Action for Existential Safety: 80+ actions individuals, organizations, and nations can take to improve our existential safety – James Norris, Feb 5, 2025 (9 points, 0 comments, 1 min read)
Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation – Paul Bricman, Dec 4, 2023 (4 points, 0 comments, 16 min read) (arxiv.org)
Updates from Campaign for AI Safety – Jolyn Khoo, Jun 16, 2023 (15 points, 3 comments, 2 min read) (www.campaignforaisafety.org)
Scenario planning for AI x-risk – Corin Katzke, Feb 10, 2024 (40 points, 0 comments, 15 min read) (www.convergenceanalysis.org)
Against Anonymous Hit Pieces – Anti-Omega, Jun 18, 2023 (−25 points, 3 comments, 1 min read)
Introducing the Center for AI Policy (& we’re hiring!) – Thomas Larsen, Aug 28, 2023 (53 points, 1 comment, 2 min read) (www.aipolicy.us)
Communication by existential risk organizations: State of the field and suggestions for improvement – Existential Risk Communication Project, Aug 13, 2024 (10 points, 3 comments, 13 min read)
Book summary: ‘Why Intelligence Fails’ by Robert Jervis – Ben Stewart, Jun 19, 2023 (40 points, 3 comments, 12 min read)
LPP Summer Research Fellowship in Law & AI 2023: Applications Open – Legal Priorities Project, Jun 20, 2023 (43 points, 4 comments, 4 min read)
What We Owe The Future: A Buried Essay – haven_worsham, Jun 20, 2023 (19 points, 0 comments, 16 min read)
Against Making Up Our Conscious Minds – Silica, Feb 10, 2024 (13 points, 0 comments, 5 min read)
Updates from Campaign for AI Safety – Jolyn Khoo, Oct 31, 2023 (14 points, 1 comment, 2 min read) (www.campaignforaisafety.org)
Global Catastrophic Biological Risks: A Guide for Philanthropists [Founders Pledge] – christian.r, Oct 31, 2023 (32 points, 0 comments, 6 min read) (www.founderspledge.com)
20 concrete projects for reducing existential risk – Buhl, Jun 21, 2023 (132 points, 27 comments, 20 min read) (rethinkpriorities.org)
US public perception of CAIS statement and the risk of extinction – Jamie E, Jun 22, 2023 (126 points, 4 comments, 9 min read)
Thinking-in-limits about TAI from the demand perspective. Demand saturation, resource wars, new debt. – Ivan Madan, Nov 7, 2023 (2 points, 0 comments, 4 min read)
The price is right – EJT, Oct 16, 2023 (27 points, 5 comments, 4 min read) (openairopensea.substack.com)
Modelling large-scale cyber attacks from advanced AI systems with Advanced Persistent Threats – Iyngkarran Kumar, Oct 2, 2023 (28 points, 2 comments, 30 min read)
Nuclear winter scepticism – Vasco Grilo🔸, Aug 13, 2023 (110 points, 42 comments, 10 min read) (www.navalgazing.net)
Summary of “The Precipice” (2 of 4): We are a danger to ourselves – rileyharris, Aug 13, 2023 (5 points, 0 comments, 8 min read) (www.millionyearview.com)
Introducing Pivotal, an essay contest on global problems for high school students – SahebG, Aug 14, 2023 (34 points, 7 comments, 1 min read)
AISN #20: LLM Proliferation, AI Deception, and Continuing Drivers of AI Capabilities – Center for AI Safety, Aug 29, 2023 (12 points, 0 comments, 8 min read) (newsletter.safe.ai)
Observatorio de Riesgos Catastróficos Globales (ORCG) Recap 2023 – JorgeTorresC, Dec 14, 2023 (75 points, 0 comments, 3 min read) (riesgoscatastroficosglobales.com)
“Longtermist causes” is a tricky classification – Lizka, Aug 29, 2023 (63 points, 3 comments, 5 min read)
Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy – Garrison, Feb 10, 2024 (286 points, 20 comments, 3 min read) (garrisonlovely.substack.com)
Beware of the new scaling paradigm – JohanEA, Sep 19, 2024 (9 points, 2 comments, 3 min read)
My cover story in Jacobin on AI capitalism and the x-risk debates – Garrison, Feb 12, 2024 (154 points, 10 comments, 6 min read) (jacobin.com)
We seek professionals to identify, prevent, and mitigate Global Catastrophic Risks in Latin America and Spain – JorgeTorresC, Feb 13, 2024 (23 points, 0 comments, 1 min read)
AUKUS Military AI Trial – CAISID, Feb 14, 2024 (10 points, 0 comments, 2 min read)
Trump talking about AI risks – defun 🔸, Jun 14, 2024 (43 points, 2 comments, 1 min read) (x.com)
Some thoughts on Leopold Aschenbrenner’s Situational Awareness paper – Luke Dawes, Jun 14, 2024 (14 points, 1 comment, 3 min read)
Nuclear weapons – Problem profile – Benjamin Hilton, Jul 19, 2024 (53 points, 7 comments, 31 min read)
At Our World in Data we’re hiring a Senior Full-stack Engineer – Charlie Giattino, Dec 15, 2023 (16 points, 0 comments, 1 min read) (ourworldindata.org)
CLR Summer Research Fellowship 2024 – Center on Long-Term Risk, Feb 15, 2024 (89 points, 2 comments, 8 min read)
Introducing International AI Governance Alliance (IAIGA) – James Norris, Feb 5, 2025 (12 points, 0 comments, 1 min read)
“No-one in my org puts money in their pension” – tobyj, Feb 16, 2024 (157 points, 11 comments, 9 min read) (seekingtobejolly.substack.com)
Does natural selection favor AIs over humans? – cdkg, Oct 3, 2024 (21 points, 0 comments, 1 min read) (link.springer.com)
How Technical AI Safety Researchers Can Help Implement Punitive Damages to Mitigate Catastrophic AI Risk – Gabriel Weil, Feb 19, 2024 (28 points, 2 comments, 4 min read)
IFRC creative competition: product or service from future autonomous weapons systems and emerging digital risks – Devin Lam, Jul 21, 2024 (9 points, 0 comments, 1 min read) (solferinoacademy.com)
Big Picture AI Safety: teaser – EuanMcLean, Feb 20, 2024 (18 points, 0 comments, 1 min read)
Podcast with David Thorstad: Evidence, Uncertainty, and Existential Risk – Leah Pierson, Feb 11, 2025 (34 points, 2 comments, 1 min read) (www.biounethical.com)
An­i­mal Weapons: Les­son learned from biolog­i­cal arms race to mod­ern day weapons

Halwenge Feb 25, 2024, 2:06 PM
2 points
0 comments4 min readEA link

The Pend­ing Disaster Fram­ing as it Re­lates to AI Risk

Chris LeongFeb 25, 2024, 3:47 PM
8 points
2 comments6 min readEA link

Com­par­ing sam­pling strate­gies for early de­tec­tion of stealth biothreats

slgFeb 26, 2024, 11:14 PM
19 points
3 comments26 min readEA link
(naobservatory.org)

[Question] Is there a re­cap of rele­vant jobs in the nu­clear risk sec­tor/​nu­clear en­ergy sec­tor for EAs?

VaipanFeb 26, 2024, 2:21 PM
6 points
7 comments1 min readEA link

[Question] Why won’t nan­otech kill us all?

YarrowDec 16, 2023, 11:27 PM
19 points
5 comments1 min readEA link

(Ap­pli­ca­tions Open!) UChicago XLab Sum­mer Re­search Fel­low­ship 2024

ZacharyRudolphFeb 26, 2024, 6:20 PM
15 points
0 comments4 min readEA link
(xrisk.uchicago.edu)

#180 – Why gullibil­ity and mis­in­for­ma­tion are over­rated (Hugo Mercier on the 80,000 Hours Pod­cast)

80000_HoursFeb 26, 2024, 7:16 PM
15 points
0 comments18 min readEA link

The Value of a Statis­ti­cal Life is not a good metric

Christopher ClayMar 19, 2025, 9:11 AM
23 points
3 comments1 min readEA link

Is it pos­si­bly de­sir­able for sen­tient ASI to ex­ter­mi­nate hu­mans?

DuckruckJun 18, 2024, 3:20 PM
0 points
4 comments1 min readEA link

Crit­i­cal-Set Views, Bio­graph­i­cal Iden­tity, and the Long Term

EJTFeb 28, 2024, 2:30 PM
9 points
3 comments1 min readEA link
(philpapers.org)

IV. Par­allels and Review

Maynk02Feb 27, 2024, 11:10 PM
7 points
1 comment8 min readEA link
(open.substack.com)

‘Surveillance Cap­i­tal­ism’ & AI Gover­nance: Slip­pery Busi­ness Models, Se­cu­ri­ti­sa­tion, and Self-Regulation

Charlie HarrisonFeb 29, 2024, 3:47 PM
19 points
2 comments12 min readEA link

An in­ter­sec­tion be­tween an­i­mal welfare and AI

sammyboizJun 18, 2024, 3:23 AM
9 points
1 comment1 min readEA link

Re­duce AGI risks us­ing mod­ern lie de­tec­tion technology

NothingIsArtSep 30, 2024, 6:12 PM
1 point
0 comments1 min readEA link

#203 – In­terfer­ing with wild na­ture, ac­cept­ing death, and the ori­gin of com­plex civil­i­sa­tion (Peter God­frey-Smith on The 80,000 Hours Pod­cast)

80000_HoursOct 4, 2024, 1:00 PM
14 points
0 comments16 min readEA link

6) Speed is The Most Im­por­tant Vari­able in Pan­demic Risk Management

PandemicRiskManMar 5, 2024, 1:51 PM
3 points
0 comments9 min readEA link

An­thropic An­nounces new S.O.T.A. Claude 3

Joseph MillerMar 4, 2024, 7:02 PM
10 points
5 comments1 min readEA link
(twitter.com)

[Question] Ex­is­ten­tial risk man­age­ment in cen­tral gov­ern­ment? Where is it?

WillPearsonMar 4, 2024, 4:22 PM
6 points
2 comments1 min readEA link

S-risks, X-risks, and Ideal Futures

OscarD🔸Jun 18, 2024, 3:12 PM
15 points
6 comments1 min readEA link

INTERVIEW: StakeOut.AI w/​ Dr. Peter Park

Jacob-HaimesMar 5, 2024, 6:04 PM
21 points
7 comments1 min readEA link
(into-ai-safety.github.io)

Ja­panese or­ga­ni­za­tion for atomic bomb sur­vivors Nihon Hi­dankyo has been awarded the No­bel Peace Prize

Jonny Spicer 🔸Oct 11, 2024, 11:31 AM
6 points
1 comment1 min readEA link

NTIA Solic­its Com­ments on Open-Weight AI Models

Jacob WoessnerMar 6, 2024, 8:05 PM
11 points
1 comment2 min readEA link
(www.ntia.gov)

Sum­mary: Longter­mism, Ag­gre­ga­tion, and Catas­trophic Risk (Emma J. Cur­ran)

Noah Varley🔸Mar 7, 2024, 2:31 PM
24 points
7 comments7 min readEA link

Why mis­al­igned AGI won’t lead to mass kil­lings (and what ac­tu­ally mat­ters in­stead)

Julian NalenzFeb 6, 2025, 1:22 PM
−3 points
5 comments3 min readEA link
(blog.hermesloom.org)

Cli­mate Ad­vo­cacy and AI Safety: Su­per­charg­ing AI Slow­down Advocacy

Matthew McRedmond🔹Jul 25, 2024, 12:08 PM
8 points
7 comments2 min readEA link

Case stud­ies on so­cial-welfare-based stan­dards in var­i­ous industries

Holden KarnofskyJun 20, 2024, 1:33 PM
73 points
2 comments1 min readEA link

Aiming for heaven [short poem]

Ávila CarmesíMar 10, 2024, 6:14 AM
31 points
4 comments1 min readEA link

Giv­ing What We Can is now its own le­gal en­tity!

Alana HFSep 3, 2024, 8:05 PM
109 points
2 comments1 min readEA link
(www.givingwhatwecan.org)

FLI is hiring across Comms and Ops

Ben_EisenpressJul 25, 2024, 12:02 AM
8 points
0 comments1 min readEA link

Pro­ject pro­posal: Sce­nario anal­y­sis group for AI safety strategy

BuhlDec 18, 2023, 6:31 PM
35 points
0 comments5 min readEA link
(rethinkpriorities.org)

AI Safety In­cu­ba­tion Pro­gram—Ap­pli­ca­tions Open

Catalyze ImpactAug 16, 2024, 3:37 PM
11 points
0 comments2 min readEA link

Re­search pro­ject idea: food stock­piling as a GCR intervention

Will Howard🔹Mar 12, 2024, 12:59 PM
8 points
5 comments3 min readEA link

Re­sults from an Ad­ver­sar­ial Col­lab­o­ra­tion on AI Risk (FRI)

Forecasting Research InstituteMar 11, 2024, 3:54 PM
193 points
25 comments9 min readEA link
(forecastingresearch.org)

An­nounc­ing the Cam­bridge ERA:AI Fel­low­ship 2024

erafellowshipMar 11, 2024, 7:06 PM
31 points
5 comments3 min readEA link

The ‘Ne­glected Ap­proaches’ Ap­proach: AE Stu­dio’s Align­ment Agenda

Marc CarauleanuDec 18, 2023, 9:13 PM
21 points
0 comments12 min readEA link

Me­tac­u­lus Launches Fu­ture of AI Series, Based on Re­search Ques­tions by Arb

christianMar 13, 2024, 9:14 PM
34 points
0 comments1 min readEA link
(www.metaculus.com)

AI gov­er­nance needs a the­ory of victory

Corin KatzkeJun 21, 2024, 4:08 PM
80 points
8 comments20 min readEA link
(www.convergenceanalysis.org)

7) How to Build Speed Into Our Pan­demic Re­sponse Plans

PandemicRiskManMar 15, 2024, 4:53 PM
1 point
0 comments13 min readEA link

[Question] What hap­pened to the ‘only 400 peo­ple work in AI safety/​gov­er­nance’ num­ber dated from 2020?

VaipanMar 15, 2024, 3:25 PM
27 points
1 comment1 min readEA link

8) The Lines of Defence Ap­proach to Pan­demic Risk Management

PandemicRiskManMar 17, 2024, 7:00 PM
4 points
0 comments17 min readEA link

The Fermi para­dox, and why suffer­ing re­duc­tion re­duces ex­tinc­tion risk

Alex SchwalbMar 17, 2024, 12:26 AM
12 points
0 comments3 min readEA link

Balanc­ing safety and waste

Daniel_FriedrichMar 17, 2024, 10:57 AM
6 points
0 comments7 min readEA link

Join the AI Eval­u­a­tion Tasks Bounty Hackathon

Esben KranMar 18, 2024, 8:15 AM
20 points
0 comments4 min readEA link

Re­vis­it­ing the Evolu­tion An­chor in the Biolog­i­cal An­chors Re­port

JanviMar 18, 2024, 3:01 AM
13 points
1 comment4 min readEA link

Assess­ment of AI safety agen­das: think about the down­side risk

Roman LeventovDec 19, 2023, 9:02 AM
6 points
0 comments1 min readEA link

INTERVIEW: Round 2 - StakeOut.AI w/​ Dr. Peter Park

Jacob-HaimesMar 18, 2024, 9:26 PM
8 points
0 comments1 min readEA link
(into-ai-safety.github.io)

[Question] How much (more) data do we need to claim ex­treme cost-effec­tive­ness?

Niek Versteegde, founder GOAL 3Oct 1, 2024, 12:36 PM
28 points
14 comments6 min readEA link

[Question] How might a mis­al­igned Ar­tifi­cial Su­per­in­tel­li­gence break up a hu­man be­ing into us­able elec­tro­mag­netic en­ergy?

CarusoOct 5, 2024, 5:33 PM
−5 points
3 comments1 min readEA link

You prob­a­bly won’t solve malaria or x-risk, and that’s ok

Rory FentonMar 19, 2025, 3:07 PM
142 points
9 comments5 min readEA link

CEEALAR’s The­ory of Change

CEEALARDec 19, 2023, 8:21 PM
51 points
5 comments3 min readEA link

Timelines to Trans­for­ma­tive AI: an investigation

Zershaaneh QureshiMar 25, 2024, 6:11 PM
73 points
8 comments50 min readEA link

Sum­mary: Against the Sin­gu­lar­ity Hy­poth­e­sis (David Thorstad)

Noah Varley🔸Mar 27, 2024, 1:48 PM
63 points
10 comments5 min readEA link

My mo­ti­va­tion and the­ory of change for work­ing in AI healthtech

Andrew CritchOct 12, 2024, 12:36 AM
47 points
1 comment1 min readEA link

[Question] What is the na­ture of hu­mans gen­eral in­tel­li­gence and it’s im­pli­ca­tions for AGI?

WillPearsonMar 26, 2024, 4:22 PM
6 points
0 comments1 min readEA link

In­tro­duc­ing Bunker in Paradise

James NorrisFeb 5, 2025, 4:00 PM
2 points
0 comments1 min readEA link

[Question] Where would I find the hard­core to­tal­iz­ing seg­ment of EA?

Peter BerggrenDec 28, 2023, 9:16 AM
16 points
22 comments1 min readEA link

AI safety ad­vo­cates should con­sider pro­vid­ing gen­tle push­back fol­low­ing the events at OpenAI

I_machinegun_KellyDec 22, 2023, 9:05 PM
86 points
5 comments3 min readEA link
(www.lesswrong.com)

More thoughts on the Hu­man-AGI War

AhrenbachDec 27, 2023, 1:52 AM
2 points
0 comments7 min readEA link

Should YouTube make recom­men­da­tions for the cli­mate?

Matrice JacobineSep 5, 2024, 3:22 PM
1 point
0 comments1 min readEA link
(link.springer.com)

Why I Should Work on AI Safety—Part 2: Will AI Ac­tu­ally Sur­pass Hu­man In­tel­li­gence?

Aditya AswaniDec 27, 2023, 9:08 PM
8 points
0 comments8 min readEA link

Is Paus­ing AI Pos­si­ble?

Richard AnniloOct 9, 2024, 1:22 PM
89 points
4 comments18 min readEA link

Promethean Gover­nance and Memetic Le­gi­t­i­macy: Les­sons from the Vene­tian Doge for AI Era Institutions

LutherSloanMar 19, 2025, 6:09 PM
1 point
0 comments3 min readEA link

Re­port on the De­sir­a­bil­ity of Science Given New Biotech Risks

Matt ClancyJan 17, 2024, 7:42 PM
78 points
23 comments4 min readEA link

Stop talk­ing about p(doom)

Isaac KingJan 1, 2024, 10:57 AM
115 points
12 comments1 min readEA link

When safety is dan­ger­ous: risks of an in­definite pause on AI de­vel­op­ment, and call for re­al­is­tic alternatives

Hayven FrienbyJan 18, 2024, 2:59 PM
5 points
0 comments5 min readEA link

Why Solv­ing Ex­is­ten­tial Risks Re­lated to AI Might Re­quire Rad­i­cally New Approaches

Andy E WilliamsJan 10, 2024, 10:31 AM
1 point
0 comments6 min readEA link

MIRI 2024 Mis­sion and Strat­egy Update

MaloJan 5, 2024, 1:10 AM
154 points
38 comments1 min readEA link

Oxford Biose­cu­rity Group: Ap­pli­ca­tions Open and 2023 Retrospective

Swan 🔸Jan 6, 2024, 6:20 AM
33 points
0 comments11 min readEA link

Are Far-UVC In­ter­ven­tions Over­hyped? [Founders Pledge]

christian.rJan 9, 2024, 5:38 PM
142 points
8 comments61 min readEA link

How Effec­tive Altru­ism im­pacts the views on Ex­is­ten­tial Risks

Daniel PidgornyiFeb 2, 2024, 3:34 PM
1 point
0 comments3 min readEA link

Re­port: Latin Amer­ica and Global Catas­trophic Risks, trans­form­ing risk man­age­ment.

JorgeTorresCJan 9, 2024, 2:13 AM
25 points
1 comment2 min readEA link
(riesgoscatastroficosglobales.com)

AI Devel­op­ment Readi­ness Con­di­tion (AI-DRC): A Call to Action

AI-DRC3Jan 11, 2024, 11:00 AM
−5 points
0 comments2 min readEA link

Towards AI Safety In­fras­truc­ture: Talk & Outline

Paul BricmanJan 7, 2024, 9:35 AM
14 points
1 comment2 min readEA link
(www.youtube.com)

Win­ning Non-Triv­ial Pro­ject: Set­ting a high stan­dard for fron­tier model security

XaviCFJan 8, 2024, 11:20 AM
31 points
0 comments18 min readEA link

[Question] What is the im­pact of chip pro­duc­tion on paus­ing AI de­vel­op­ment?

JohanEAJan 10, 2024, 10:20 PM
7 points
0 comments1 min readEA link

An­nounc­ing Con­fido 2.0: Pro­mot­ing the un­cer­tainty-aware mind­set in orgs

BlankaJan 10, 2024, 11:45 AM
20 points
2 comments2 min readEA link

Was Re­leas­ing Claude-3 Net-Negative

Logan RiggsMar 27, 2024, 5:41 PM
12 points
1 comment4 min readEA link

Why we’re en­ter­ing a new nu­clear age — and how to re­duce the risks (Chris­tian Ruhl on the 80k After Hours Pod­cast)

80000_HoursMar 27, 2024, 7:17 PM
52 points
2 comments7 min readEA link

Water Pre­pared­ness for Disasters

FinMar 8, 2022, 5:03 PM
13 points
0 comments3 min readEA link

The case for a com­mon observatory

Light_of_IlluvatarMar 29, 2024, 10:14 AM
17 points
6 comments5 min readEA link

Prob­lem: Guaran­tee­ing the right to life for ev­ery­one, in the in­finitely long term (part 1)

lamparitaAug 18, 2024, 12:13 PM
2 points
2 comments8 min readEA link

S-risk for Christians

MoneroMar 31, 2024, 8:34 PM
−1 points
5 comments1 min readEA link

Food Pre­pared­ness for Disasters

FinMar 8, 2022, 5:03 PM
20 points
1 comment4 min readEA link

Have your say on the fu­ture of AI reg­u­la­tion: Dead­line ap­proach­ing for your feed­back on UN High-Level Ad­vi­sory Body on AI In­terim Re­port ‘Govern­ing AI for Hu­man­ity’

Deborah W.A. FoulkesMar 29, 2024, 6:37 AM
17 points
1 comment1 min readEA link

Ap­ply to be a Safety Eng­ineer at Lock­heed Martin!

yanni kyriacosMar 31, 2024, 9:01 PM
31 points
5 comments1 min readEA link

Prepar­ing for Power Ou­tages in Disasters

FinMar 8, 2022, 5:04 PM
9 points
0 comments4 min readEA link

AI scal­ing myths

Noah Varley🔸Jun 27, 2024, 8:29 PM
30 points
0 comments1 min readEA link
(open.substack.com)

Re-in­tro­duc­ing Upgrad­able (a.k.a., 700,000 Hours): Life op­ti­miza­tion as a ser­vice for altruists

James NorrisFeb 5, 2025, 4:00 PM
4 points
0 comments1 min readEA link

Thou­sands of mal­i­cious ac­tors on the fu­ture of AI misuse

Zershaaneh QureshiApr 1, 2024, 10:03 AM
75 points
1 comment1 min readEA link

An­nounc­ing the As­so­ci­a­tion for Feast­ing Ahead of Time, a novel GCR philos­o­phy and solution

HughJazzApr 1, 2024, 3:12 PM
18 points
0 comments2 min readEA link

Gover­nance Strate­gies for Dual-Use Re­search of Con­cern: Balanc­ing Scien­tific Progress and Global Security

Diane LetourneurJun 28, 2024, 5:01 PM
9 points
1 comment13 min readEA link

Open As­teroid Im­pact an­nounces lead­er­ship transition

Patrick HoangApr 1, 2024, 12:51 PM
15 points
0 comments1 min readEA link

[Question] Ne­glected Trans­mis­sion-Block­ing In­ter­ven­tions?

christian.rJan 11, 2024, 9:28 PM
12 points
5 comments1 min readEA link

#200 – What su­perfore­cast­ers and ex­perts think about ex­is­ten­tial risks (Ezra Karger on The 80,000 Hours Pod­cast)

80000_HoursSep 6, 2024, 5:53 PM
12 points
2 comments14 min readEA link

Bounty for Ev­i­dence on Some of Pal­isade Re­search’s Beliefs

bwrSep 23, 2024, 8:05 PM
5 points
0 comments1 min readEA link

De­cep­tive Align­ment is <1% Likely by Default

DavidWFeb 21, 2023, 3:07 PM
54 points
26 comments14 min readEA link

‘The AI Dilemma: Growth vs Ex­is­ten­tial Risk’: An Ex­ten­sion for EAs and a Sum­mary for Non-economists

TomHouldenApr 21, 2024, 4:28 PM
65 points
1 comment16 min readEA link

Not un­der­stand­ing sen­tience is a sig­nifi­cant x-risk

Cameron BergJul 1, 2024, 3:38 PM
27 points
8 comments2 min readEA link

My Feed­back to the UN Ad­vi­sory Body on AI

Heramb PodarApr 4, 2024, 11:39 PM
7 points
1 comment4 min readEA link

Think­ing About Propen­sity Evaluations

Maxime Riché 🔸Aug 19, 2024, 9:24 AM
12 points
1 comment27 min readEA link

A Tax­on­omy Of AI Sys­tem Evaluations

Maxime Riché 🔸Aug 19, 2024, 9:08 AM
8 points
0 comments14 min readEA link

How to ne­glect the long term (Hay­den Wilk­in­son)

Global Priorities InstituteOct 13, 2023, 11:09 AM
21 points
0 comments5 min readEA link
(globalprioritiesinstitute.org)

Arkose: Or­ga­ni­za­tional Up­dates & Ways to Get Involved

ArkoseAug 1, 2024, 1:03 PM
28 points
1 comment1 min readEA link

In­ves­ti­gat­ing the role of agency in AI x-risk

Corin KatzkeApr 8, 2024, 3:12 PM
22 points
3 comments40 min readEA link
(www.convergenceanalysis.org)

How long un­til re­cov­ery af­ter col­lapse?

FJehnSep 24, 2024, 8:43 AM
12 points
3 comments7 min readEA link
(existentialcrunch.substack.com)

The Bunker Fallacy

SimonKSApr 10, 2024, 8:33 AM
12 points
11 comments6 min readEA link

[Question] Should Open Philan­thropy build de­tailed quan­ti­ta­tive mod­els which es­ti­mate global catas­trophic risk?

Vasco Grilo🔸Apr 10, 2024, 5:17 PM
11 points
4 comments1 min readEA link

[Link Post] Elon Musk wants to colonize Mars. It’s a dis­as­trous idea

BrianKApr 11, 2024, 12:58 PM
10 points
14 comments1 min readEA link
(www.fastcompany.com)

A Case for Nuanced Risk Assessment

Molly HickmanAug 20, 2024, 9:23 AM
25 points
3 comments6 min readEA link

In­creas­ing risks of GCRs due to cli­mate change

Leonora_CamnerApr 12, 2024, 3:57 PM
19 points
3 comments1 min readEA link

Space set­tle­ment and the time of per­ils: a cri­tique of Thorstad

Matthew RendallApr 14, 2024, 3:29 PM
46 points
10 comments4 min readEA link

Reimag­in­ing Malev­olence: A Primer on Malev­olence and Im­pli­ca­tions for EA

Kenneth_DiaoApr 11, 2024, 12:50 PM
28 points
3 comments44 min readEA link

Ap­pli­ca­tions Open for the Next Cy­cle of Oxford Biose­cu­rity Group

Lin BLApr 14, 2024, 8:03 AM
25 points
1 comment2 min readEA link

A sim­ple ar­gu­ment for the bad­ness of hu­man extinction

Matthew RendallApr 17, 2024, 10:35 AM
4 points
14 comments2 min readEA link

An­nounc­ing: EA Fo­rum Pod­cast – Au­dio nar­ra­tions of EA Fo­rum posts

peterhartreeDec 5, 2022, 9:50 PM
155 points
33 comments2 min readEA link

Notes on new UK AISI minister

PseudaemoniaJul 5, 2024, 7:50 PM
92 points
0 comments1 min readEA link

Part­ner with Us: Ad­vanc­ing Global Catas­trophic and AI Risk Re­search at Plateau State Univer­sity, Bokkos

emmannaemekaOct 10, 2024, 1:19 AM
15 points
0 comments2 min readEA link

How to re­duce risks re­lated to con­scious AI: A user guide [Con­scious AI & Public Per­cep­tion]

Jay LuongJul 5, 2024, 2:19 PM
9 points
1 comment15 min readEA link

The case for con­scious AI: Clear­ing the record [AI Con­scious­ness & Public Per­cep­tion]

Jay LuongJul 5, 2024, 8:29 PM
3 points
7 comments8 min readEA link

Ap­ply to Aether—In­de­pen­dent LLM Agent Safety Re­search Group

RohanSAug 21, 2024, 9:40 AM
47 points
13 comments8 min readEA link

We’re hiring a Writer to join our team at Our World in Data

Charlie GiattinoApr 18, 2024, 8:50 PM
29 points
0 comments1 min readEA link
(ourworldindata.org)

[Question] AI con­scious­ness & moral sta­tus: What do the ex­perts think?

Jay LuongJul 6, 2024, 3:27 PM
0 points
3 comments1 min readEA link

I cre­ated an ASI Align­ment Tier List

TimeGoatApr 22, 2024, 12:14 PM
0 points
0 comments1 min readEA link

Ex­tinc­tion risk and longter­mism: a broader cri­tique of Thorstad

Matthew RendallApr 21, 2024, 1:55 PM
31 points
5 comments3 min readEA link

AI Reg­u­la­tion is Unsafe

Maxwell TabarrokApr 22, 2024, 4:38 PM
19 points
8 comments4 min readEA link
(www.maximum-progress.com)

New org an­nounce­ment: Would your pro­ject benefit from OSINT, satel­lite imagery anal­y­sis, or in­ter­na­tional se­cu­rity-re­lated re­search sup­port?

ChristinaApr 22, 2024, 6:02 PM
54 points
2 comments1 min readEA link

Ba­sic game the­ory and how you can do a bunch of good in ~3 Hours. (de­vel­op­ing ar­ti­cle.)

No longer EA-affiliatedOct 10, 2024, 4:30 AM
−3 points
2 comments7 min readEA link

Nav­i­gat­ing men­tal health challenges in global catas­trophic risk fields

Ewelina_TurOct 15, 2024, 2:46 PM
40 points
1 comment17 min readEA link

An AI Man­hat­tan Pro­ject is Not Inevitable

Maxwell TabarrokJul 6, 2024, 4:43 PM
53 points
2 comments4 min readEA link
(www.maximum-progress.com)

[Question] How bad would AI progress need to be for us to think gen­eral tech­nolog­i­cal progress is also bad?

Jim BuhlerJul 6, 2024, 6:44 PM
10 points
0 comments1 min readEA link

Ex­plor­ing Key Cases with the Port­fo­lio Builder

Hayley ClatterbuckJul 10, 2024, 12:07 PM
73 points
1 comment6 min readEA link

Re­silience to Nu­clear & Vol­canic Winter

Stan PinsentJul 9, 2024, 10:39 AM
96 points
14 comments3 min readEA link

AI-nu­clear in­te­gra­tion: ev­i­dence of au­toma­tion bias from hu­mans and LLMs [re­search sum­mary]

TaoApr 27, 2024, 9:59 PM
17 points
2 comments12 min readEA link

To Be Born in a Bag

Niko McCartyOct 7, 2024, 12:39 PM
18 points
1 comment16 min readEA link
(www.asimov.press)

The Guardian calls EA “cultish” and ac­cuses the late FHI of “Eu­gen­ics on Steroids”

Damin Curtis🔹Apr 28, 2024, 1:44 PM
13 points
12 comments1 min readEA link
(www.theguardian.com)

Max Teg­mark — The AGI En­tente Delusion

Matrice JacobineOct 13, 2024, 5:42 PM
0 points
1 comment1 min readEA link
(www.lesswrong.com)

80,000 Hours is shift­ing its strate­gic ap­proach to fo­cus more on AGI

80000_HoursMar 20, 2025, 11:24 AM
156 points
84 comments8 min readEA link

I read ev­ery ma­jor AI lab’s safety plan so you don’t have to

sarahhwDec 16, 2024, 2:12 PM
67 points
2 comments11 min readEA link
(longerramblings.substack.com)

A be­gin­ner’s in­tro­duc­tion to AI-driven biorisk: Large Lan­guage Models, Biolog­i­cal De­sign Tools, In­for­ma­tion Hazards, and Biosecurity

NatKiiluMay 3, 2024, 3:49 PM
6 points
1 comment16 min readEA link

On Ar­tifi­cial Wisdom

Jordan ArelJul 11, 2024, 7:14 AM
22 points
1 comment14 min readEA link

AI and Chem­i­cal, Biolog­i­cal, Ra­diolog­i­cal, & Nu­clear Hazards: A Reg­u­la­tory Review

Elliot MckernonMay 10, 2024, 8:41 AM
8 points
1 comment1 min readEA link

The Failed Strat­egy of Ar­tifi­cial In­tel­li­gence Doomers

yhoisethFeb 5, 2025, 7:34 PM
12 points
2 comments1 min readEA link
(letter.palladiummag.com)

Last days to ap­ply to EAGxLATAM 2024

Daniela TiznadoJan 17, 2024, 8:24 PM
16 points
0 comments1 min readEA link

Vague jus­tifi­ca­tions for longter­mist as­sump­tions?

Venky1024May 11, 2024, 9:20 AM
30 points
9 comments7 min readEA link

SB 1047 Simplified

Gabe KSep 25, 2024, 12:00 PM
14 points
0 comments4 min readEA link

The Align­ment Prob­lem No One is Talk­ing About

Non-zero-sum JamesMay 14, 2024, 10:42 AM
5 points
0 comments2 min readEA link

An­nounc­ing the AI Safety Sum­mit Talks with Yoshua Bengio

OttoMay 14, 2024, 12:49 PM
33 points
1 comment1 min readEA link

#187 – How re­search­ing his book turned him from a space op­ti­mist into a “space bas­tard” (Zach Wein­er­smith on the 80,000 Hours Pod­cast)

80000_HoursMay 15, 2024, 2:03 PM
28 points
1 comment18 min readEA link

[Question] Why hasn’t there been any sig­nifi­cant AI protest

sammyboizMay 17, 2024, 2:59 AM
21 points
14 comments1 min readEA link

Do you want to do a de­bate on youtube? I’m look­ing for po­lite, truth-seek­ing par­ti­ci­pants.

Nathan YoungOct 10, 2024, 9:32 AM
19 points
3 comments1 min readEA link

That Alien Mes­sage—The Animation

WriterSep 7, 2024, 2:53 PM
43 points
6 comments1 min readEA link
(youtu.be)

De­sign­ing Ar­tifi­cial Wis­dom: The Wise Work­flow Re­search Organization

Jordan ArelJul 12, 2024, 6:57 AM
14 points
1 comment9 min readEA link

Sen­si­tive as­sump­tions in longter­mist modeling

Owen MurphySep 18, 2024, 1:39 AM
82 points
12 comments7 min readEA link
(ohmurphy.substack.com)

An economist’s per­spec­tive on AI safety

David StinsonJun 7, 2024, 7:55 AM
7 points
1 comment9 min readEA link

Don’t panic: 90% of EAs are good people

Closed Limelike CurvesMay 19, 2024, 4:37 AM
22 points
13 comments2 min readEA link

Fill out this cen­sus of ev­ery­one in­ter­ested in re­duc­ing catas­trophic AI risks

Alex HTMay 18, 2024, 3:53 PM
105 points
1 comment1 min readEA link

#192 – What would hap­pen if North Korea launched a nu­clear weapon at the US (An­nie Ja­cob­sen on the 80,000 Hours Pod­cast)

80000_HoursJul 12, 2024, 7:38 PM
13 points
1 comment12 min readEA link

Doubt­ing Deter­rence by Denial

C.K.Mar 20, 2025, 3:55 PM
4 points
1 comment6 min readEA link
(conradkunadu.substack.com)

De­sign­ing Ar­tifi­cial Wis­dom: GitWise and AlphaWise

Jordan ArelJul 13, 2024, 12:04 AM
6 points
1 comment7 min readEA link

“If we go ex­tinct due to mis­al­igned AI, at least na­ture will con­tinue, right? … right?”

plexMay 18, 2024, 3:06 PM
13 points
10 comments1 min readEA link
(aisafety.info)

Los­ing faith in big tech altruism

sammyboizMay 22, 2024, 4:49 AM
7 points
1 comment1 min readEA link

[Question] Com­mon re­but­tal to “paus­ing” or reg­u­lat­ing AI

sammyboizMay 22, 2024, 4:21 AM
4 points
2 comments1 min readEA link

De­sign­ing Ar­tifi­cial Wis­dom: De­ci­sion Fore­cast­ing AI & Futarchy

Jordan ArelJul 14, 2024, 5:10 AM
5 points
1 comment6 min readEA link

A short sum­mary of what I have been post­ing about on LessWrong

ThomasCederborgSep 10, 2024, 12:26 PM
3 points
0 comments2 min readEA link

Ex­plo­ra­tion of Foods High in Vi­tamin D as a Die­tary Strat­egy in the Event of Abrupt Sun­light Reduction

Juliana AlvarezSep 19, 2024, 3:28 PM
13 points
6 comments20 min readEA link

Ex­plor­ing Blood-Based Bio­surveillance, Part 2: Sam­pling Strate­gies within the US Blood Supply

ljustenSep 10, 2024, 4:39 PM
14 points
0 comments13 min readEA link
(naobservatory.org)

Linkpost: Epis­tle to the Successors

ukc10014Jul 14, 2024, 8:07 PM
4 points
0 comments1 min readEA link
(ukc10014.github.io)

Big Pic­ture AI Safety: Introduction

EuanMcLeanMay 23, 2024, 11:28 AM
32 points
3 comments5 min readEA link

What will the first hu­man-level AI look like, and how might things go wrong?

EuanMcLeanMay 23, 2024, 11:28 AM
12 points
1 comment15 min readEA link

#188 – On whether sci­ence is good (Matt Clancy on the 80,000 Hours Pod­cast)

80000_HoursMay 24, 2024, 3:04 PM
13 points
0 comments17 min readEA link

What should AI safety be try­ing to achieve?

EuanMcLeanMay 23, 2024, 11:28 AM
13 points
1 comment13 min readEA link

The ne­ces­sity of “Guardian AI” and two con­di­tions for its achievement

ProicaMay 28, 2024, 11:42 AM
1 point
1 comment15 min readEA link

AI Safety Seed Fund­ing Net­work—Join as a Donor or Investor

Alexandra BosDec 16, 2024, 7:30 PM
45 points
1 comment2 min readEA link

What mis­takes has the AI safety move­ment made?

EuanMcLeanMay 23, 2024, 11:29 AM
61 points
3 comments12 min readEA link

Filling the Void: A Com­pre­hen­sive Database for AI Risks Materials

J.A.M.May 28, 2024, 4:03 PM
10 points
1 comment4 min readEA link

Ad­dress­ing cli­mate change?

Donald ZepedaJul 15, 2024, 8:32 PM
3 points
1 comment1 min readEA link

AI and the feel­ing of liv­ing in two worlds

michelOct 10, 2024, 5:51 PM
40 points
3 comments7 min readEA link

[Question] Thoughts on this $16.7M “AI safety” grant?

defun 🔸Jul 16, 2024, 9:16 AM
61 points
24 comments1 min readEA link

New Zealand pro­poses reg­u­la­tory re­quire­ments for nu­cleic acid syn­the­sis screening

Policy AotearoaFeb 6, 2025, 1:30 PM
13 points
1 comment1 min readEA link

My (cur­rent) model of what an AI gov­er­nance re­searcher does

JohanEAAug 26, 2024, 11:22 AM
7 points
1 comment5 min readEA link

Map­ping How Alli­ances, Ac­qui­si­tions, and An­titrust are Shap­ing the Fron­tier AI Industry

t6aguirreJun 3, 2024, 9:43 AM
24 points
1 comment2 min readEA link

A nec­es­sary Mem­brane for­mal­ism feature

ThomasCederborgSep 10, 2024, 9:03 PM
1 point
0 comments11 min readEA link

Ge­offrey Hin­ton on the Past, Pre­sent, and Fu­ture of AI

Stephen McAleeseOct 12, 2024, 4:41 PM
5 points
1 comment1 min readEA link

Every­thing’s An Emergency

Bentham's BulldogMar 20, 2025, 5:11 PM
22 points
1 comment2 min readEA link

An­nounc­ing Open Philan­thropy’s AI gov­er­nance and policy RFP

JulianHazellJul 17, 2024, 12:25 AM
73 points
2 comments1 min readEA link
(www.openphilanthropy.org)

Database of re­search pro­jects for vol­un­teers in food se­cu­rity dur­ing global catas­tro­phes (ALLFED)

JuanGarciaSep 26, 2024, 7:39 PM
47 points
1 comment1 min readEA link

A longter­mist case for di­rected panspermia

AhrenbachJan 21, 2024, 7:29 PM
1 point
1 comment4 min readEA link

1) Pan­demics are a Solv­able Problem

PandemicRiskManJan 26, 2024, 7:48 PM
−9 points
2 comments5 min readEA link

Ex­pres­sion of In­ter­est: Direc­tor of Oper­a­tions at the Cen­ter on Long-term Risk

Amrit Sidhu-Brar 🔸Jan 25, 2024, 6:43 PM
55 points
0 comments6 min readEA link

2) Pan­demics Are Solved With Risk Man­age­ment, Not Science

PandemicRiskManJan 31, 2024, 3:51 PM
−9 points
0 comments7 min readEA link

1st Alinha Hacka Re­cap: Reflect­ing on the Brazilian AI Align­ment Hackathon

Thiago USPJan 31, 2024, 10:38 AM
7 points
0 comments2 min readEA link

Gaia Net­work: An Illus­trated Primer

Roman LeventovJan 26, 2024, 11:55 AM
4 points
4 comments15 min readEA link

Musk’s Ques­tion­able Ex­is­ten­tial Risk Rhetoric Amidst Le­gal Challenges

MJan 31, 2024, 7:40 AM
5 points
2 comments1 min readEA link

Sum­mary: Max­i­mal Clue­less­ness (An­dreas Mo­gensen)

Noah Varley🔸Feb 6, 2024, 2:49 PM
39 points
17 comments4 min readEA link

The­o­ret­i­cal New Tech­nol­ogy for En­ergy Generation

GraviticEngine7Feb 7, 2025, 1:17 PM
−1 points
2 comments9 min readEA link

So­bre pen­sar no pior (On thinking about the worst)

RamiroJan 25, 2024, 8:45 PM
6 points
1 comment4 min readEA link

EA Nether­lands’ An­nual Strat­egy for 2024

James HerbertJun 5, 2024, 3:07 PM
40 points
4 comments6 min readEA link

Ex­plor­ing Blood-Based Bio­surveillance, Part 1: Blood as a Sam­ple Type

ljustenJul 18, 2024, 1:10 PM
28 points
2 comments10 min readEA link
(naobservatory.org)

Shut­ting down all com­pet­ing AI pro­jects might not buy a lot of time due to In­ter­nal Time Pressure

ThomasCederborgOct 3, 2024, 12:05 AM
6 points
1 comment12 min readEA link

De­bat­ing AI’s Mo­ral Sta­tus: The Most Hu­mane and Silliest Thing Hu­mans Do(?)

Soe LinSep 29, 2024, 5:01 AM
5 points
5 comments3 min readEA link

New Book: ‘Nexus’ by Yu­val Noah Harari

timfarkasOct 3, 2024, 1:54 PM
14 points
2 comments5 min readEA link

Eric Sch­midt’s blueprint for US tech­nol­ogy strategy

OscarD🔸Oct 15, 2024, 7:54 PM
29 points
4 comments9 min readEA link

AIS Hun­gary is hiring a part-time Tech­ni­cal Lead! (Dead­line: Dec 31st)

gergoDec 17, 2024, 2:08 PM
9 points
0 comments2 min readEA link

The ELYSIUM Proposal

RokoOct 16, 2024, 2:14 AM
−10 points
0 comments1 min readEA link
(transhumanaxiology.substack.com)

Prize Money ($100) for Valid Tech­ni­cal Ob­jec­tions to Icesteading

RokoDec 18, 2024, 11:40 PM
−2 points
2 comments1 min readEA link
(twitter.com)

Ex­ec­u­tive Direc­tor for AIS France—Ex­pres­sion of interest

gergoDec 19, 2024, 8:11 AM
33 points
0 comments4 min readEA link

BERI is Hiring: My Ex­pe­rience as Deputy Direc­tor and Why You Should Apply

elizabethcooperOct 17, 2024, 11:59 AM
27 points
1 comment3 min readEA link

What AI com­pa­nies should do: Some rough ideas

Zach Stein-PerlmanOct 21, 2024, 2:00 PM
14 points
1 comment1 min readEA link

Bar­gain­ing among worldviews

Hayley ClatterbuckOct 18, 2024, 6:32 PM
57 points
5 comments12 min readEA link

QB: How Much do Fu­ture Gen­er­a­tions Mat­ter?

Richard Y Chappell🔸Oct 18, 2024, 3:22 PM
26 points
2 comments5 min readEA link
(www.goodthoughts.blog)

A Rocket–In­ter­pretabil­ity Analogy

plexOct 21, 2024, 1:55 PM
13 points
1 comment1 min readEA link

How Likely Are Var­i­ous Pre­cur­sors of Ex­is­ten­tial Risk?

NunoSempereOct 22, 2024, 4:51 PM
61 points
7 comments15 min readEA link
(samotsvety.org)

Tech­ni­cal Risks of (Lethal) Au­tonomous Weapons Systems

Heramb PodarOct 23, 2024, 8:43 PM
5 points
0 comments1 min readEA link
(www.lesswrong.com)

The Science of AI Is Too Im­por­tant to Be Left to the Scientists

AndrewDorisOct 23, 2024, 7:10 PM
3 points
0 comments1 min readEA link
(foreignpolicy.com)

[Question] Ur­gency in the ITN framework

Shaïman ThürlerOct 24, 2024, 3:02 PM
11 points
5 comments1 min readEA link

Towards the Oper­a­tional­iza­tion of Philos­o­phy & Wisdom

Thane RuthenisOct 28, 2024, 7:45 PM
1 point
1 comment1 min readEA link
(aiimpacts.org)

Moder­ately Skep­ti­cal of “Risks of Mir­ror Biol­ogy”

DavidmanheimDec 20, 2024, 12:57 PM
15 points
1 comment1 min readEA link
(substack.com)

Point-by-point re­ply to Yud­kowsky on UFOs

Magnus VindingDec 19, 2024, 9:24 PM
4 points
0 comments9 min readEA link

Can Knowl­edge Hurt You? The Dangers of In­fo­haz­ards (and Exfo­haz­ards)

A.G.G. LiuFeb 8, 2025, 3:51 PM
12 points
0 comments1 min readEA link
(www.youtube.com)

(Cross­post) Ar­ti­cle on Po­lari­sa­tion Against Longter­mism and Mis­ap­ply­ing Mo­ral Philosophy

Danny WardleMar 22, 2025, 4:42 AM
2 points
3 comments6 min readEA link
(www.pluralityofwords.com)

Sen­tinel min­utes #6/​2025: Power of the purse, D1.1 H5N1 flu var­i­ant, Ay­a­tol­lah against ne­go­ti­a­tions with Trump

NunoSempereFeb 10, 2025, 5:23 PM
39 points
2 comments7 min readEA link
(blog.sentinel-team.org)

Pub­lished re­port: Path­ways to short TAI timelines

Zershaaneh QureshiFeb 20, 2025, 10:10 PM
46 points
2 comments17 min readEA link
(www.convergenceanalysis.org)

The stan­dard case for de­lay­ing AI ap­pears to rest on non-util­i­tar­ian assumptions

Matthew_BarnettFeb 11, 2025, 4:04 AM
15 points
55 comments10 min readEA link

Oxford Biose­cu­rity Group: Fundrais­ing and Plans for Early 2025

Lin BLDec 20, 2024, 8:56 PM
33 points
0 comments2 min readEA link

LASST’s Pathogen Re­search Ami­cus Brief Project

Tyler WhitmerDec 23, 2024, 4:20 PM
13 points
1 comment6 min readEA link

What is com­pute gov­er­nance?

Vishakha AgrawalDec 23, 2024, 6:45 AM
5 points
0 comments2 min readEA link
(aisafety.info)

What is it to solve the al­ign­ment prob­lem?

Joe_CarlsmithFeb 13, 2025, 6:42 PM
25 points
1 comment1 min readEA link
(joecarlsmith.substack.com)

How do we solve the al­ign­ment prob­lem?

Joe_CarlsmithFeb 13, 2025, 6:27 PM
28 points
1 comment1 min readEA link
(joecarlsmith.substack.com)

Ex­plor­ing Blood-Based Bio­surveillance, Part 3: The Blood Virome

ljustenFeb 13, 2025, 5:51 PM
27 points
1 comment14 min readEA link
(naobservatory.org)

AI Safety Col­lab 2025 - Lo­cal Or­ga­nizer Sign-ups Open

Evander H. 🔸Feb 12, 2025, 11:27 AM
12 points
0 comments1 min readEA link

[Ap­ply] What I Love About AI Safety Field­build­ing at Cam­bridge (& We’re Hiring for a Lead­er­ship Role)

Harrison GietzFeb 14, 2025, 5:41 PM
14 points
0 comments3 min readEA link

The cur­rent AI strate­gic land­scape: one bear’s perspective

Matrice JacobineFeb 15, 2025, 9:49 AM
6 points
0 comments2 min readEA link
(philosophybear.substack.com)

Co­op­er­a­tion for AI safety must tran­scend geopoli­ti­cal interference

Matrice JacobineFeb 16, 2025, 6:18 PM
9 points
0 comments1 min readEA link
(www.scmp.com)

What are some good books about AI safety?

Vishakha AgrawalFeb 17, 2025, 11:54 AM
7 points
0 comments3 min readEA link
(aisafety.info)

Prologue | A Fire Upon the Deep | Ver­nor Vinge

semicycleFeb 17, 2025, 4:13 AM
5 points
1 comment1 min readEA link
(www.baen.com)

When do ex­perts think hu­man-level AI will be cre­ated?

Vishakha AgrawalJan 2, 2025, 11:17 PM
36 points
4 comments2 min readEA link
(aisafety.info)

Talk: Longter­mism, the “Spirit” of Digi­tal Capitalism

ludwigbaldJan 5, 2025, 2:27 PM
−2 points
1 comment1 min readEA link
(media.ccc.de)

[Question] Could hu­man­ity be saved by send­ing peo­ple to other planets (like Mars)?

lamparitaFeb 16, 2025, 7:40 PM
3 points
2 comments1 min readEA link

#212 – Why tech­nol­ogy is un­stop­pable & how to shape AI de­vel­op­ment any­way (Allan Dafoe on The 80,000 Hours Pod­cast)

80000_HoursFeb 17, 2025, 4:38 PM
16 points
0 comments19 min readEA link

There are a lot of up­com­ing re­treats/​con­fer­ences be­tween March and July (2025)

gergoFeb 18, 2025, 9:28 AM
17 points
2 comments1 min readEA link

AIS Ber­lin, events, op­por­tu­ni­ties and the flipped game­board—Field­builders Newslet­ter, Fe­bru­ary 2025

gergoFeb 17, 2025, 2:13 PM
7 points
0 comments3 min readEA link

Prevent­ing Catas­trophic Pan­demics – 80,000 Hours

EA HandbookFeb 18, 2025, 9:33 PM
6 points
0 comments1 min readEA link

When should we worry about AI power-seek­ing?

Joe_CarlsmithFeb 19, 2025, 7:44 PM
21 points
2 comments1 min readEA link
(joecarlsmith.substack.com)

Giv­ing What We Can global catas­trophic risk profile

EA HandbookFeb 18, 2025, 9:39 PM
5 points
0 comments1 min readEA link

Stable to­tal­i­tar­i­anism: an overview

80000_HoursOct 29, 2024, 4:07 PM
36 points
1 comment20 min readEA link
(80000hours.org)

Ex­plore jobs in biose­cu­rity, nu­clear se­cu­rity, and cli­mate change

EA HandbookFeb 18, 2025, 9:42 PM
5 points
0 comments1 min readEA link

What are poly­se­man­tic neu­rons?

Vishakha AgrawalJan 8, 2025, 7:39 AM
4 points
0 comments2 min readEA link
(aisafety.info)

Pos­si­ble im­por­tance of Effec­tive Altru­ism in the civ­i­liz­ing process

idea21Jan 8, 2025, 12:56 AM
1 point
0 comments1 min readEA link

Are AI safe­ty­ists cry­ing wolf?

sarahhwJan 8, 2025, 8:54 PM
61 points
21 comments16 min readEA link
(longerramblings.substack.com)

Tar­bell Fel­low­ship 2025 - Ap­pli­ca­tions Open (AI Jour­nal­ism)

Tarbell Center for AI JournalismJan 8, 2025, 3:25 PM
62 points
0 comments1 min readEA link

US AI Safety In­sti­tute will be ‘gut­ted,’ Ax­ios reports

Matrice JacobineFeb 20, 2025, 2:40 PM
12 points
1 comment1 min readEA link
(www.zdnet.com)

Min­i­miz­ing suffer­ing & ASI xrisk through brain digitization

Amy Louise JohnsonFeb 20, 2025, 9:08 PM
1 point
0 comments1 min readEA link

Longter­mist im­pli­ca­tions of aliens Space-Far­ing Civ­i­liza­tions—Introduction

Maxime Riché 🔸Feb 21, 2025, 12:07 PM
43 points
12 comments6 min readEA link

How do fic­tional sto­ries illus­trate AI mis­al­ign­ment?

Vishakha AgrawalJan 15, 2025, 6:16 AM
4 points
0 comments2 min readEA link
(aisafety.info)

Is Ge­netic Code Swap­ping as risky as it seems?

Invert_DOG_about_centre_OJan 12, 2025, 6:38 PM
23 points
2 comments10 min readEA link

‘Now Is the Time of Mon­sters’

Aaron GoldzimerJan 12, 2025, 11:31 PM
25 points
0 comments1 min readEA link
(www.nytimes.com)

Disen­tan­gling “Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing”

LizkaSep 13, 2021, 11:50 PM
96 points
16 comments19 min readEA link

Space gov­er­nance—prob­lem profile

finmMay 8, 2022, 5:16 PM
65 points
11 comments15 min readEA link

On longter­mism, Bayesi­anism, and the dooms­day argument

iporphyrySep 1, 2022, 12:27 AM
30 points
5 comments13 min readEA link

EA should help Tyler Cowen pub­lish his drafted book in China

Matt BrooksJan 14, 2023, 9:10 PM
38 points
8 comments3 min readEA link

Juan Gar­cía Martínez: In­dus­trial al­ter­na­tive foods for global catas­trophic risks

EA GlobalNov 21, 2020, 8:12 AM
12 points
0 comments1 min readEA link
(www.youtube.com)

Base Rates on United States Regime Collapse

AppliedDivinityStudiesApr 5, 2021, 5:14 PM
15 points
3 comments9 min readEA link

[Question] Disaster Relief?

Hira KhanAug 5, 2022, 8:57 PM
1 point
1 comment1 min readEA link

Katja Grace on Slow­ing Down AI, AI Ex­pert Sur­veys And Es­ti­mat­ing AI Risk

Michaël TrazziSep 16, 2022, 6:00 PM
48 points
6 comments3 min readEA link
(theinsideview.ai)

Sur­vey on AI ex­is­ten­tial risk scenarios

Sam ClarkeJun 8, 2021, 5:12 PM
154 points
11 comments6 min readEA link

A rel­a­tively athe­o­ret­i­cal per­spec­tive on as­tro­nom­i­cal waste

Nick_BecksteadAug 6, 2014, 12:55 AM
9 points
8 comments8 min readEA link

An­nounc­ing Me­tac­u­lus’s ‘Red Lines in Ukraine’ Fore­cast­ing Project

christianOct 21, 2022, 10:13 PM
17 points
0 comments1 min readEA link
(www.metaculus.com)

Noth­ing Wrong With AI Weapons

kbogAug 28, 2017, 2:52 AM
16 points
22 comments7 min readEA link

A “Solip­sis­tic” Repug­nant Conclusion

RamiroJul 21, 2022, 4:06 PM
13 points
0 comments6 min readEA link

Sum­ma­riz­ing the com­ments on William MacAskill’s NYT opinion piece on longtermism

WestSep 21, 2022, 5:46 PM
106 points
11 comments2 min readEA link

An epistemic cri­tique of longtermism

Nathan_BarnardJul 10, 2022, 10:59 AM
12 points
4 comments9 min readEA link

On­line Work­ing /​ Com­mu­nity Meetup for the Abo­li­tion of Suffering

Ruth_SeleoMay 31, 2022, 9:16 AM
7 points
5 comments1 min readEA link

Event on Oct 9: Fore­cast­ing Nu­clear Risk with Re­think Pri­ori­ties’ Michael Aird

MichaelA🔸Sep 29, 2021, 5:45 PM
24 points
3 comments2 min readEA link
(www.eventbrite.com)

The case for de­lay­ing so­lar geo­eng­ineer­ing research

John G. HalsteadMar 23, 2019, 3:26 PM
53 points
22 comments5 min readEA link

In­sti­tu­tions Can­not Res­train Dark-Triad AI Exploitation

RemmeltDec 27, 2022, 10:34 AM
8 points
0 comments1 min readEA link

Me­tac­u­lus Year in Re­view: 2022

christianJan 6, 2023, 1:23 AM
25 points
2 comments4 min readEA link
(metaculus.medium.com)

[Question] What are effec­tive ways to help Ukraini­ans right now?

Manuel AllgaierFeb 24, 2022, 10:20 PM
130 points
85 comments1 min readEA link

[Question] How can we de­crease the short-term prob­a­bil­ity of the nu­clear war?

Just LearningMar 1, 2022, 3:24 AM
18 points
0 comments1 min readEA link

[Linkpost] Nick Bostrom’s “Apol­ogy for an Old Email”

pseudonymJan 12, 2023, 4:55 AM
16 points
96 comments1 min readEA link
(nickbostrom.com)

The Com­pendium, A full ar­gu­ment about ex­tinc­tion risk from AGI

adamShimiOct 31, 2024, 12:02 PM
9 points
1 comment2 min readEA link
(www.thecompendium.ai)

Part 1/​4: A Case for Abolition

Dhruv MakwanaJan 11, 2023, 1:46 PM
33 points
7 comments3 min readEA link

The ap­pli­ca­bil­ity of transsen­tien­tist crit­i­cal path analysis

Peter SøllingAug 11, 2020, 11:26 AM
0 points
2 comments32 min readEA link
(www.optimalaltruism.com)

Agnes Cal­lard on our fu­ture, the hu­man quest, and find­ing purpose

Tobias HäberliMar 22, 2023, 12:29 PM
3 points
0 comments21 min readEA link

Pos­si­ble way of re­duc­ing great power war prob­a­bil­ity?

Denkenberger🔸Nov 28, 2019, 4:27 AM
33 points
2 comments2 min readEA link

Nu­clear Pre­pared­ness Guide

FinMar 8, 2022, 5:04 PM
106 points
13 comments11 min readEA link

AI Ver­sion of an Am­bi­tious Pro­posal to Effec­tively Ad­dress World Suffering

RobertDaoustJan 18, 2025, 3:33 PM
−12 points
2 comments3 min readEA link
(forum.effectivealtruism.org)

[Question] How con­fi­dent are you that it’s prefer­able for Amer­ica to de­velop AGI be­fore China does?

ScienceMon🔸Feb 22, 2025, 1:37 PM
200 points
47 comments1 min readEA link

Scal­ing Wargam­ing for Global Catas­trophic Risks with AI

raiJan 18, 2025, 3:07 PM
73 points
1 comment4 min readEA link
(blog.sentinel-team.org)

Space Gover­nance is not a cause area

JordanStoneFeb 24, 2025, 2:47 PM
24 points
10 comments5 min readEA link

Na­tion­wide Ac­tion Work­shop: Con­tact Congress about AI Safety!

Felix De SimoneFeb 24, 2025, 4:14 PM
5 points
0 comments1 min readEA link
(www.zeffy.com)

What Does an ASI Poli­ti­cal Ecol­ogy Mean for Hu­man Sur­vival?

Nathan SidneyFeb 23, 2025, 8:53 AM
7 points
3 comments1 min readEA link

What We Can Do to Prevent Ex­tinc­tion by AI

Joe RogeroFeb 24, 2025, 5:15 PM
22 points
2 comments11 min readEA link

New Re­port: Multi-Agent Risks from Ad­vanced AI

Lewis HammondFeb 23, 2025, 12:32 AM
39 points
3 comments2 min readEA link
(www.cooperativeai.com)

The sec­ond bit­ter les­son — there’s a fun­da­men­tal prob­lem with al­ign­ing AI

aelwoodJan 19, 2025, 6:48 PM
4 points
1 comment5 min readEA link
(pursuingreality.substack.com)

Why AI Safety Camp strug­gles with fundrais­ing (FBB #2)

gergoJan 21, 2025, 5:25 PM
63 points
10 comments7 min readEA link

Google AI Ac­cel­er­a­tor Open Call

Rochelle HarrisJan 22, 2025, 4:50 PM
10 points
1 comment1 min readEA link

Sen­tinel Fund­ing Memo — Miti­gat­ing GCRs with Fore­cast­ing & Emer­gency Response

Saul MunnNov 6, 2024, 1:57 AM
47 points
5 comments13 min readEA link

Should AI X-Risk Wor­ri­ers Short the Mar­ket?

postlibertarianNov 4, 2024, 4:16 PM
14 points
1 comment6 min readEA link

Freak­ing out about x-risk doesn’t help; set­tle in for the long war

Holly Elmore ⏸️ 🔸Nov 2, 2024, 12:00 AM
68 points
2 comments2 min readEA link

Patch­ing ~All Se­cu­rity-Rele­vant Open-Source Soft­ware?

niplavFeb 25, 2025, 9:35 PM
35 points
3 comments2 min readEA link

What are the differ­ences be­tween AGI, trans­for­ma­tive AI, and su­per­in­tel­li­gence?

Vishakha AgrawalJan 23, 2025, 10:11 AM
12 points
0 comments3 min readEA link
(aisafety.info)

Feed­back wanted! On script for an up­com­ing ~12 minute Rob Miles video on AI x-risk.

melissasamworthJan 23, 2025, 9:46 PM
25 points
0 comments1 min readEA link

AI com­pa­nies are un­likely to make high-as­surance safety cases if timelines are short

Ryan GreenblattJan 23, 2025, 6:41 PM
45 points
1 comment1 min readEA link

The GDM AGI Safety+Align­ment Team is Hiring for Ap­plied In­ter­pretabil­ity Research

Arthur ConmyFeb 25, 2025, 10:38 PM
3 points
0 comments7 min readEA link

An­nounc­ing Biose­cu­rity Fore­cast­ing Group—Ap­ply Now

Lin BLJan 23, 2025, 4:52 PM
21 points
0 comments1 min readEA link

Time to Think about ASI Con­sti­tu­tions?

ukc10014Jan 27, 2025, 9:28 AM
20 points
0 comments12 min readEA link

[Question] Share AI Safety Ideas: Both Crazy and Not

ankFeb 26, 2025, 1:09 PM
4 points
15 comments1 min readEA link

Space-Far­ing Civ­i­liza­tion den­sity es­ti­mates and mod­els—Review

Maxime Riché 🔸Feb 27, 2025, 11:44 AM
13 points
0 comments12 min readEA link

For­mal­ize the Hash­iness Model of AGI Un­con­tain­abil­ity

RemmeltNov 9, 2024, 4:10 PM
2 points
0 comments5 min readEA link
(docs.google.com)

Quan­tum Im­mor­tal­ity: A Per­spec­tive if AI Doomers are Prob­a­bly Right

turchinNov 7, 2024, 4:06 PM
7 points
0 comments1 min readEA link

An­thropic teams up with Palan­tir and AWS to sell AI to defense customers

Matrice JacobineNov 9, 2024, 11:47 AM
28 points
1 comment2 min readEA link
(techcrunch.com)

In­finite Re­wards, Finite Safety: New Models for AI Mo­ti­va­tion Without In­finite Goals

Whylome TeamNov 12, 2024, 7:21 AM
−5 points
1 comment2 min readEA link

[Question] What is the best way to ex­plain that s-risks are im­por­tant—ba­si­cally, why ex­is­tence is not in­her­ently bet­ter than non ex­is­tence? In­tend­ing this for some­one mostly un­fa­mil­iar with EA, like some­one in an in­tro program

shepardrileyNov 8, 2024, 6:12 PM
2 points
0 comments1 min readEA link

Ex­plor­ing AI Safety through “Es­cape Ex­per­i­ment”: A Short Film on Su­per­in­tel­li­gence Risks

Gaetan_Selle 🔷Nov 10, 2024, 4:42 AM
4 points
0 comments2 min readEA link

AGI Can­not Be Pre­dicted From Real In­ter­est Rates

Nicholas DeckerJan 28, 2025, 5:45 PM
24 points
3 comments1 min readEA link
(nicholasdecker.substack.com)

ALLFED needs your sup­port for global catas­tro­phe preparedness

JuanGarciaNov 11, 2024, 10:50 PM
31 points
5 comments4 min readEA link

An­nounc­ing In­dexes: Big Ques­tions, Quantified

Molly HickmanJan 27, 2025, 5:42 PM
44 points
1 comment3 min readEA link

The Game Board has been Flipped: Now is a good time to re­think what you’re doing

LintzAJan 28, 2025, 9:20 PM
379 points
69 comments13 min readEA link

[Question] Whose track record of AI pre­dic­tions would you like to see eval­u­ated?

Jonny Spicer 🔸Jan 29, 2025, 11:57 AM
10 points
13 comments1 min readEA link

[Question] Is AI x-risk be­com­ing a dis­trac­tion?

Non-zero-sum JamesFeb 27, 2025, 8:33 PM
2 points
0 comments1 min readEA link

Fake think­ing and real thinking

Joe_CarlsmithJan 28, 2025, 8:05 PM
75 points
3 comments1 min readEA link
(joecarlsmith.substack.com)

The Light­cone solu­tion to the trans­mit­ter room problem

OGTutzauer🔸Jan 29, 2025, 10:03 AM
10 points
6 comments3 min readEA link

Effec­tive AI Outreach | A Data Driven Approach

NoahCWilson🔸Feb 28, 2025, 12:44 AM
13 points
2 comments15 min readEA link

Trump-Ze­len­sky press con­fer­ence just now

Profile2024Feb 28, 2025, 6:52 PM
−14 points
0 comments1 min readEA link

An Open Let­ter To EA and AI Safety On De­cel­er­at­ing AI Development

Kenneth_DiaoFeb 28, 2025, 5:15 PM
21 points
0 comments14 min readEA link
(graspingatwaves.substack.com)

The Miss­ing Piece: Why We Need a Grand Strat­egy for AI

ColemanFeb 28, 2025, 11:49 PM
4 points
0 comments9 min readEA link

Iden­ti­fy­ing Geo­graphic Hotspots for Post-Catas­tro­phe Recovery

Liam 🔸Mar 1, 2025, 7:42 PM
4 points
0 comments33 min readEA link

Tether­ware #2: What ev­ery hu­man should know about our most likely AI future

Jáchym FibírFeb 28, 2025, 11:25 AM
3 points
0 comments11 min readEA link
(tetherware.substack.com)

Con­sider fund­ing the Nu­cleic Acid Ob­ser­va­tory to De­tect Stealth Pandemics

Jeff Kaufman 🔸Nov 11, 2024, 10:22 PM
46 points
0 comments8 min readEA link

The Case for Quan­tum Technologies

Elias X. HuberNov 14, 2024, 1:35 PM
13 points
4 comments6 min readEA link

The Struc­tural Trans­for­ma­tion Case For Peacekeeping

Lauren GilbertNov 12, 2024, 8:30 PM
32 points
9 comments1 min readEA link
(laurenpolicy.substack.com)

Hu­man ex­tinc­tion’s im­pact on non-hu­man an­i­mals re­mains largely underexplored

JoA🔸Mar 1, 2025, 9:31 PM
31 points
1 comment12 min readEA link

How much money should we be sav­ing for re­tire­ment?

Denkenberger🔸Mar 2, 2025, 6:21 AM
21 points
5 comments2 min readEA link

Opinionated take on EA and AI Safety

sammyboizMar 2, 2025, 9:37 AM
70 points
18 comments1 min readEA link

[Question] Where are all the deep­fakes?

SpiarrowMar 3, 2025, 11:46 AM
48 points
7 comments1 min readEA link

An Evolu­tion­ary Ar­gu­ment un­der­min­ing Longter­mist think­ing?

Jim BuhlerMar 3, 2025, 2:47 PM
22 points
10 comments8 min readEA link

From Com­fort Zone to Fron­tiers of Im­pact: Pur­su­ing A Late-Ca­reer Shift to Ex­is­ten­tial Risk Reduction

Jim ChapmanMar 4, 2025, 9:28 PM
214 points
6 comments10 min readEA link

Frac­tal Gover­nance: A Tractable, Ne­glected Ap­proach to Ex­is­ten­tial Risk Reduction

WillPearsonMar 5, 2025, 7:57 PM
3 points
1 comment3 min readEA link

From Con­flict to Coex­is­tence: Rewrit­ing the Game Between Hu­mans and AGI

Michael BatellMar 4, 2025, 2:10 PM
12 points
2 comments19 min readEA link

Give Neo a Chance

ankMar 6, 2025, 2:35 PM
1 point
3 comments7 min readEA link

AISN #49: Su­per­in­tel­li­gence Strategy

Center for AI SafetyMar 6, 2025, 5:43 PM
8 points
0 comments5 min readEA link
(newsletter.safe.ai)

PSA: Say­ing “1 in 5” Is Bet­ter Than “20%” When In­form­ing about risks publicly

BlankaJan 30, 2025, 7:03 PM
17 points
1 comment1 min readEA link

Is that DNA Danger­ous?

MslkmpJan 30, 2025, 7:27 PM
14 points
0 comments1 min readEA link
(press.asimov.com)

Grad­ual Disem­pow­er­ment: Sys­temic Ex­is­ten­tial Risks from In­cre­men­tal AI Development

Jan_KulveitJan 30, 2025, 5:07 PM
38 points
4 comments1 min readEA link
(gradual-disempowerment.ai)

De­ci­sion-Rele­vance of wor­lds and ADT im­ple­men­ta­tions

Maxime Riché 🔸Mar 6, 2025, 4:57 PM
9 points
1 comment15 min readEA link

How Can Aver­age Peo­ple Con­tribute to AI Safety?

Stephen McAleeseMar 6, 2025, 10:50 PM
15 points
4 comments1 min readEA link

Re­in­force­ment Learn­ing: A Non-Tech­ni­cal Primer on o1 and Deep­Seek-R1

AlexChalkFeb 9, 2025, 11:58 PM
4 points
0 comments9 min readEA link
(alexchalk.net)

An­thropic’s sub­mis­sion to the White House’s RFI on AI policy

Agustín Covarrubias 🔸Mar 6, 2025, 10:47 PM
47 points
7 comments1 min readEA link
(www.anthropic.com)

From Cri­sis to Con­trol: Estab­lish­ing a Re­silient In­ci­dent Re­sponse Frame­work for De­ployed AI Models

KevinNJan 31, 2025, 1:06 PM
10 points
1 comment6 min readEA link
(www.techpolicy.press)

AI and Non-Existence

Blue11Jan 31, 2025, 1:19 PM
4 points
0 comments2 min readEA link

Oxford Biose­cu­rity Group 2024 Im­pact Eval­u­a­tion: Ca­pac­ity Build­ing (Sum­mary/​Linkpost)

Lin BLFeb 3, 2025, 7:31 AM
20 points
0 comments1 min readEA link
(www.oxfordbiosecuritygroup.com)

Lead­er­ship change at the Cen­ter on Long-Term Risk

JesseCliftonJan 31, 2025, 9:08 PM
161 points
7 comments3 min readEA link

Con­sider keep­ing your threat mod­els pri­vate.

Miles KodamaFeb 1, 2025, 12:29 AM
18 points
2 comments4 min readEA link

Biose­cu­rity Re­sources I Often Recommend

Lin BLJan 31, 2025, 6:28 PM
22 points
0 comments1 min readEA link
(docs.google.com)

Repli­cat­ing AI Debate

Anthony FlemingFeb 1, 2025, 11:19 PM
9 points
0 comments5 min readEA link

Apoli­ti­cal global health pro­tec­tion and mak­ing an im­pact

SofiiaFFeb 1, 2025, 9:34 AM
4 points
0 comments2 min readEA link

Na­tional Se­cu­rity Is Not In­ter­na­tional Se­cu­rity: A Cri­tique of AGI Realism

C.K.Feb 2, 2025, 5:04 PM
44 points
2 comments36 min readEA link
(conradkunadu.substack.com)

Tether­ware #1: The case for hu­man­like AI with free will

Jáchym FibírJan 30, 2025, 11:57 AM
−1 points
2 comments10 min readEA link
(tetherware.substack.com)

Past foram­iniferal ac­clima­ti­za­tion ca­pac­ity is limited dur­ing fu­ture warming

Matrice JacobineNov 15, 2024, 8:38 PM
8 points
1 comment1 min readEA link
(www.nature.com)

The Tyranny of Ex­is­ten­tial Risk

Karl FaulksNov 18, 2024, 4:41 PM
4 points
1 comment5 min readEA link

US gov­ern­ment com­mis­sion pushes Man­hat­tan Pro­ject-style AI initiative

LarksNov 19, 2024, 4:22 PM
83 points
15 comments1 min readEA link
(www.reuters.com)

Align­ing AI Safety Pro­jects with a Repub­li­can Administration

Deric ChengNov 21, 2024, 10:13 PM
13 points
1 comment8 min readEA link

[Question] Seek­ing Tan­gible Ex­am­ples of AI Catastrophes

clifford.banesNov 25, 2024, 7:55 AM
9 points
2 comments1 min readEA link

Shar­ing in­sights from my mas­ter’s work on the Global Health Se­cu­rity In­dex: seek­ing feed­back and re­search directions

Vincent Niger🔸Nov 25, 2024, 12:06 PM
47 points
3 comments3 min readEA link

A Third World War?: Let’s help those who are hold­ing it back (Tim Sny­der)

Aaron GoldzimerNov 26, 2024, 3:07 AM
1 point
1 comment1 min readEA link
(snyder.substack.com)

One, per­haps un­der­rated, AI risk.

Alex (Αλέξανδρος)Nov 28, 2024, 10:34 AM
7 points
1 comment3 min readEA link

CAIDP State­ment on Lethal Au­tonomous Weapons Sys­tems

Heramb PodarNov 30, 2024, 6:00 PM
7 points
0 comments1 min readEA link
(www.linkedin.com)

MIRI’s 2024 End-of-Year Update

RobBensingerDec 3, 2024, 4:33 AM
32 points
7 comments1 min readEA link

GWWC’s 2025 Char­ity Recom­men­da­tions

Giving What We CanDec 2, 2024, 10:24 PM
40 points
0 comments2 min readEA link
(www.givingwhatwecan.org)

Should there be just one west­ern AGI pro­ject?

rosehadsharDec 4, 2024, 2:41 PM
49 points
3 comments1 min readEA link
(www.forethought.org)

De­tec­tion of Asymp­tomat­i­cally Spread­ing Pathogens

Jeff Kaufman 🔸Dec 5, 2024, 7:17 PM
52 points
6 comments1 min readEA link

AIxBio Newslet­ter #3 - At the Nexus

Andy Morgan 🔸Dec 7, 2024, 9:00 PM
7 points
0 comments2 min readEA link
(atthenexus.substack.com)

166 States Vote to Adopt Lethal Au­tonomous Weapons Re­s­olu­tion at the UNGA

Heramb PodarDec 8, 2024, 9:23 PM
14 points
0 comments1 min readEA link

2018 AI Align­ment Liter­a­ture Re­view and Char­ity Comparison

LarksDec 18, 2018, 4:48 AM
118 points
28 comments63 min readEA link

Ex­is­ten­tial risk miti­ga­tion: What I worry about when there are only bad options

MMMaasDec 19, 2022, 3:30 PM
62 points
3 comments9 min readEA link

Lec­ture Videos from Cam­bridge Con­fer­ence on Catas­trophic Risk

HaydnBelfieldApr 23, 2019, 4:03 PM
15 points
3 comments1 min readEA link

Which Post Idea Is Most Effec­tive?

Jordan ArelApr 25, 2022, 4:47 AM
26 points
6 comments2 min readEA link

Space gov­er­nance is im­por­tant, tractable and neglected

Tobias_BaumannJan 7, 2020, 11:24 AM
108 points
18 comments7 min readEA link

Do not go gen­tle: why the Asym­me­try does not sup­port anti-natalism

Global Priorities InstituteApr 30, 2021, 1:26 PM
4 points
0 comments2 min readEA link

Bioweapons shelter pro­ject launch

Benevolent_RainJun 14, 2022, 3:44 AM
75 points
19 comments8 min readEA link

Don’t Be Com­forted by Failed Apocalypses

ColdButtonIssuesMay 17, 2022, 11:20 AM
20 points
13 comments1 min readEA link

Sum­mary of Deep Time Reck­on­ing by Vin­cent Ialenti

vinegar10@gmail.comOct 31, 2022, 8:00 PM
10 points
1 comment10 min readEA link

[Link post] Will we see fast AI Take­off?

SammyDMartinSep 30, 2021, 2:03 PM
18 points
0 comments1 min readEA link

Risk fac­tors for s-risks

Tobias_BaumannFeb 13, 2019, 5:51 PM
40 points
3 comments1 min readEA link
(s-risks.org)

[Question] Is AI safety still ne­glected?

CoafosMar 30, 2022, 9:09 AM
13 points
13 comments1 min readEA link

Public Opinion about Ex­is­ten­tial Risk

cscanlonAug 25, 2018, 12:34 PM
13 points
9 comments8 min readEA link

Long-Term Fu­ture Fund: Ask Us Any­thing!

AdamGleaveDec 3, 2020, 1:44 PM
89 points
153 comments1 min readEA link

Cur­rent Es­ti­mates for Like­li­hood of X-Risk?

rhys_lindmarkAug 6, 2018, 6:05 PM
24 points
23 comments1 min readEA link

The Map of Shelters and Re­fuges from Global Risks (Plan B of X-risks Preven­tion)

turchinOct 22, 2016, 10:22 AM
16 points
9 comments7 min readEA link

[Question] Slow­ing down AI progress?

Eleni_AJul 26, 2022, 8:46 AM
16 points
9 comments1 min readEA link

Best Coun­tries dur­ing Nu­clear War

AndreFerrettiMar 4, 2022, 11:19 AM
7 points
15 comments1 min readEA link

Could a ‘per­ma­nent global to­tal­i­tar­ian state’ ever be per­ma­nent?

Geoffrey MillerAug 23, 2022, 5:15 PM
39 points
17 comments1 min readEA link

De­lay, De­tect, Defend: Prepar­ing for a Fu­ture in which Thou­sands Can Re­lease New Pan­demics by Kevin Esvelt

JeremyNov 15, 2022, 4:23 PM
177 points
7 comments1 min readEA link
(dam.gcsp.ch)

The In­ter­gov­ern­men­tal Panel On Global Catas­trophic Risks (IPGCR)

DannyBresslerFeb 1, 2024, 5:36 PM
46 points
9 comments19 min readEA link

How to Sur­vive the End of the Universe

avturchinNov 28, 2019, 12:40 PM
54 points
11 comments33 min readEA link

[Paper] Sur­viv­ing global risks through the preser­va­tion of hu­man­ity’s data on the Moon

turchinMar 3, 2018, 6:39 PM
11 points
6 comments1 min readEA link

The Psy­cholog­i­cal Bar­rier to Ac­cept­ing AGI-In­duced Hu­man Ex­tinc­tion, and Why I Don’t Have It

funnyfrancoMar 11, 2025, 4:13 AM
0 points
0 comments17 min readEA link

[Question] Can we con­vince peo­ple to work on AI safety with­out con­vinc­ing them about AGI hap­pen­ing this cen­tury?

BrianTanNov 26, 2020, 2:46 PM
8 points
3 comments2 min readEA link

Good Fu­tures Ini­ti­a­tive: Win­ter Pro­ject In­tern­ship

a_e_rNov 27, 2022, 11:27 PM
67 points
7 comments3 min readEA link

[Creative writ­ing con­test] The sor­cerer in chains

SwimmerOct 30, 2021, 1:23 AM
17 points
0 comments31 min readEA link

The Next Pan­demic Could Be Worse, What Can We Do? (A Hap­pier World video)

Jeroen Willems🔸Dec 21, 2020, 9:07 PM
37 points
6 comments1 min readEA link

How Rood­man’s GWP model trans­lates to TAI timelines

kokotajlodNov 16, 2020, 2:11 PM
22 points
0 comments2 min readEA link

OpenAI board re­ceived let­ter warn­ing of pow­er­ful AI

JordanStoneNov 23, 2023, 12:16 AM
26 points
2 comments1 min readEA link
(www.reuters.com)

The NPT: Learn­ing from a Longter­mist Suc­cess [Links!]

DannyBresslerMay 20, 2021, 12:39 AM
66 points
6 comments2 min readEA link

AGI as a Black Swan Event

Stephen McAleeseDec 4, 2022, 11:35 PM
5 points
2 comments7 min readEA link
(www.lesswrong.com)

In­tro­duc­tory video on safe­guard­ing the long-term future

JulianHazellMar 7, 2022, 12:52 PM
23 points
3 comments1 min readEA link

How can economists best con­tribute to pan­demic pre­ven­tion and pre­pared­ness?

Rémi TAug 22, 2021, 8:49 PM
56 points
3 comments23 min readEA link

De­com­pos­ing Biolog­i­cal Risks: Harm, Po­ten­tial, and Strategies

simeon_cOct 14, 2021, 7:09 AM
26 points
3 comments9 min readEA link

On Pos­i­tivity given X-risks

YusefMosiahNathansonApr 28, 2022, 9:02 AM
1 point
0 comments4 min readEA link

A Case Against Strong Longtermism

A. WolffSep 2, 2022, 4:40 PM
10 points
4 comments39 min readEA link

Five GCR grants from the Global Challenges Foundation

Aaron Gertler 🔸Jan 16, 2020, 12:46 AM
34 points
1 comment5 min readEA link

X-risks of SETI and METI?

Geoffrey MillerJul 2, 2019, 10:41 PM
18 points
11 comments1 min readEA link

Test Your Knowl­edge of the World’s Biggest Problems

AndreFerrettiNov 9, 2022, 4:04 PM
30 points
3 comments1 min readEA link

It’s (not) how you use it

Eleni_ASep 7, 2022, 1:28 PM
6 points
3 comments2 min readEA link

Im­prov­ing In­sti­tu­tional De­ci­sion-Mak­ing: Which In­sti­tu­tions? (A Frame­work)

IanDavidMossAug 23, 2021, 2:26 AM
86 points
7 comments34 min readEA link

Im­pact Op­por­tu­nity: In­fluence UK Biolog­i­cal Se­cu­rity Strategy

Jonathan NankivellFeb 17, 2022, 8:36 PM
49 points
0 comments3 min readEA link

Pangea: The Worst of Times

John G. HalsteadApr 5, 2020, 3:13 PM
88 points
7 comments8 min readEA link

Weekly EA Global Com­mu­nity Meet and Greet.

BrainyJun 10, 2022, 11:10 AM
1 point
0 comments1 min readEA link

Musk says de­stroy­ing Twit­ter was nec­es­sary to pre­serve hu­man­ity’s fu­ture in the cosmos

Max UtilityDec 14, 2022, 6:35 PM
−26 points
2 comments1 min readEA link
(twitter.com)

[Question] Does China have AI al­ign­ment re­sources/​in­sti­tu­tions? How can we pri­ori­tize cre­at­ing more?

JakubKAug 4, 2022, 7:23 PM
18 points
9 comments1 min readEA link

[Link] New Founders Pledge re­port on ex­is­ten­tial risk

John G. HalsteadMar 28, 2019, 11:46 AM
40 points
1 comment1 min readEA link

Com­bi­na­tion Ex­is­ten­tial Risks

ozymandiasJan 14, 2019, 7:29 PM
27 points
5 comments2 min readEA link
(thingofthings.wordpress.com)

What if states don’t listen? A fun­da­men­tal gap in x-risk re­duc­tion strate­gies

HTCAug 30, 2022, 4:27 AM
30 points
1 comment18 min readEA link

How tractable is chang­ing the course of his­tory?

Jamie_HarrisMay 22, 2019, 3:29 PM
41 points
2 comments7 min readEA link
(www.sentienceinstitute.org)

Tay­lor Swift’s “long story short” Is Ac­tu­ally About Effec­tive Altru­ism and Longter­mism (PARODY)

shepardspieJul 23, 2021, 1:25 PM
34 points
12 comments7 min readEA link

Jaime Yas­sif: Re­duc­ing global catas­trophic biolog­i­cal risks

EA GlobalOct 25, 2020, 5:48 AM
8 points
0 comments1 min readEA link
(www.youtube.com)

Le­gal Pri­ori­ties Re­search: A Re­search Agenda

jonasschuettJan 6, 2021, 9:47 PM
58 points
4 comments1 min readEA link

What Is The Most Effec­tive Way To Look At Ex­is­ten­tial Risk?

Phil TannyAug 26, 2022, 11:21 AM
−2 points
2 comments2 min readEA link

Forethought: A new AI macros­trat­egy group

Amrit Sidhu-Brar 🔸Mar 11, 2025, 3:36 PM
166 points
6 comments3 min readEA link

Assess­ing SERI/​CHERI/​CERI sum­mer pro­gram im­pact by sur­vey­ing fellows

L Rudolf LSep 26, 2022, 3:29 PM
102 points
11 comments15 min readEA link

[Question] AI Eth­i­cal Committee

eaaicommitteeMar 1, 2022, 11:35 PM
8 points
0 comments1 min readEA link

AGI Bat­tle Royale: Why “slow takeover” sce­nar­ios de­volve into a chaotic multi-AGI fight to the death

titotalSep 22, 2022, 3:00 PM
49 points
11 comments15 min readEA link

The Charle­magne Effect: The Longter­mist Case For Neartermism

Reed Shafer-RayJul 25, 2022, 8:12 AM
15 points
7 comments29 min readEA link

31 Mississippi

Sean🔸Dec 8, 2024, 2:29 PM
−2 points
0 comments1 min readEA link

[Creative Writ­ing Con­test] [Fic­tion] The Rea­son Why

b_senOct 30, 2021, 2:37 AM
2 points
0 comments5 min readEA link
(archiveofourown.org)

The limited up­side of interpretability

Peter S. ParkNov 15, 2022, 8:22 PM
23 points
3 comments10 min readEA link

State Space of X-Risk Trajectories

David_KristofferssonFeb 6, 2020, 1:37 PM
24 points
7 comments9 min readEA link

Fo­cus on Civ­i­liza­tional Re­silience over Cause Areas

timfarkasMay 26, 2022, 5:37 PM
16 points
6 comments2 min readEA link

Time/​Ta­lent/​Money Con­trib­u­tors to Ex­is­ten­tial Risk Ventures

RubyTSep 6, 2022, 9:52 AM
2 points
2 comments1 min readEA link

How Many Lives Does X-Risk Work Save From Nonex­is­tence On Aver­age?

Jordan ArelDec 8, 2022, 9:44 PM
34 points
12 comments14 min readEA link

The His­tory, Episte­mol­ogy and Strat­egy of Tech­nolog­i­cal Res­traint, and les­sons for AI (short es­say)

MMMaasAug 10, 2022, 11:00 AM
90 points
6 comments9 min readEA link
(verfassungsblog.de)

Well-stud­ied Ex­is­ten­tial Risks with Pre­dic­tive Indicators

Noah ScalesJul 6, 2022, 10:13 PM
4 points
0 comments3 min readEA link

What are the most promis­ing strate­gies for re­duc­ing the prob­a­bil­ity of nu­clear war?

Sarah WeilerNov 16, 2022, 6:09 AM
36 points
1 comment27 min readEA link

An­nounc­ing the Fu­ture Fund

Nick_BecksteadFeb 28, 2022, 5:26 PM
366 points
185 comments4 min readEA link
(ftxfuturefund.org)

‘EA Ar­chi­tect’: Up­dates on Civ­i­liza­tional Shelters & Ca­reer Options

t46Jun 8, 2022, 1:45 PM
67 points
6 comments7 min readEA link

Ex­plor­ing Ex­is­ten­tial Risk—us­ing Con­nected Papers to find Effec­tive Altru­ism al­igned ar­ti­cles and researchers

Maris SalaJun 23, 2021, 5:03 PM
52 points
5 comments6 min readEA link

[Question] Please Share Your Per­spec­tives on the De­gree of So­cietal Im­pact from Trans­for­ma­tive AI Outcomes

KiliankApr 15, 2022, 1:23 AM
3 points
3 comments1 min readEA link

Surveillance and free ex­pres­sion | Sunyshore

Eevee🔹Feb 23, 2021, 2:14 AM
10 points
0 comments9 min readEA link
(sunyshore.substack.com)

Seek­ing feed­back/​gaug­ing in­ter­est: Crowd­sourc­ing x crowd­fund­ing for ex­is­ten­tial risk ven­tures

RubyTSep 4, 2022, 4:18 PM
4 points
0 comments1 min readEA link

In­tro­duc­ing The Log­i­cal Foun­da­tion, an EA-Aligned Non­profit with a Plan to End Poverty With Guaran­teed Income

Michael SimmNov 18, 2022, 8:13 AM
17 points
3 comments24 min readEA link

On the Risk of an Ac­ci­den­tal or Unau­tho­rized Nu­clear De­to­na­tion (Iklé, Aron­son, Madan­sky, 1958)

nathan98000Aug 4, 2022, 1:19 PM
4 points
0 comments1 min readEA link
(www.rand.org)

Can “sus­tain­abil­ity” help us safe­guard the fu­ture?

simonfriederichNov 24, 2022, 2:02 PM
4 points
1 comment2 min readEA link

13 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (Jan 2021 up­date)

HaydnBelfieldFeb 8, 2021, 12:42 PM
7 points
2 comments10 min readEA link

AGI will ar­rive by the end of this decade ei­ther as a uni­corn or as a black swan

Yuri BarzovOct 21, 2022, 10:50 AM
−4 points
7 comments3 min readEA link

Why EA needs Oper­a­tions Re­search: the sci­ence of de­ci­sion making

wesgJul 21, 2022, 12:47 AM
76 points
22 comments14 min readEA link

Mis­cel­la­neous & Meta X-Risk Overview: CERI Sum­mer Re­search Fellowship

Will AldredMar 30, 2022, 2:45 AM
39 points
0 comments3 min readEA link

The threat of syn­thetic bioter­ror de­mands even fur­ther ac­tion and leadership

dEAsignSep 30, 2022, 8:58 AM
8 points
0 comments2 min readEA link

A case against fo­cus­ing on tail-end nu­clear war risks

Sarah WeilerNov 16, 2022, 6:08 AM
32 points
15 comments10 min readEA link

[Question] Is there a “What We Owe The Fu­ture” fel­low­ship study guide?

Jordan ArelSep 1, 2022, 1:40 AM
8 points
2 comments1 min readEA link

[Cross­post]: Huge vol­canic erup­tions: time to pre­pare (Na­ture)

Mike CassidyAug 19, 2022, 12:02 PM
107 points
1 comment1 min readEA link
(www.nature.com)

The Hap­piness Max­i­mizer: Why EA is an x-risk

Obasi ShawAug 30, 2022, 4:29 AM
8 points
5 comments32 min readEA link

Prob­a­bil­ity of ex­tinc­tion for var­i­ous types of catastrophes

Vasco Grilo🔸Oct 9, 2022, 3:30 PM
16 points
0 comments10 min readEA link

OpenAI’s possible Q* breakthrough and DeepMind’s AlphaGo-type systems plus LLMs

BurnydelicNov 23, 2023, 7:02 AM
13 points
4 comments2 min readEA link

[Question] Is nanotechnology (such as APM) important for EAs to work on?

pixel_brownie_softwareMar 12, 2020, 3:36 PM
6 points
9 comments1 min readEA link

What role should evolu­tion­ary analo­gies play in un­der­stand­ing AI take­off speeds?

ansonDec 11, 2021, 1:16 AM
12 points
0 comments42 min readEA link

[Question] Would cre­at­ing and bury­ing a se­ries of dooms­day chests to re­boot civ­i­liza­tion be a wor­thy use of re­sources?

ewuSep 7, 2022, 2:45 AM
5 points
1 comment1 min readEA link

Cen­tre for the Study of Ex­is­ten­tial Risk Four Month Re­port June—Septem­ber 2020

HaydnBelfieldDec 2, 2020, 6:33 PM
24 points
0 comments17 min readEA link

GCRI Open Call for Ad­visees and Col­lab­o­ra­tors 2022

McKenna_FitzgeraldMay 23, 2022, 9:41 PM
4 points
3 comments1 min readEA link

How to dis­solve moral clue­less­ness about donat­ing mosquito nets

ben.smithJun 8, 2022, 7:12 AM
25 points
8 comments12 min readEA link

AI Safety Overview: CERI Sum­mer Re­search Fellowship

Jamie BMar 24, 2022, 3:12 PM
29 points
0 comments2 min readEA link

Vael Gates: Risks from Ad­vanced AI (June 2022)

Vael GatesJun 14, 2022, 12:49 AM
45 points
5 comments30 min readEA link

[Question] Benefits/​Risks of Scott Aaron­son’s Ortho­dox/​Re­form Fram­ing for AI Alignment

JeremyNov 21, 2022, 5:47 PM
15 points
5 comments1 min readEA link
(scottaaronson.blog)

Differ­en­tial progress /​ in­tel­lec­tual progress /​ tech­nolog­i­cal development

MichaelA🔸Apr 24, 2020, 2:08 PM
47 points
16 comments7 min readEA link

#213 – AI caus­ing a “cen­tury in a decade” — and how we’re com­pletely un­pre­pared (Will MacAskill on The 80,000 Hours Pod­cast)

80000_HoursMar 11, 2025, 5:55 PM
24 points
0 comments22 min readEA link

There are no peo­ple to be effec­tively al­tru­is­tic for on a dead planet: EA fund­ing of pro­jects with­out con­duct­ing En­vi­ron­men­tal Im­pact Assess­ments (EIAs), Health and Safety Assess­ments (HSAs) and Life Cy­cle Assess­ments (LCAs) = catastrophe

Deborah W.A. FoulkesMay 26, 2022, 11:46 PM
12 points
22 comments8 min readEA link

[Pod­cast] Thomas Moynihan on the His­tory of Ex­is­ten­tial Risk

finmMar 22, 2021, 11:07 AM
26 points
2 comments1 min readEA link
(hearthisidea.com)

Over­re­act­ing to cur­rent events can be very costly

Kelsey PiperOct 4, 2022, 9:30 PM
281 points
68 comments4 min readEA link

An­nounc­ing the Le­gal Pri­ori­ties Pro­ject Writ­ing Com­pe­ti­tion: Im­prov­ing Cost-Benefit Anal­y­sis to Ac­count for Ex­is­ten­tial and Catas­trophic Risks

MackenzieJun 7, 2022, 9:37 AM
104 points
8 comments9 min readEA link

AI Safety Endgame Stories

IvanVendrovSep 28, 2022, 5:12 PM
31 points
1 comment1 min readEA link

Deep­Mind’s gen­er­al­ist AI, Gato: A non-tech­ni­cal explainer

frances_lorenzMay 16, 2022, 9:19 PM
128 points
13 comments6 min readEA link

[Pod­cast] Si­mon Beard on Parfit, Cli­mate Change, and Ex­is­ten­tial Risk

finmJan 28, 2021, 7:47 PM
11 points
0 comments1 min readEA link
(hearthisidea.com)

[Notes] Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

BenJan 14, 2020, 10:13 PM
40 points
7 comments14 min readEA link

The Hu­man Con­di­tion: A Cru­cial Com­po­nent of Ex­is­ten­tial Risk Calcu­la­tions

Phil TannyAug 28, 2022, 2:51 PM
−10 points
5 comments1 min readEA link

What is the argument against Thanos-ing all humanity to save the lives of other sentient beings?

somethoughtsMar 7, 2021, 8:02 AM
0 points
11 comments3 min readEA link

Who will be in charge once al­ign­ment is achieved?

trurlDec 16, 2022, 4:53 PM
8 points
2 comments1 min readEA link

[Video] How hav­ing Fast Fourier Trans­forms sooner could have helped with Nu­clear Disar­ma­ment—Veritasium

mako yassNov 3, 2022, 8:52 PM
12 points
1 comment1 min readEA link
(www.youtube.com)

4 Years Later: Pres­i­dent Trump and Global Catas­trophic Risk

HaydnBelfieldOct 25, 2020, 4:28 PM
43 points
10 comments10 min readEA link

Paths and waysta­tions in AI safety

Joe_CarlsmithMar 11, 2025, 6:52 PM
22 points
2 comments1 min readEA link
(joecarlsmith.substack.com)

Fron­tier AI sys­tems have sur­passed the self-repli­cat­ing red line

Greg_Colbourn ⏸️ Dec 10, 2024, 4:33 PM
25 points
14 comments1 min readEA link
(github.com)

Short­en­ing & en­light­en­ing dark ages as a sub-area of catas­trophic risk reduction

JpmosMar 5, 2022, 7:43 AM
27 points
7 comments5 min readEA link

An In­for­mal Re­view of Space Exploration

kbogJan 31, 2020, 1:16 PM
51 points
5 comments35 min readEA link

Crit­i­cism of EA and longtermism

St. IgnorantSep 2, 2022, 7:23 AM
2 points
0 comments14 min readEA link

Zvi on: A Play­book for AI Policy at the Man­hat­tan Institute

PhibAug 4, 2024, 9:34 PM
9 points
1 comment7 min readEA link
(thezvi.substack.com)

Ar­gu­ments for Why Prevent­ing Hu­man Ex­tinc­tion is Wrong

Anthony FlemingMay 21, 2022, 7:17 AM
32 points
48 comments3 min readEA link

In­vite: UnCon­fer­ence, How best for hu­mans to thrive and sur­vive over the long-term

Ben YeohJul 27, 2022, 10:19 PM
10 points
2 comments2 min readEA link

“The Physi­cists”: A play about ex­tinc­tion and the re­spon­si­bil­ity of scientists

Lara_THNov 29, 2022, 4:53 PM
28 points
1 comment8 min readEA link

A The­olo­gian’s Re­sponse to An­thro­pogenic Ex­is­ten­tial Risk

Fr Peter WygNov 3, 2022, 4:37 AM
108 points
17 comments11 min readEA link

BERI, Epoch, and FAR will ex­plain their work & cur­rent job open­ings on­line this Sunday

RockwellAug 19, 2022, 8:34 PM
7 points
0 comments1 min readEA link

Mili­tary Ar­tifi­cial In­tel­li­gence as Con­trib­u­tor to Global Catas­trophic Risk

MMMaasJun 27, 2022, 10:35 AM
42 points
0 comments52 min readEA link

Pre­sent-day good in­ten­tions aren’t suffi­cient to make the longterm fu­ture good in expectation

trurlSep 2, 2022, 3:22 AM
7 points
0 comments14 min readEA link

The US ex­pands re­stric­tions on AI ex­ports to China. What are the x-risk effects?

Stephen ClareOct 14, 2022, 6:17 PM
161 points
20 comments4 min readEA link

AI Safety Needs Great Product Builders

James BradyNov 2, 2022, 11:33 AM
45 points
1 comment6 min readEA link

A model about the effect of to­tal ex­is­ten­tial risk on ca­reer choice

Jonas MossSep 10, 2022, 7:18 AM
12 points
4 comments2 min readEA link

My sum­mary of “Prag­matic AI Safety”

Eleni_ANov 5, 2022, 2:47 PM
14 points
0 comments5 min readEA link

Cri­tique of Su­per­in­tel­li­gence Part 2

James FodorDec 13, 2018, 5:12 AM
10 points
12 comments7 min readEA link

An­thro­pocen­tric Altru­ism is Ineffec­tive—The EA Move­ment Must Em­brace En­vi­ron­men­tal­ism and Be­come Ecocentric

Deborah W.A. FoulkesAug 5, 2024, 3:51 AM
−14 points
8 comments6 min readEA link

Air Safety to Com­bat Global Catas­trophic Biorisks [OLD VERSION]

Jam KraprayoonDec 26, 2022, 4:58 PM
78 points
0 comments36 min readEA link
(docs.google.com)

Nu­clear Risk Overview: CERI Sum­mer Re­search Fellowship

Will AldredMar 27, 2022, 3:51 PM
57 points
2 comments13 min readEA link

Align­ment is hard. Com­mu­ni­cat­ing that, might be harder

Eleni_ASep 1, 2022, 11:45 AM
17 points
1 comment3 min readEA link

Is­lands, nu­clear win­ter, and trade dis­rup­tion as a hu­man ex­is­ten­tial risk factor

Matt BoydAug 7, 2022, 2:18 AM
36 points
6 comments19 min readEA link

Rab­bits, robots and resurrection

Patrick WilsonMay 10, 2022, 3:00 PM
9 points
0 comments15 min readEA link

2023 Fu­ture Perfect 50

Toby Tremlett🔹Nov 29, 2023, 3:12 PM
10 points
1 comment1 min readEA link
(www.vox.com)

BERI is hiring a Deputy Director

sawyer🔸Jul 18, 2022, 10:12 PM
6 points
0 comments1 min readEA link

Fa­nat­i­cism in AI: SERI Project

Jake Arft-GuatelliSep 24, 2021, 4:39 AM
7 points
2 comments5 min readEA link

BERI is seek­ing new col­lab­o­ra­tors (2022)

sawyer🔸May 17, 2022, 5:31 PM
21 points
0 comments1 min readEA link

[Question] What is EA opinion on The Bul­letin of the Atomic Scien­tists?

VPetukhovDec 2, 2019, 5:45 AM
36 points
9 comments1 min readEA link

Toby Ord: Fireside chat (2018)

EA GlobalMar 1, 2019, 3:48 PM
20 points
0 comments28 min readEA link
(www.youtube.com)

[Question] EA’s Achieve­ments in 2022

ElliotJDaviesDec 14, 2022, 2:33 PM
98 points
11 comments1 min readEA link

Cli­mate-con­tin­gent Fi­nance, and A Gen­er­al­ized Mechanism for X-Risk Re­duc­tion Financing

johnjnaySep 26, 2022, 1:23 PM
6 points
1 comment25 min readEA link

Plan of Ac­tion to Prevent Hu­man Ex­tinc­tion Risks

turchinMar 14, 2016, 2:51 PM
11 points
3 comments7 min readEA link

Cli­mate change is Now Self-amplifying

Noah ScalesJul 11, 2022, 10:48 AM
−3 points
2 comments3 min readEA link

Anti-squat­ted AI x-risk do­mains index

plexAug 12, 2022, 12:00 PM
56 points
9 comments1 min readEA link

Model­ling civil­i­sa­tion be­yond a catastrophe

ArepoOct 30, 2022, 4:26 PM
58 points
5 comments13 min readEA link

A new database of nan­otech­nol­ogy strat­egy re­sources

Ben SnodinNov 5, 2022, 5:20 AM
39 points
0 comments1 min readEA link

My cur­rent thoughts on the risks from SETI

Matthew_BarnettMar 15, 2022, 5:17 PM
47 points
9 comments10 min readEA link

The case to abol­ish the biol­ogy of suffer­ing as a longter­mist action

Gaetan_Selle 🔷May 21, 2022, 8:51 AM
38 points
8 comments4 min readEA link

Emily Grundy: Aus­trali­ans’ per­cep­tions of global catas­trophic risks

EA GlobalNov 21, 2020, 8:12 AM
9 points
0 comments1 min readEA link
(www.youtube.com)

Pod­cast: Bryan Ca­plan on open bor­ders, UBI, to­tal­i­tar­i­anism, AI, pan­demics, util­i­tar­i­anism and la­bor economics

Gus DockerFeb 22, 2022, 3:04 PM
22 points
0 comments45 min readEA link
(www.utilitarianpodcast.com)

[Question] Re­quest for As­sis­tance—Re­search on Sce­nario Devel­op­ment for Ad­vanced AI Risk

KiliankMar 30, 2022, 3:01 AM
2 points
1 comment1 min readEA link

Sys­temic Cas­cad­ing Risks: Rele­vance in Longter­mism & Value Lock-In

Richard RSep 2, 2022, 7:53 AM
59 points
10 comments16 min readEA link

Carnegie Coun­cil MisUn­der­stands Longtermism

Jeff ASep 30, 2022, 2:57 AM
6 points
8 comments1 min readEA link
(www.carnegiecouncil.org)

I want Fu­ture Perfect, but for sci­ence publications

James LinMar 8, 2022, 5:09 PM
67 points
8 comments5 min readEA link

New Work­ing Paper Series of the Le­gal Pri­ori­ties Project

Legal Priorities ProjectOct 18, 2021, 10:30 AM
60 points
0 comments9 min readEA link

My first effec­tive al­tru­ism con­fer­ence: 10 learn­ings, my 121s and next steps

Milan.PatelMay 21, 2022, 8:51 AM
10 points
3 comments4 min readEA link

On­line Con­fer­ence Op­por­tu­nity for EA Grad Students

jonathancourtneyAug 21, 2020, 5:31 PM
8 points
1 comment1 min readEA link

Global Challenges Pro­ject—Ex­is­ten­tial Risk Workshop

Emma AbeleSep 23, 2022, 10:13 PM
3 points
0 comments1 min readEA link

[Question] How to dis­close a new x-risk?

harsimonyAug 24, 2022, 1:35 AM
20 points
9 comments1 min readEA link

[Cross­post] Rel­a­tivis­tic Colonization

itaibnDec 31, 2020, 2:30 AM
8 points
7 comments4 min readEA link

An­nounc­ing AI Safety Support

Linda LinseforsNov 19, 2020, 8:19 PM
55 points
0 comments4 min readEA link

Quotes about the long reflection

MichaelA🔸Mar 5, 2020, 7:48 AM
55 points
14 comments13 min readEA link

Samotsvety Nu­clear Risk up­date Oc­to­ber 2022

NunoSempereOct 3, 2022, 6:10 PM
262 points
52 comments16 min readEA link

[Question] How have nu­clear win­ter mod­els evolved?

Jordan ArelSep 11, 2022, 10:40 PM
14 points
3 comments1 min readEA link

[Question] Trac­tors that need to be con­nected to func­tion?

Miquel Banchs-Piqué (prev. mikbp)Oct 31, 2022, 8:42 PM
4 points
2 comments1 min readEA link

In­tel­li­gence failures and a the­ory of change for fore­cast­ing

Nathan_BarnardAug 31, 2022, 2:05 AM
12 points
1 comment10 min readEA link

War in Taiwan and AI Timelines

Jordan_SchneiderAug 24, 2022, 2:24 AM
19 points
3 comments8 min readEA link
(www.chinatalk.media)

In­tro­duc­ing the Fund for Align­ment Re­search (We’re Hiring!)

AdamGleaveJul 6, 2022, 2:00 AM
74 points
3 comments4 min readEA link

Re­sponse to Tor­res’ ‘The Case Against Longter­mism’

HaydnBelfieldMar 8, 2021, 6:09 PM
138 points
73 comments5 min readEA link

Robert Wright on us­ing cog­ni­tive em­pa­thy to save the world

80000_HoursMay 27, 2021, 3:38 PM
7 points
0 comments69 min readEA link

Build­ing a Bet­ter Dooms­day Clock

christian.rMay 25, 2022, 5:02 PM
25 points
2 comments1 min readEA link
(www.lawfareblog.com)

[Question] Is ex­is­ten­tial risk more press­ing than other ways to im­prove the long-term fu­ture?

Eevee🔹Aug 20, 2020, 3:50 AM
23 points
1 comment1 min readEA link

FYI: I’m work­ing on a book about the threat of AGI/​ASI for a gen­eral au­di­ence. I hope it will be of value to the cause and the community

Darren McKeeJun 17, 2022, 11:52 AM
32 points
1 comment2 min readEA link

[Long ver­sion] Case study: re­duc­ing catas­trophic risk from in­side the US bureaucracy

Tom_GreenJun 27, 2022, 7:20 PM
49 points
0 comments43 min readEA link

Eco­nomic Pie Re­search as a Cause Area

medicheApr 15, 2022, 10:41 AM
4 points
3 comments3 min readEA link

The great en­ergy de­scent—Post 3: What we can do, what we can’t do

CB🔸Aug 31, 2022, 9:51 PM
18 points
3 comments22 min readEA link

Fo­cus of the IPCC Assess­ment Re­ports Has Shifted to Lower Temperatures

FJehnMay 12, 2022, 12:15 PM
10 points
15 comments8 min readEA link

Fermi es­ti­ma­tion of the im­pact you might have work­ing on AI safety

fribMay 13, 2022, 1:30 PM
24 points
13 comments1 min readEA link

[Question] What do we do if AI doesn’t take over the world, but still causes a sig­nifi­cant global prob­lem?

James_BanksAug 2, 2020, 3:35 AM
16 points
5 comments1 min readEA link

My Most Likely Rea­son to Die Young is AI X-Risk

AISafetyIsNotLongtermistJul 4, 2022, 3:34 PM
237 points
62 comments4 min readEA link
(www.lesswrong.com)

Let us know how psy­chol­ogy can help in­crease your impact

IngaOct 21, 2022, 10:32 AM
30 points
0 comments1 min readEA link

Is­lands as re­fuges for sur­viv­ing global catastrophes

turchinSep 13, 2018, 1:33 PM
9 points
0 comments2 min readEA link

A Cri­tique of Longter­mism by Pop­u­lar YouTube Science Chan­nel, Sabine Hossen­felder: “Elon Musk & The Longter­mists: What Is Their Plan?”

Ram AdityaOct 29, 2022, 5:31 PM
61 points
21 comments2 min readEA link

[Book] On Assess­ing the Risk of Nu­clear War

Aryeh EnglanderJul 7, 2022, 9:08 PM
28 points
2 comments8 min readEA link

Clas­sify­ing sources of AI x-risk

Sam ClarkeAug 8, 2022, 6:18 PM
41 points
4 comments3 min readEA link

How I Came To Longter­mism On My Own & An Out­sider Per­spec­tive On EA Longtermism

Jordan ArelAug 7, 2022, 2:42 AM
34 points
2 comments20 min readEA link

Nu­clear Strat­egy in a Semi-Vuln­er­a­ble World

Jackson WagnerJun 28, 2021, 5:35 PM
28 points
0 comments18 min readEA link

AI Risk in Africa

Claude FormanekOct 12, 2021, 2:28 AM
18 points
0 comments10 min readEA link

Time con­sis­tency for the EA com­mu­nity: Pro­jects that bridge the gap be­tween near-term boot­strap­ping and long-term targets

Arturo MaciasNov 12, 2022, 7:44 AM
7 points
0 comments7 min readEA link

An open let­ter to my great grand kids’ great grand kids

LockeAug 10, 2022, 3:07 PM
1 point
0 comments13 min readEA link

How to or­ganise ‘the one per­cent’ to fix cli­mate change

One Percent OrganiserApr 16, 2022, 5:18 PM
2 points
2 comments9 min readEA link

Nines of safety: Ter­ence Tao’s pro­posed unit of mea­sure­ment of risk

ansonDec 12, 2021, 6:01 PM
41 points
17 comments4 min readEA link

[Question] Donat­ing against Short Term AI risks

Jan-WillemNov 16, 2020, 12:23 PM
6 points
10 comments1 min readEA link

How would you es­ti­mate the value of de­lay­ing AGI by 1 day, in marginal dona­tions to GiveWell?

AnonymousTurtleDec 16, 2022, 9:25 AM
30 points
19 comments2 min readEA link

New re­port on how much com­pu­ta­tional power it takes to match the hu­man brain (Open Philan­thropy)

Aaron Gertler 🔸Sep 15, 2020, 1:06 AM
45 points
1 comment18 min readEA link
(www.openphilanthropy.org)

Re­silience Via Frag­mented Power

steve6320Jul 14, 2022, 3:37 PM
2 points
0 comments6 min readEA link

[Question] I’m in­ter­view­ing Bear Brau­moel­ler about ‘Only The Dead: The Per­sis­tence of War in the Modern Age’. What should I ask?

Robert_WiblinAug 19, 2022, 3:18 PM
12 points
2 comments1 min readEA link

New pop­u­lar sci­ence book on x-risks: “End Times”

Hauke HillebrandtOct 1, 2019, 7:18 AM
17 points
2 comments2 min readEA link

Ar­tifi­cial In­tel­li­gence and Nu­clear Com­mand, Con­trol, & Com­mu­ni­ca­tions: The Risks of Integration

Peter RautenbachNov 18, 2022, 1:01 PM
60 points
3 comments50 min readEA link

How to Take Over the Uni­verse (in Three Easy Steps)

WriterOct 18, 2022, 3:04 PM
14 points
2 comments12 min readEA link
(youtu.be)

Causal Net­work Model III: Findings

Alex_BarryNov 22, 2017, 3:43 PM
7 points
3 comments9 min readEA link

‘Force mul­ti­pli­ers’ for EA research

Craig DraytonJun 18, 2022, 1:39 PM
18 points
7 comments4 min readEA link

Crit­i­cism of the main frame­work in AI alignment

Michele CampoloAug 31, 2022, 9:44 PM
42 points
4 comments7 min readEA link

A full syl­labus on longtermism

jtmMar 5, 2021, 10:57 PM
110 points
13 comments8 min readEA link

Les­sons from Run­ning Stan­ford EA and SERI

kuhanjAug 20, 2021, 2:51 PM
267 points
26 comments23 min readEA link

Four rea­sons I find AI safety emo­tion­ally compelling

Kat WoodsJun 28, 2022, 2:01 PM
32 points
5 comments4 min readEA link

An en­tire cat­e­gory of risks is un­der­val­ued by EA [Sum­mary of pre­vi­ous fo­rum post]

Richard RSep 5, 2022, 3:07 PM
79 points
5 comments5 min readEA link

Economist: “What’s the worst that could hap­pen”. A pos­i­tive, sharable but vague ar­ti­cle on Ex­is­ten­tial Risk

Nathan YoungJul 8, 2020, 10:37 AM
12 points
3 comments2 min readEA link

‘Dis­solv­ing’ AI Risk – Pa­ram­e­ter Uncer­tainty in AI Fu­ture Forecasting

FroolowOct 18, 2022, 10:54 PM
111 points
63 comments39 min readEA link

High risk, low re­ward: A challenge to the as­tro­nom­i­cal value of ex­is­ten­tial risk miti­ga­tion (David Thorstad)

Global Priorities InstituteJul 4, 2023, 1:23 PM
32 points
3 comments3 min readEA link
(globalprioritiesinstitute.org)

Ok Doomer! SRM and Catas­trophic Risk Podcast

Gideon FutermanAug 20, 2022, 12:22 PM
10 points
4 comments1 min readEA link
(open.spotify.com)

Pan­demic pre­ven­tion in Ger­man par­ties’ fed­eral elec­tion platforms

tilboySep 19, 2021, 7:40 AM
17 points
2 comments6 min readEA link

EA, Psy­chol­ogy & AI Safety Research

Sam EllisMay 26, 2022, 11:46 PM
28 points
3 comments6 min readEA link

Nu­clear Ex­pert Com­ment on Samotsvety Nu­clear Risk Forecast

JhrosenbergMar 26, 2022, 9:22 AM
129 points
13 comments18 min readEA link

Beyond Astro­nom­i­cal Waste

Wei DaiDec 27, 2018, 9:27 AM
25 points
2 comments1 min readEA link
(www.lesswrong.com)

Cli­mate change, geo­eng­ineer­ing, and ex­is­ten­tial risk

John G. HalsteadMar 20, 2018, 10:48 AM
20 points
8 comments1 min readEA link

Ap­ply to the Stan­ford Ex­is­ten­tial Risks Con­fer­ence! (April 17-18)

kuhanjMar 26, 2021, 6:28 PM
26 points
2 comments1 min readEA link

[Question] How wor­ried should I be about a child­less Dis­ney­land?

Will BradshawOct 28, 2019, 3:32 PM
31 points
8 comments1 min readEA link

My Cause Selec­tion: Dave Denkenberger

Denkenberger🔸Aug 16, 2015, 3:06 PM
13 points
7 comments3 min readEA link

Civ­i­liza­tion Re­cov­ery Kits

Soof GolanSep 21, 2022, 9:26 AM
25 points
9 comments2 min readEA link

[Question] A bill to mas­sively ex­pand NSF to tech do­mains. What’s the rele­vance for x-risk?

EdoAradJul 12, 2020, 3:20 PM
22 points
4 comments1 min readEA link

My notes on: A Very Ra­tional End of the World | Thomas Moynihan

Vasco Grilo🔸Jun 20, 2022, 8:50 AM
13 points
1 comment5 min readEA link

[Question] What’s the like­li­hood of ir­recov­er­able civ­i­liza­tional col­lapse if 90% of the pop­u­la­tion dies?

simeon_cAug 7, 2022, 7:47 PM
21 points
3 comments1 min readEA link

[Question] How long does it take to understand AI X-Risk from scratch so that I have a confident, clear mental model of it from first principles?

Jordan ArelJul 27, 2022, 4:58 PM
29 points
6 comments1 min readEA link

Birth rates and civil­i­sa­tion doom loop

deus777Nov 18, 2022, 10:56 AM
−40 points
1 comment2 min readEA link

Su­perfore­cast­ing Long-Term Risks and Cli­mate Change

LuisEUrtubeyAug 19, 2022, 6:05 PM
48 points
0 comments2 min readEA link

Could re­al­is­tic de­pic­tions of catas­trophic AI risks effec­tively re­duce said risks?

Matthew BarberAug 17, 2022, 8:01 PM
26 points
11 comments2 min readEA link

You won’t solve al­ign­ment with­out agent foundations

MikhailSaminNov 6, 2022, 8:07 AM
14 points
0 comments1 min readEA link

Eco­nomic in­equal­ity and the long-term future

Global Priorities InstituteApr 30, 2021, 1:26 PM
11 points
0 comments4 min readEA link
(globalprioritiesinstitute.org)

Reflect on Your Ca­reer Ap­ti­tudes (Ex­er­cise)

AkashApr 10, 2022, 2:40 AM
16 points
1 comment2 min readEA link

[Question] Are so­cial me­dia al­gorithms an ex­is­ten­tial risk?

Barry GrimesSep 15, 2020, 8:52 AM
24 points
13 comments1 min readEA link

Open Let­ter Against Reck­less Nu­clear Es­ca­la­tion and Use

Vasco Grilo🔸Nov 3, 2022, 3:08 PM
10 points
2 comments1 min readEA link
(futureoflife.org)

[Question] Is there any­thing like “green bonds” for x-risk miti­ga­tion?

RamiroJun 30, 2020, 12:33 AM
21 points
1 comment1 min readEA link

Anat­o­miz­ing Chem­i­cal and Biolog­i­cal Non-State Adversaries

ncmouliosNov 11, 2022, 9:23 PM
2 points
0 comments1 min readEA link

[Question] Ex­is­ten­tial Biorisk vs. GCBR

Will AldredJul 15, 2022, 9:16 PM
37 points
2 comments1 min readEA link

Luisa Ro­driguez: How to do em­piri­cal cause pri­ori­ti­za­tion re­search

EA GlobalNov 21, 2020, 8:12 AM
7 points
0 comments1 min readEA link
(www.youtube.com)

Niel Bow­er­man: Could cli­mate change make Earth un­in­hab­it­able for hu­mans?

EA GlobalJan 17, 2020, 1:07 AM
7 points
2 comments15 min readEA link
(www.youtube.com)

An EA case for in­ter­est in UAPs/​UFOs and an idea as to what they are

TheNotSoGreatFilterDec 30, 2021, 5:13 PM
39 points
14 comments5 min readEA link

Trans­lat­ing The Precipice into Czech: My ex­pe­rience and recommendations

Anna StadlerovaAug 24, 2022, 4:51 AM
96 points
7 comments20 min readEA link

Pro­posal for a Nu­clear Off-Ramp Toolkit

Stan PinsentNov 29, 2022, 4:02 PM
15 points
0 comments3 min readEA link

An­nounc­ing Fu­ture Fo­rum—Ap­ply Now

isaakfreemanJul 6, 2022, 5:35 PM
88 points
11 comments4 min readEA link

[Doc­toral sem­i­nar] Chem­i­cal and biolog­i­cal weapons: In­ter­na­tional in­ves­tiga­tive mechanisms

ncmouliosNov 17, 2022, 12:26 PM
17 points
0 comments1 min readEA link
(www.asser.nl)

[Question] Does Fac­tory Farm­ing Make Nat­u­ral Pan­demics More Likely?

brookOct 31, 2022, 12:50 PM
12 points
2 comments1 min readEA link

Stu­art Rus­sell Hu­man Com­pat­i­ble AI Roundtable with Allan Dafoe, Rob Re­ich, & Ma­ri­etje Schaake

Mahendra PrasadFeb 11, 2021, 7:43 AM
16 points
0 comments1 min readEA link

A Case for Cli­mate Change as a Top Fund­ing Pri­or­ity

Ted ShieldsDec 22, 2022, 11:50 PM
2 points
9 comments4 min readEA link

Against Agents as an Ap­proach to Aligned Trans­for­ma­tive AI

𝕮𝖎𝖓𝖊𝖗𝖆Dec 27, 2022, 12:47 AM
4 points
0 comments1 min readEA link

Kris­tian Rönn: Global challenges

EA GlobalAug 11, 2017, 8:19 AM
8 points
0 comments1 min readEA link
(www.youtube.com)

Les­sons from Three Mile Is­land for AI Warn­ing Shots

NickGabsSep 26, 2022, 2:47 AM
42 points
0 comments15 min readEA link

The dan­ger of nu­clear war is greater than it has ever been. Why donat­ing to and sup­port­ing Back from the Brink is an effec­tive re­sponse to this threat

astuppleAug 2, 2022, 2:31 AM
14 points
8 comments5 min readEA link

Noah Tay­lor: Devel­op­ing a re­search agenda for bridg­ing ex­is­ten­tial risk and peace and con­flict studies

EA GlobalJan 21, 2021, 4:19 PM
21 points
0 comments20 min readEA link
(www.youtube.com)

En­light­ened Con­cerns of Tomorrow

cassidynelsonMar 15, 2018, 5:29 AM
15 points
7 comments4 min readEA link

What could a fel­low­ship scheme aimed at tack­ling the biggest threats to hu­man­ity look like?

james_rSep 1, 2022, 3:29 PM
4 points
0 comments5 min readEA link

The top X-fac­tor EA ne­glects: desta­bi­liza­tion of the United States

Yelnats T.J.Aug 31, 2022, 7:18 PM
33 points
2 comments18 min readEA link

[Question] What ques­tions could COVID-19 provide ev­i­dence on that would help guide fu­ture EA de­ci­sions?

MichaelA🔸Mar 27, 2020, 5:51 AM
7 points
7 comments1 min readEA link

Longter­mism, risk, and extinction

Richard PettigrewAug 4, 2022, 3:25 PM
78 points
12 comments41 min readEA link

Longter­mism Sus­tain­abil­ity Un­con­fer­ence Invite

Ben YeohSep 1, 2022, 12:34 PM
5 points
0 comments2 min readEA link

In­ter­view sub­jects for im­pact liti­ga­tion pro­ject (biose­cu­rity & pan­demic pre­pared­ness)

Legal Priorities ProjectMar 3, 2022, 2:20 PM
20 points
0 comments1 min readEA link

NASA will re-di­rect an as­ter­oid tonight as a test for plane­tary defence (link-post)

Ben StewartSep 26, 2022, 4:58 AM
70 points
14 comments1 min readEA link
(theconversation.com)

[Cause Ex­plo­ra­tion Prizes] NOT Get­ting Ab­solutely Hosed by a So­lar Flare

aurellemAug 26, 2022, 8:23 AM
5 points
1 comment2 min readEA link

5 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (April 2020 up­date)

HaydnBelfieldApr 29, 2020, 9:37 AM
23 points
1 comment4 min readEA link

Cryp­tocur­rency Ex­ploits Show the Im­por­tance of Proac­tive Poli­cies for AI X-Risk

eSpencerSep 16, 2022, 4:44 AM
14 points
1 comment4 min readEA link

In­ter­view with Tom Chivers: “AI is a plau­si­ble ex­is­ten­tial risk, but it feels as if I’m in Pas­cal’s mug­ging”

felix.hFeb 21, 2021, 1:41 PM
16 points
1 comment7 min readEA link

Be a Stoic and build bet­ter democ­ra­cies: an Aussie-as take on x-risks (re­view es­say)

Matt BoydNov 21, 2021, 4:30 AM
32 points
3 comments11 min readEA link

Cos­mic rays could cause ma­jor elec­tronic dis­rup­tion and pose a small ex­is­ten­tial risk

M_AllcockAug 12, 2022, 3:30 AM
12 points
0 comments12 min readEA link

What per­centage of things that could kill us all are “Other” risks?

PCO MooreAug 10, 2022, 9:20 AM
7 points
0 comments4 min readEA link

The im­por­tance of get­ting digi­tal con­scious­ness right

Derek ShillerJun 13, 2022, 10:41 AM
68 points
13 comments8 min readEA link

[Question] What are ex­am­ples where ex­treme risk poli­cies have been suc­cess­fully im­ple­mented?

Joris 🔸May 16, 2022, 3:37 PM
32 points
14 comments2 min readEA link

The great en­ergy de­scent (short ver­sion) - An im­por­tant thing EA might have missed

CB🔸Aug 31, 2022, 9:50 PM
67 points
94 comments10 min readEA link

Why say ‘longter­mism’ and not just ‘ex­tinc­tion risk’?

tcelferactAug 10, 2022, 11:05 PM
5 points
4 comments1 min readEA link

The Map of Im­pact Risks and As­teroid Defense

turchinNov 3, 2016, 3:34 PM
7 points
8 comments4 min readEA link

Ex­is­ten­tial Risks: Hu­man Rights

Chiharu SaruwatariDec 15, 2022, 12:35 PM
4 points
0 comments6 min readEA link

Will longter­mists self-efface

Noah ScalesAug 12, 2022, 2:32 AM
−7 points
23 comments6 min readEA link

Cli­mate change dona­tion recommendations

SanjayJul 16, 2020, 9:17 PM
46 points
7 comments14 min readEA link

Shal­low Re­port on Nu­clear War (Abol­ish­ment)

Joel Tan🔸Oct 18, 2022, 7:36 AM
35 points
14 comments18 min readEA link

Ap­ply for Stan­ford Ex­is­ten­tial Risks Ini­ti­a­tive (SERI) Postdoc

Vael GatesDec 14, 2021, 9:50 PM
28 points
2 comments1 min readEA link

How to PhD

ecaMar 28, 2021, 7:56 PM
118 points
28 comments11 min readEA link

Beg­ging, Plead­ing AI Orgs to Com­ment on NIST AI Risk Man­age­ment Framework

BridgesApr 15, 2022, 7:35 PM
87 points
3 comments2 min readEA link

Why do we post our AI safety plans on the In­ter­net?

Peter S. ParkOct 31, 2022, 4:27 PM
15 points
22 comments11 min readEA link

New 3-hour pod­cast with An­ders Sand­berg about Grand Futures

Gus DockerOct 6, 2020, 10:47 AM
21 points
1 comment1 min readEA link

Ap­ply to be a Stan­ford HAI Ju­nior Fel­low (As­sis­tant Pro­fes­sor- Re­search) by Nov. 15, 2021

Vael GatesOct 31, 2021, 2:21 AM
15 points
0 comments1 min readEA link

Mul­ti­ple high-im­pact PhD stu­dent positions

Denkenberger🔸Nov 19, 2022, 12:02 AM
32 points
0 comments3 min readEA link

‘Ex­is­ten­tial Risk and Growth’ Deep Dive #3 - Ex­ten­sions and Variations

Alex HTDec 20, 2020, 12:39 PM
5 points
0 comments12 min readEA link

A Cri­tique of AI Takeover Scenarios

James FodorAug 31, 2022, 1:49 PM
53 points
4 comments12 min readEA link

(p-)Zom­bie Uni­verse: an­other X-risk

Toby Tremlett🔹Jul 28, 2022, 9:34 PM
21 points
5 comments4 min readEA link

Toby Ord’s new re­port on les­sons from the de­vel­op­ment of the atomic bomb

Ishan MukherjeeNov 22, 2022, 10:37 AM
65 points
3 comments1 min readEA link
(www.governance.ai)

How large a donation is needed to neutralise the annual x-risk footprint of the mean human?

Vasco Grilo🔸Sep 22, 2022, 6:41 AM
8 points
2 comments1 min readEA link

In­ter­na­tional co­op­er­a­tion as a tool to re­duce two ex­is­ten­tial risks.

johl@umich.eduApr 19, 2021, 4:51 PM
28 points
4 comments23 min readEA link

[Question] How much does cli­mate change & the de­cline of liberal democ­racy in­di­rectly in­crease the prob­a­bil­ity of an x-risk?

EarthlingSep 1, 2022, 6:33 PM
7 points
7 comments1 min readEA link

X-Risk, An­throp­ics, & Peter Thiel’s In­vest­ment Thesis

Jackson WagnerOct 26, 2021, 6:38 PM
50 points
1 comment19 min readEA link

Cu­rated con­ver­sa­tions with brilli­ant effec­tive altruists

spencergApr 11, 2022, 3:32 PM
37 points
0 comments22 min readEA link

Ex­is­ten­tial risk from a Thomist Chris­tian perspective

Global Priorities InstituteDec 31, 2020, 2:27 PM
6 points
0 comments4 min readEA link
(globalprioritiesinstitute.org)

[Creative Non­fic­tion] The Toba Su­per­vol­canic Eruption

Jackson WagnerOct 29, 2021, 5:02 PM
55 points
3 comments6 min readEA link

Tough enough? Ro­bust satis­fic­ing as a de­ci­sion norm for long-term policy analysis

Global Priorities InstituteOct 31, 2020, 1:28 PM
5 points
0 comments3 min readEA link
(globalprioritiesinstitute.org)

[Question] Is there an or­ga­ni­za­tion or in­di­vi­d­u­als work­ing on how to boot­strap in­dus­trial civ­i­liza­tion?

steve6320Oct 21, 2022, 3:36 AM
15 points
8 comments1 min readEA link

Toby Ord at EA Global: Reconnect

EA GlobalMar 20, 2021, 7:00 AM
11 points
0 comments1 min readEA link
(www.youtube.com)

Po­ten­tial Risks from Ad­vanced Ar­tifi­cial In­tel­li­gence: The Philan­thropic Opportunity

Holden KarnofskyMay 6, 2016, 12:55 PM
2 points
0 comments23 min readEA link
(www.openphilanthropy.org)

Which of these ar­gu­ments for x-risk do you think we should test?

WimAug 9, 2022, 1:43 PM
3 points
2 comments1 min readEA link

A toy model for tech­nolog­i­cal ex­is­ten­tial risk

RobertHarlingNov 28, 2020, 11:55 AM
10 points
2 comments4 min readEA link

Op­tion Value, an In­tro­duc­tory Guide

CalebMarescaFeb 21, 2020, 2:45 PM
31 points
3 comments6 min readEA link

Pod­cast: Mag­nus Vind­ing on re­duc­ing suffer­ing, why AI progress is likely to be grad­ual and dis­tributed and how to rea­son about poli­tics

Gus DockerNov 21, 2021, 3:29 PM
26 points
0 comments1 min readEA link
(www.utilitarianpodcast.com)

Strate­gic Risks and Un­likely Benefits

Anthony RepettoDec 4, 2021, 6:01 AM
1 point
0 comments4 min readEA link

[Link] Sean Car­roll in­ter­views Aus­tralian poli­ti­cian An­drew Leigh on ex­is­ten­tial risks

Aryeh EnglanderMar 8, 2022, 1:29 AM
15 points
1 comment1 min readEA link

EA has got­ten it very wrong on cli­mate change—a Cana­dian case study

Stephen BeardOct 29, 2022, 7:30 PM
10 points
8 comments14 min readEA link

PIBBSS Fel­low­ship: Bounty for Refer­rals & Dead­line Extension

Anna_GajdovaJan 17, 2022, 4:23 PM
17 points
7 comments1 min readEA link

Tran­scripts of in­ter­views with AI researchers

Vael GatesMay 9, 2022, 6:03 AM
140 points
14 comments2 min readEA link

Cause pro­file: Cog­ni­tive En­hance­ment Re­search

George AltmanMar 27, 2022, 1:43 PM
63 points
6 comments22 min readEA link

What are the “no free lunch” the­o­rems?

Vishakha AgrawalFeb 4, 2025, 2:02 AM
3 points
0 comments1 min readEA link
(aisafety.info)

How might we al­ign trans­for­ma­tive AI if it’s de­vel­oped very soon?

Holden KarnofskyAug 29, 2022, 3:48 PM
163 points
17 comments44 min readEA link

2017 AI Safety Liter­a­ture Re­view and Char­ity Comparison

LarksDec 20, 2017, 9:54 PM
43 points
17 comments23 min readEA link

Cri­tique of Su­per­in­tel­li­gence Part 1

James FodorDec 13, 2018, 5:10 AM
22 points
13 comments8 min readEA link

Low-key Longtermism

Jonathan RystromJul 25, 2022, 1:39 PM
26 points
6 comments8 min readEA link

[Job]: AI Stan­dards Devel­op­ment Re­search Assistant

Tony BarrettOct 14, 2022, 8:18 PM
13 points
0 comments2 min readEA link

Seek­ing so­cial sci­ence stu­dents /​ col­lab­o­ra­tors in­ter­ested in AI ex­is­ten­tial risks

Vael GatesSep 24, 2021, 9:56 PM
58 points
7 comments3 min readEA link

[Feed­back Re­quest] Hyper­text Fic­tion Piece on Ex­is­ten­tial Hope

Miranda_ZhangMay 30, 2021, 3:44 PM
35 points
2 comments1 min readEA link

Ex­pected im­pact of a ca­reer in AI safety un­der differ­ent opinions

Jordan TaylorJun 14, 2022, 2:25 PM
42 points
16 comments11 min readEA link

Policy and re­search ideas to re­duce ex­is­ten­tial risk

80000_HoursApr 27, 2020, 8:46 AM
3 points
0 comments4 min readEA link
(80000hours.org)

Ur­gency vs. Pa­tience—a Toy Model

Alex HTAug 19, 2020, 2:13 PM
39 points
4 comments4 min readEA link

[Question] Peo­ple work­ing on x-risks: what emo­tion­ally mo­ti­vates you?

Vael GatesJul 5, 2021, 3:16 AM
16 points
8 comments1 min readEA link

An­nounc­ing the Founders Pledge Global Catas­trophic Risks Fund

christian.rOct 26, 2022, 1:39 PM
49 points
1 comment3 min readEA link

[Link] Thiel on GCRs

Milan GriffesJul 22, 2019, 8:47 PM
28 points
11 comments1 min readEA link

[Question] Track­ing Com­pute Stocks and Flows: Case Stud­ies?

Cullen 🔸Oct 5, 2022, 5:54 PM
34 points
1 comment1 min readEA link

Com­po­nents of Strate­gic Clar­ity [Strate­gic Per­spec­tives on Long-term AI Gover­nance, #2]

MMMaasJul 2, 2022, 11:22 AM
66 points
0 comments6 min readEA link

Com­piling re­sources com­par­ing AI mi­suse, mis­al­ign­ment, and in­com­pe­tence risk and tractability

Peter4444May 5, 2022, 4:16 PM
3 points
2 comments1 min readEA link

U.S. EAs Should Con­sider Ap­ply­ing to Join U.S. Diplomacy

abiolveraMay 17, 2022, 5:14 PM
115 points
22 comments8 min readEA link

Longter­mists should take cli­mate change very seriously

Nir EyalOct 3, 2022, 6:33 PM
29 points
10 comments8 min readEA link

The Threat of Cli­mate Change Is Exaggerated

Samrin SaleemSep 29, 2023, 6:49 PM
13 points
16 comments14 min readEA link

Mauhn Re­leases AI Safety Documentation

Berg SeverensJul 2, 2021, 12:19 PM
4 points
2 comments1 min readEA link

The fu­ture of humanity

Dem0sthenesSep 1, 2022, 10:34 PM
1 point
0 comments8 min readEA link

Things usu­ally end slowly

OllieBaseJun 7, 2022, 5:00 PM
76 points
14 comments7 min readEA link

Where are the red lines for AI?

Karl von WendtAug 5, 2022, 9:41 AM
13 points
3 comments6 min readEA link

Help with the Fo­rum; wiki edit­ing, giv­ing feed­back, mod­er­a­tion, and more

LizkaApr 20, 2022, 12:58 PM
88 points
6 comments3 min readEA link

[Link post] Promis­ing Paths to Align­ment—Con­nor Leahy | Talk

frances_lorenzMay 14, 2022, 3:58 PM
17 points
0 comments1 min readEA link

Carl Shul­man on the com­mon-sense case for ex­is­ten­tial risk work and its prac­ti­cal implications

80000_HoursOct 8, 2021, 1:43 PM
41 points
2 comments149 min readEA link

Why EAs are skep­ti­cal about AI Safety

Lukas Trötzmüller🔸Jul 18, 2022, 7:01 PM
290 points
31 comments29 min readEA link

Sum­mary: the Global Catas­trophic Risk Man­age­ment Act of 2022

Anthony FlemingSep 23, 2022, 3:19 AM
35 points
8 comments2 min readEA link

[Question] How many EA 2021 $s would you trade off against a 0.01% chance of ex­is­ten­tial catas­tro­phe?

LinchNov 27, 2021, 11:46 PM
55 points
87 comments1 min readEA link

Cruxes for nu­clear risk re­duc­tion efforts—A proposal

Sarah WeilerNov 16, 2022, 6:03 AM
38 points
0 comments24 min readEA link

The fu­ture of nu­clear war

turchinMay 21, 2022, 8:00 AM
37 points
2 comments34 min readEA link

Shel­ter­ing hu­man­ity against x-risk: re­port from the SHELTER weekend

Janne M. KorhonenOct 10, 2022, 3:09 PM
76 points
3 comments5 min readEA link

The EA com­mu­ni­ties that emerged from the Chicx­u­lub crater

Silvia FernándezNov 14, 2022, 7:46 PM
16 points
1 comment8 min readEA link

“Nor­mal ac­ci­dents” and AI sys­tems

Eleni_AAug 8, 2022, 6:43 PM
5 points
1 comment1 min readEA link
(www.achan.ca)

Ques­tion about ter­minol­ogy for lesser X-risks and S-risks

Laura LeightonAug 8, 2022, 4:39 AM
9 points
3 comments1 min readEA link

High Im­pact Ca­reers in For­mal Ver­ifi­ca­tion: Ar­tifi­cial Intelligence

quinnJun 5, 2021, 2:45 PM
28 points
7 comments16 min readEA link

Biose­cu­rity challenges posed by Dual-Use Re­search of Con­cern (DURC)

Byron CohenSep 1, 2022, 7:33 AM
12 points
0 comments7 min readEA link
(raisinghealth.substack.com)

Anal­y­sis of Global AI Gover­nance Strategies

SammyDMartinDec 11, 2024, 11:08 AM
23 points
0 comments1 min readEA link
(www.lesswrong.com)

[Creative Writ­ing Con­test] The Puppy Problem

LouisOct 13, 2021, 2:01 PM
13 points
0 comments7 min readEA link

Sixty years af­ter the Cuban Mis­sile Cri­sis, a new era of global catas­trophic risks

christian.rOct 13, 2022, 11:25 AM
31 points
0 comments1 min readEA link
(thebulletin.org)

ISYP Third Nuclear Age Conference, “New Age, New Thinking: Challenges of a Third Nuclear Age”, 31 October–2 November 2022, Berlin, Germany

Daniel AjudeonuAug 11, 2022, 9:43 AM
4 points
0 comments5 min readEA link

Eli’s re­view of “Is power-seek­ing AI an ex­is­ten­tial risk?”

eliflandSep 30, 2022, 12:21 PM
58 points
3 comments1 min readEA link

Seek­ing Mechanism De­signer for Re­search into In­ter­nal­iz­ing Catas­trophic Externalities

c.troutSep 11, 2024, 3:09 PM
11 points
0 comments1 min readEA link

[Question] What “pivotal” and use­ful re­search … would you like to see as­sessed? (Bounty for sug­ges­tions)

david_reinsteinApr 28, 2022, 3:49 PM
37 points
21 comments7 min readEA link

[Question] How bi­nary is longterm value?

Vasco Grilo🔸Nov 1, 2022, 3:21 PM
13 points
15 comments1 min readEA link

[Question] Put­ting Peo­ple First in a Cul­ture of De­hu­man­iza­tion

jhealyJul 22, 2020, 3:31 AM
16 points
3 comments1 min readEA link

How to en­gage with AI 4 So­cial Jus­tice ac­tors

TomWestgarthApr 26, 2022, 8:39 AM
12 points
5 comments1 min readEA link

[Question] Look­ing for col­lab­o­ra­tors af­ter last 80k pod­cast with Tris­tan Harris

Jan-WillemDec 7, 2020, 10:23 PM
19 points
7 comments2 min readEA link

2016 AI Risk Liter­a­ture Re­view and Char­ity Comparison

LarksDec 13, 2016, 4:36 AM
57 points
12 comments28 min readEA link

Overview of Trans­for­ma­tive AI Mi­suse Risks

SammyDMartinDec 11, 2024, 11:04 AM
12 points
0 comments2 min readEA link
(longtermrisk.org)

Are we already past the precipice?

Dem0sthenesAug 10, 2022, 4:01 AM
1 point
5 comments2 min readEA link

[Question] What would “do­ing enough” to safe­guard the long-term fu­ture look like?

HStencilApr 22, 2020, 9:47 PM
20 points
0 comments1 min readEA link

On The Rel­a­tive Long-Term Fu­ture Im­por­tance of In­vest­ments in Eco­nomic Growth and Global Catas­trophic Risk Reduction

poliboniMar 30, 2020, 8:11 PM
33 points
1 comment1 min readEA link

The great en­ergy de­scent—Part 1: Can re­new­ables re­place fos­sil fuels?

CB🔸Aug 31, 2022, 9:51 PM
46 points
2 comments22 min readEA link

The es­tab­lished nuke risk field de­serves more engagement

IlverinJul 4, 2022, 7:39 PM
17 points
12 comments1 min readEA link

W-Risk and the Tech­nolog­i­cal Wavefront (Nell Wat­son)

Aaron Gertler 🔸Nov 11, 2018, 11:22 PM
9 points
1 comment1 min readEA link

AI Safety in a Vuln­er­a­ble World: Re­quest­ing Feed­back on Pre­limi­nary Thoughts

Jordan ArelDec 6, 2022, 10:36 PM
5 points
4 comments3 min readEA link

Against GDP as a met­ric for timelines and take­off speeds

kokotajlodDec 29, 2020, 5:50 PM
47 points
6 comments14 min readEA link

An­nounc­ing the AIPoli­cyIdeas.com Database

abiolveraJun 23, 2023, 4:09 PM
50 points
3 comments2 min readEA link
(www.aipolicyideas.com)

[Question] Odds of re­cov­er­ing val­ues af­ter col­lapse?

Will AldredJul 24, 2022, 6:20 PM
66 points
13 comments3 min readEA link

[Question] Why al­tru­ism at all?

SingletonJul 12, 2020, 10:04 PM
−2 points
1 comment1 min readEA link

AI Alter­na­tive Fu­tures: Ex­plo­ra­tory Sce­nario Map­ping for Ar­tifi­cial In­tel­li­gence Risk—Re­quest for Par­ti­ci­pa­tion [Linkpost]

KiliankMay 9, 2022, 7:53 PM
17 points
2 comments8 min readEA link

Is Civ­i­liza­tion on the Brink of Col­lapse? - Kurzgesagt

GabeMAug 16, 2022, 8:06 PM
33 points
5 comments1 min readEA link
(www.youtube.com)

Ex­am­ple syl­labus “Ex­is­ten­tial Risks”

simonfriederichJul 3, 2021, 9:23 AM
15 points
2 comments10 min readEA link

Steer­ing AI to care for an­i­mals, and soon

Andrew CritchJun 14, 2022, 1:13 AM
230 points
37 comments1 min readEA link

Good v. Op­ti­mal Futures

RobertHarlingDec 11, 2020, 4:38 PM
38 points
10 comments6 min readEA link

EA is be­com­ing in­creas­ingly in­ac­cessible, at the worst pos­si­ble time

Ann Garth 🔸Jul 22, 2022, 3:40 PM
78 points
13 comments15 min readEA link

Open Cli­mate Data as a pos­si­ble cause area, Open Philanthropy

Ben YeohJul 3, 2022, 12:47 PM
4 points
0 comments12 min readEA link

Is space coloniza­tion de­sir­able? Re­view of Dark Sk­ies: Space Ex­pan­sion­ism, Plane­tary Geopoli­tics, and the Ends of Humanity

sphorOct 7, 2022, 12:26 PM
13 points
3 comments3 min readEA link
(bostonreview.net)

Cen­tre for Ex­plo­ra­tory Altru­ism Re­search (CEARCH)

Joel Tan🔸Oct 18, 2022, 7:23 AM
125 points
15 comments5 min readEA link

Im­prov­ing long-run civil­i­sa­tional robustness

RyanCareyMay 10, 2016, 11:14 AM
9 points
6 comments3 min readEA link

Nu­clear Es­pi­onage and AI Governance

GAAOct 4, 2021, 6:21 PM
32 points
3 comments24 min readEA link

Talk - ‘Car­ing for the Far Fu­ture’

YadavDec 9, 2022, 4:58 PM
13 points
0 comments1 min readEA link
(youtu.be)

How to re­con­sider a prediction

Noah ScalesOct 25, 2022, 9:28 PM
2 points
2 comments4 min readEA link

Neil Sin­hab­abu on metaethics and world gov­ern­ment for re­duc­ing ex­is­ten­tial risk

Gus DockerFeb 2, 2022, 8:23 PM
7 points
0 comments83 min readEA link
(www.utilitarianpodcast.com)

In­ter­re­lat­ed­ness of x-risks and sys­temic fragilities

NaryanSep 4, 2022, 9:36 PM
26 points
7 comments2 min readEA link

A cri­tique of strong longtermism

Pablo RosadoAug 28, 2022, 7:33 PM
15 points
11 comments14 min readEA link

Why those who care about catas­trophic and ex­is­ten­tial risk should care about au­tonomous weapons

aaguirreNov 11, 2020, 5:27 PM
103 points
31 comments15 min readEA link

Ques­tions for Jaan Tal­linn’s fireside chat in EAGxAPAC this weekend

BrianTanNov 17, 2020, 2:12 AM
13 points
8 comments1 min readEA link

Alien colonization of Earth’s impact on the relative importance of reducing different existential risks

EviraSep 5, 2019, 12:27 AM
10 points
10 comments1 min readEA link

AGI and Lock-In

Lukas FinnvedenOct 29, 2022, 1:56 AM
153 points
20 comments10 min readEA link
(docs.google.com)

The Vi­talik Bu­terin Fel­low­ship in AI Ex­is­ten­tial Safety is open for ap­pli­ca­tions!

Cynthia ChenOct 14, 2022, 3:23 AM
38 points
0 comments2 min readEA link

Is Tech­nol­ogy Ac­tu­ally Mak­ing Things Bet­ter? – Pairagraph

Eevee🔹Oct 1, 2020, 4:06 PM
16 points
1 comment1 min readEA link
(www.pairagraph.com)

In­tro­duc­tion to suffer­ing-fo­cused ethics

Center for Reducing SufferingAug 30, 2024, 4:55 PM
56 points
2 comments22 min readEA link

11 Re­cent Publi­ca­tions on Ex­is­ten­tial Risk (June 2020 up­date)

HaydnBelfieldJul 2, 2020, 1:09 PM
14 points
0 comments6 min readEA link
(www.cser.ac.uk)

AI Risk In­tro 1: Ad­vanced AI Might Be Very Bad

L Rudolf LSep 11, 2022, 10:57 AM
22 points
0 comments30 min readEA link

Questions for Nick Beckstead’s fireside chat in EAGxAPAC this weekend

BrianTan · Nov 17, 2020, 3:05 PM
12 points
15 comments · 3 min read · EA link

Vertical farming to lessen our reliance on the Sun

Ty · May 5, 2022, 5:57 AM
12 points
3 comments · 2 min read · EA link

Refuting longtermism with Fermat’s Last Theorem

astupple · Aug 16, 2022, 12:26 PM
3 points
32 comments · 3 min read · EA link

[Question] Is there any research on internalizing x-risks or global catastrophic risks into economies?

Ramiro · Jul 6, 2022, 5:08 PM
19 points
3 comments · 1 min read · EA link

The most good system visual and stabilization steps

brb243 · Mar 14, 2022, 11:54 PM
3 points
0 comments · 1 min read · EA link

The great energy descent—Part 2: Limits to growth and why we probably won’t reach the stars

CB🔸 · Aug 31, 2022, 9:51 PM
19 points
0 comments · 25 min read · EA link

Results of a Spanish-speaking essay contest about Global Catastrophic Risk

Jaime Sevilla · Jul 15, 2022, 4:53 PM
86 points
7 comments · 6 min read · EA link

Leopold Aschenbrenner returns to X-risk and growth

nickwhitaker · Oct 20, 2020, 11:24 PM
25 points
3 comments · 1 min read · EA link

4 Key Assumptions in AI Safety

Prometheus · Nov 7, 2022, 10:50 AM
5 points
0 comments · 1 min read · EA link

Formalizing Space-Faring Civilizations Saturation concepts and metrics

Maxime Riché 🔸 · Mar 13, 2025, 9:44 AM
13 points
0 comments · 8 min read · EA link

What 80000 Hours gets wrong about solar geoengineering

Gideon Futerman · Aug 29, 2022, 1:24 PM
26 points
4 comments · 22 min read · EA link

More to explore on ‘Our Final Century’

EA Handbook · Jul 15, 2022, 11:00 PM
6 points
2 comments · 2 min read · EA link

Summary of and thoughts on “Dark Skies” by Daniel Deudney

Cody_Fenwick · Dec 31, 2022, 8:28 PM
38 points
1 comment · 5 min read · EA link

[Question] How have shorter AI timelines been affecting you, and how have you been responding to them?

Liav.Koren · Jan 3, 2023, 4:20 AM
35 points
15 comments · 1 min read · EA link

[Question] What are the strategic implications if aliens and Earth civilizations produce similar utilities?

Maxime Riché 🔸 · Aug 6, 2024, 9:21 PM
6 points
1 comment · 1 min read · EA link

Distinguishing Between Idealism and Realism in International Relations

Siya Sawhney · Jul 18, 2024, 4:23 PM
5 points
2 comments · 3 min read · EA link

Introduction: Bias in Evaluating AGI X-Risks

Remmelt · Dec 27, 2022, 10:27 AM
4 points
0 comments · 1 min read · EA link

Technological Bottlenecks for PCR, LAMP, and Metagenomics Sequencing

Ziyue Zeng · Jan 9, 2023, 6:05 AM
39 points
0 comments · 17 min read · EA link

Overview of the Pathogen Biosurveillance Landscape

Brianna Gopaul · Jan 9, 2023, 6:05 AM
54 points
4 comments · 20 min read · EA link

[Question] What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?

Jordan Arel · Jan 10, 2023, 8:30 AM
11 points
8 comments · 1 min read · EA link

ea.domains—Domains Free to a Good Home

plex · Jan 12, 2023, 1:32 PM
48 points
8 comments · 4 min read · EA link

McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss

McGill EA x Law · Jan 10, 2023, 11:56 PM
3 points
0 comments · 1 min read · EA link

What is a time series forecasting tool?

Jack Kevin · Jan 12, 2023, 10:48 AM
−5 points
0 comments · 1 min read · EA link

EA-relevant Foresight Institute Workshops in 2023: WBE & AI safety, Cryptography & AI safety, XHope, Space, and Atomically Precise Manufacturing

elteerkers · Jan 16, 2023, 2:02 PM
20 points
1 comment · 3 min read · EA link

Jan Kirchner on AI Alignment

birtes · Jan 17, 2023, 3:11 PM
5 points
0 comments · 1 min read · EA link

The Journal of Dangerous Ideas

rogersbacon1 · Feb 3, 2024, 3:43 PM
−26 points
1 comment · 5 min read · EA link
(www.secretorum.life)

Help me to understand AI alignment!

britomart · Jan 18, 2023, 9:13 AM
3 points
12 comments · 1 min read · EA link

A review of how nucleic acid (or DNA) synthesis is currently regulated across the world, and some ideas about reform (summary of and link to Law dissertation)

Isaac Heron · Feb 5, 2024, 10:37 AM
53 points
4 comments · 16 min read · EA link
(acrobat.adobe.com)

Existential Risk of Misaligned Intelligence Augmentation (Particularly Using High-Bandwidth BCI Implants)

Damian Gorski · Jan 24, 2023, 5:02 PM
1 point
0 comments · 9 min read · EA link

Military support in a global catastrophe

Tom Gardiner · Jan 24, 2023, 4:30 PM
37 points
0 comments · 3 min read · EA link

Call me, maybe? Hotlines and Global Catastrophic Risk [Founders Pledge]

christian.r · Jan 24, 2023, 4:28 PM
83 points
10 comments · 26 min read · EA link
(docs.google.com)

Highest priority threat: infinite torture

KArax · Jan 26, 2023, 8:51 AM
−39 points
1 comment · 9 min read · EA link

Summit on Existential Security 2023

Amy Labenz · Jan 27, 2023, 6:39 PM
120 points
6 comments · 2 min read · EA link

“How to Escape from the Simulation”—Seeds of Science call for reviewers

rogersbacon1 · Jan 26, 2023, 3:12 PM
7 points
0 comments · 1 min read · EA link

Biosecurity newsletters you should subscribe to

Swan 🔸 · Jan 29, 2023, 5:00 PM
104 points
14 comments · 1 min read · EA link

Speculative scenarios for climate-caused existential catastrophes

vincentzh · Jan 27, 2023, 5:01 PM
26 points
2 comments · 4 min read · EA link

Vacuum Decay: Expert Survey Results

Jess_Riedel · Mar 13, 2025, 6:31 PM
68 points
3 comments · 13 min read · EA link

[Question] Why are we not talking more about the metacrisis perspective on existential risk?

Alexander Herwix 🔸 · Jan 29, 2023, 9:35 AM
52 points
44 comments · 1 min read · EA link

Post-Mortem: McGill EA x Law Presents: Existential Advocacy with Prof. John Bliss

McGill EA x Law · Jan 31, 2023, 6:57 PM
11 points
0 comments · 4 min read · EA link

What Are The Biggest Threats To Humanity? (A Happier World video)

Jeroen Willems🔸 · Jan 31, 2023, 7:50 PM
17 points
1 comment · 15 min read · EA link

[Linkpost] Human-narrated audio version of “Is Power-Seeking AI an Existential Risk?”

Joe_Carlsmith · Jan 31, 2023, 7:19 PM
9 points
0 comments · 1 min read · EA link

Impact Academy is hiring an AI Governance Lead—more information, upcoming Q&A and $500 bounty

Lowe Lundin · Aug 29, 2023, 6:42 PM
9 points
1 comment · 1 min read · EA link

Prometheus Unleashed: Making sense of information hazards

basil.icious · Feb 15, 2023, 6:44 AM
0 points
0 comments · 4 min read · EA link
(basil08.github.io)

Scalable longtermist projects: Speedrun series – Introduction

Buhl · Feb 7, 2023, 6:43 PM
63 points
2 comments · 5 min read · EA link

Space colonization and the closed material economy

Arturo Macias · Feb 2, 2023, 3:37 PM
2 points
0 comments · 2 min read · EA link

The Existential Risk Alliance is hiring multiple Cause Area Leads

Rethink Priorities · Feb 2, 2023, 5:10 PM
20 points
0 comments · 4 min read · EA link
(careers.rethinkpriorities.org)

Why Billionaires Will Not Survive an AGI Extinction Event

funnyfranco · Mar 13, 2025, 7:03 PM
1 point
0 comments · 14 min read · EA link

Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?

Jam Kraprayoon · Feb 3, 2023, 2:10 PM
111 points
3 comments · 10 min read · EA link

RESILIENCER Workshop Report on Solar Radiation Modification Research and Existential Risk Released

Gideon Futerman · Feb 3, 2023, 6:58 PM
24 points
0 comments · 3 min read · EA link

I No Longer Feel Comfortable in EA

disgruntled_ea · Feb 5, 2023, 8:45 PM
2 points
29 comments · 1 min read · EA link

Critiques of prominent AI safety labs: Redwood Research

Omega · Mar 31, 2023, 8:58 AM
339 points
91 comments · 20 min read · EA link

Launching The Collective Intelligence Project: Whitepaper and Pilots

jasmine_wang · Feb 6, 2023, 5:00 PM
38 points
8 comments · 2 min read · EA link
(cip.org)

Technology is Power: Raising Awareness Of Technological Risks

Marc Wong · Feb 9, 2023, 3:13 PM
3 points
0 comments · 2 min read · EA link

Volcanic winters have happened before—should we prepare for the next one?

Stan Pinsent · Aug 7, 2024, 11:08 AM
18 points
1 comment · 3 min read · EA link

[Question] What are the best examples of object-level work that was done by (or at least inspired by) the longtermist EA community that concretely and legibly reduced existential risk?

Ben Snodin · Feb 11, 2023, 1:49 PM
118 points
18 comments · 1 min read · EA link

EA on nuclear war and expertise

bean · Aug 28, 2022, 4:59 AM
154 points
17 comments · 4 min read · EA link

FYI there is a German institute studying sociological aspects of existential risk

Max Görlitz · Feb 12, 2023, 5:35 PM
77 points
10 comments · 1 min read · EA link

Speedrun: Demonstrate the ability to rapidly scale food production in the case of nuclear winter

Buhl · Feb 13, 2023, 7:00 PM
39 points
2 comments · 16 min read · EA link

Philanthropy to the Right of Boom [Founders Pledge]

christian.r · Feb 14, 2023, 5:08 PM
83 points
11 comments · 20 min read · EA link

The Importance of AI Alignment, explained in 5 points

Daniel_Eth · Feb 11, 2023, 2:56 AM
50 points
4 comments · 13 min read · EA link

[Question] Huh. Bing thing got me real anxious about AI. Resources to help with that please?

Arvin · Feb 15, 2023, 4:55 PM
2 points
7 comments · 1 min read · EA link

Select Challenges with Criticism & Evaluation Around EA

Ozzie Gooen · Feb 10, 2023, 11:36 PM
111 points
5 comments · 6 min read · EA link
(quri.substack.com)

Interview with Roman Yampolskiy about AGI on The Reality Check

Darren McKee · Feb 18, 2023, 11:29 PM
27 points
0 comments · 1 min read · EA link
(www.trcpodcast.com)

Shallow Report on Nuclear War (Arsenal Limitation)

Joel Tan🔸 · Feb 21, 2023, 4:57 AM
44 points
13 comments · 29 min read · EA link

Preserving our heritage: Building a movement and a knowledge ark for current and future generations

rnk8 · Nov 30, 2023, 10:15 AM
−9 points
0 comments · 12 min read · EA link

[Question] Can we estimate the expected value of humanity’s future life (in 500 years)?

jackchang110 · Feb 25, 2023, 3:13 PM
5 points
5 comments · 1 min read · EA link

[Question] Which is more important for reducing s-risks, research on AI sentience or animal welfare?

jackchang110 · Feb 25, 2023, 2:20 AM
9 points
0 comments · 1 min read · EA link

Insects raised for food and feed — global scale, practices, and policy

abrahamrowe · Jun 29, 2020, 1:57 PM
95 points
13 comments · 29 min read · EA link

French 2D explainer videos on longtermism (English subtitles)

Gaetan_Selle 🔷 · Feb 27, 2023, 9:00 AM
20 points
0 comments · 1 min read · EA link

Seeking input on a list of AI books for a broader audience

Darren McKee · Feb 27, 2023, 10:40 PM
49 points
14 comments · 5 min read · EA link

Safe Stasis Fallacy

Davidmanheim · Feb 5, 2024, 10:54 AM
23 points
4 comments · 1 min read · EA link

ChatGPT not so clever or not so artificial as hyped to be?

Haris Shekeris · Mar 2, 2023, 6:16 AM
−7 points
2 comments · 1 min read · EA link

[Question] What are some sources related to big-picture AI strategy?

Jacob Watts🔸 · Mar 2, 2023, 5:04 AM
9 points
4 comments · 1 min read · EA link

Joscha Bach on Synthetic Intelligence [annotated]

Roman Leventov · Mar 2, 2023, 11:21 AM
8 points
0 comments · 9 min read · EA link
(www.jimruttshow.com)

Send funds to earthquake survivors in Turkey via GiveDirectly

GiveDirectly · Mar 2, 2023, 1:19 PM
38 points
1 comment · 3 min read · EA link

Distillation of The Offense-Defense Balance of Scientific Knowledge

Arjun Yadav · Aug 12, 2022, 7:01 AM
17 points
0 comments · 2 min read · EA link

Advice on communicating in and around the biosecurity policy community

ES · Mar 2, 2023, 9:32 PM
225 points
27 comments · 6 min read · EA link

[Question] Recent paper on climate tipping points

jackva · Mar 2, 2023, 11:11 PM
22 points
7 comments · 1 min read · EA link

New Artificial Intelligence quiz: can you beat ChatGPT?

AndreFerretti · Mar 3, 2023, 3:46 PM
29 points
3 comments · 1 min read · EA link

[Question] Mathematical models of Ethics

Victor-SB · Mar 8, 2023, 10:50 AM
6 points
1 comment · 1 min read · EA link

We can’t put numbers on everything, and trying to weakens our collective epistemics

ConcernedEAs · Mar 8, 2023, 3:09 PM
9 points
0 comments · 11 min read · EA link

Fake Meat and Real Talk 1 - Are We All Gonna Die? Yudkowsky and the Dangers of AI (Please RSVP)

David N · Mar 8, 2023, 8:40 PM
11 points
2 comments · 1 min read · EA link

A Roundtable for Safe AI (RSAI)?

Lara_TH · Mar 9, 2023, 12:11 PM
9 points
0 comments · 4 min read · EA link

Anthropic: Core Views on AI Safety: When, Why, What, and How

jonmenaster · Mar 9, 2023, 5:30 PM
107 points
6 comments · 22 min read · EA link
(www.anthropic.com)

Japan AI Alignment Conference

ChrisScammell · Mar 10, 2023, 9:23 AM
17 points
2 comments · 1 min read · EA link
(www.conjecture.dev)

How to make climate activists care for other existential risks

ExponentialDragon · Mar 12, 2023, 9:05 AM
22 points
7 comments · 2 min read · EA link

Two positions at Non-Trivial: Enable young people to tackle the world’s most pressing problems

Peter McIntyre · Oct 17, 2023, 11:46 AM
24 points
4 comments · 5 min read · EA link
(www.non-trivial.org)

The Silent War: AGI-on-AGI Warfare and What It Means For Us

funnyfranco · Mar 15, 2025, 3:32 PM
4 points
0 comments · 22 min read · EA link

Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization

Maxime Riché 🔸 · Mar 17, 2025, 1:11 PM
17 points
0 comments · 12 min read · EA link

Longtermist Implications of the Existence Neutrality Hypothesis

Maxime Riché 🔸 · Mar 20, 2025, 12:20 PM
19 points
0 comments · 21 min read · EA link

[Question] If an existential catastrophe occurs, how likely is it to wipe out all animal sentience?

JoA🔸 · Mar 16, 2025, 10:30 PM
11 points
2 comments · 2 min read · EA link

The Convergent Path to the Stars—Similar Utility Across Civilizations Challenges Extinction Prioritization

Maxime Riché 🔸 · Mar 18, 2025, 5:09 PM
6 points
1 comment · 20 min read · EA link

Bruce Kent (1929–2022)

technicalities · Jun 10, 2022, 2:03 PM
47 points
3 comments · 2 min read · EA link

On AI Weapons

kbog · Nov 13, 2019, 12:48 PM
76 points
10 comments · 30 min read · EA link

Shallow evaluations of longtermist organizations

NunoSempere · Jun 24, 2021, 3:31 PM
192 points
34 comments · 34 min read · EA link

2022 ALLFED highlights

Ross_Tieman · Nov 28, 2022, 5:37 AM
85 points
1 comment · 18 min read · EA link

Summary of Major Environmental Impacts of Nuclear Winter

Isabel · Jul 9, 2022, 6:23 AM
7 points
0 comments · 23 min read · EA link

EA resilience to catastrophes & ALLFED’s case study

Sonia_Cassidy · Mar 23, 2022, 7:03 AM
91 points
10 comments · 13 min read · EA link

How you can save expected lives for $0.20-$400 each and reduce X risk

Denkenberger🔸 · Nov 27, 2017, 2:23 AM
24 points
5 comments · 8 min read · EA link

Jacob Cates and Aron Mill: Scaling industrial food production in nuclear winter

EA Global · Oct 18, 2019, 6:05 PM
9 points
0 comments · 1 min read · EA link
(www.youtube.com)

Book review: The Doomsday Machine

L Rudolf L · Aug 18, 2021, 10:15 PM
21 points
0 comments · 16 min read · EA link
(strataoftheworld.blogspot.com)

Updated estimates of the severity of a nuclear war

Luisa_Rodriguez · Dec 19, 2019, 3:11 PM
76 points
2 comments · 5 min read · EA link

Saving expected lives at $10 apiece?

Denkenberger🔸 · Dec 14, 2016, 3:38 PM
15 points
23 comments · 2 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · Oct 18, 2021, 2:07 PM
30 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

S-risk Intro Fellowship

stefan.torges · Dec 20, 2021, 5:26 PM
52 points
1 comment · 1 min read · EA link

Apply to CLR as a researcher or summer research fellow!

Chi · Feb 1, 2022, 10:24 PM
62 points
5 comments · 10 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · Oct 16, 2021, 11:33 AM
77 points
22 comments · 24 min read · EA link

Peacefulness, nonviolence, and experientialist minimalism

Teo Ajantaival · May 23, 2022, 7:17 PM
62 points
14 comments · 29 min read · EA link

CLR’s Annual Report 2021

stefan.torges · Feb 26, 2022, 12:47 PM
79 points
0 comments · 12 min read · EA link

S-risk FAQ

Tobias_Baumann · Sep 18, 2017, 8:05 AM
29 points
8 comments · 8 min read · EA link

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · Jul 8, 2009, 12:42 PM
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)

Curing past sufferings and preventing s-risks via indexical uncertainty

turchin · Sep 27, 2018, 10:48 AM
1 point
18 comments · 4 min read · EA link

Promoting compassionate longtermism

jonleighton · Dec 7, 2022, 2:26 PM
117 points
5 comments · 12 min read · EA link

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

seanrson · Sep 14, 2021, 9:00 PM
67 points
10 comments · 1 min read · EA link

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges · Dec 9, 2022, 6:03 PM
169 points
4 comments · 13 min read · EA link

[Question] Where should I donate?

Eevee🔹 · Nov 22, 2021, 8:56 PM
29 points
10 comments · 1 min read · EA link

First S-Risk Intro Seminar

stefan.torges · Dec 8, 2020, 9:23 AM
70 points
2 comments · 1 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · May 7, 2021, 2:32 PM
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Why the expected numbers of farmed animals in the far future might be huge

Fai · Mar 4, 2022, 7:59 PM
134 points
29 comments · 16 min read · EA link

The History of AI Rights Research

Jamie_Harris · Aug 27, 2022, 8:14 AM
48 points
1 comment · 14 min read · EA link
(www.sentienceinstitute.org)

Simulators and Mindcrime

𝕮𝖎𝖓𝖊𝖗𝖆 · Dec 9, 2022, 3:20 PM
1 point
0 comments · 1 min read · EA link

Cause prioritization for downside-focused value systems

Lukas_Gloor · Jan 31, 2018, 2:47 PM
76 points
11 comments · 48 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · Jan 17, 2020, 1:28 PM
64 points
0 comments · 1 min read · EA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding · Mar 3, 2022, 11:31 AM
95 points
9 comments · 5 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · Dec 27, 2022, 12:58 AM
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

AI alignment researchers may have a comparative advantage in reducing s-risks

Lukas_Gloor · Feb 15, 2023, 1:01 PM
79 points
5 comments · 13 min read · EA link

‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom

Pablo · Mar 17, 2017, 6:48 AM
35 points
4 comments · 24 min read · EA link
(www.stafforini.com)

7 essays on Building a Better Future

Jamie_Harris · Jun 24, 2022, 2:28 PM
21 points
0 comments · 2 min read · EA link

[Link post] Optimistic “Longtermism” Is Terrible For Animals

BrianK · Sep 6, 2022, 10:38 PM
47 points
6 comments · 1 min read · EA link
(www.forbes.com)

21 criticisms of EA I’m thinking about

Peter Wildeford · Sep 1, 2022, 7:28 PM
210 points
26 comments · 9 min read · EA link

The Case for Shorttermism—by Robert Wright

Miquel Banchs-Piqué (prev. mikbp) · Aug 16, 2022, 8:00 PM
24 points
0 comments · 1 min read · EA link
(nonzero.substack.com)

Thread on LT/ut’s preference for billions of imminent deaths

Peter_Layman · Sep 14, 2022, 3:44 PM
−16 points
1 comment · 1 min read · EA link
(twitter.com)

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

wuschel · May 23, 2022, 11:49 AM
48 points
39 comments · 2 min read · EA link

“Pronatalists” may look to co-opt effective altruism or longtermism

pseudonym · Nov 17, 2022, 9:04 PM
34 points
25 comments · 4 min read · EA link
(www.businessinsider.com)

Paper summary: Staking our future: deontic long-termism and the non-identity problem (Andreas Mogensen)

Global Priorities Institute · Jun 7, 2022, 1:14 PM
25 points
6 comments · 6 min read · EA link
(globalprioritiesinstitute.org)

Phil Torres on Against Longtermism

Group Organizer · Jan 13, 2022, 6:04 AM
1 point
5 comments · 1 min read · EA link

EA is more than longtermism

frances_lorenz · May 3, 2022, 3:18 PM
160 points
99 comments · 5 min read · EA link

Against longtermism

Brian Lui · Aug 11, 2022, 5:37 AM
38 points
30 comments · 6 min read · EA link

The Base Rate of Longtermism Is Bad

ColdButtonIssues · Sep 5, 2022, 1:29 PM
225 points
27 comments · 7 min read · EA link

Against Longtermism: I welcome our robot overlords, and you should too!

MattBall · Jul 2, 2022, 2:05 AM
5 points
6 comments · 6 min read · EA link

Against longtermism: a care-centric approach?

Aron P · Oct 2, 2022, 5:00 AM
21 points
2 comments · 1 min read · EA link

Phil Torres’ article: “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’”

Ben_Eisenpress · Aug 6, 2021, 7:19 AM
6 points
13 comments · 1 min read · EA link

Optimism, AI risk, and EA blind spots

Justis · Sep 28, 2022, 5:21 PM
87 points
21 comments · 8 min read · EA link

Response to recent criticisms of EA “longtermist” thinking

kbog · Jan 6, 2020, 4:31 AM
27 points
46 comments · 11 min read · EA link

The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk

Ember · Aug 16, 2022, 7:48 PM
25 points
35 comments · 17 min read · EA link

[Question] Why doesn’t WWOTF mention the Bronze Age Collapse?

Eevee🔹 · Sep 19, 2022, 6:29 AM
16 points
4 comments · 1 min read · EA link

A Sequence Against Strong Longtermism

vadmas · Jul 22, 2021, 8:07 PM
20 points
14 comments · 1 min read · EA link

Winners of the EA Criticism and Red Teaming Contest

Lizka · Oct 1, 2022, 1:50 AM
226 points
41 comments · 19 min read · EA link

[Linkpost] Dan Luu: Futurist prediction methods and accuracy

Linch · Sep 15, 2022, 9:20 PM
64 points
7 comments · 4 min read · EA link
(danluu.com)

The reasonableness of special concerns

jwt · Aug 29, 2022, 12:10 AM
3 points
0 comments · 3 min read · EA link

[Linkpost] Eric Schwitzgebel: Against Longtermism

ag4000 · Jan 6, 2022, 2:15 PM
41 points
4 comments · 1 min read · EA link

Four Concerns Regarding Longtermism

Pat Andriola · Jun 6, 2022, 5:42 AM
82 points
15 comments · 7 min read · EA link

Remarks about Longtermism inspired by Torres’s ‘Against Longtermism’

carboniferous_umbraculum · Feb 2, 2022, 4:20 PM
43 points
0 comments · 24 min read · EA link

The Long Reflection as the Great Stagnation

Larks · Sep 1, 2022, 8:55 PM
43 points
2 comments · 8 min read · EA link

Should strong longtermists really want to minimize existential risk?

tobycrisford 🔸 · Dec 4, 2022, 4:56 PM
38 points
9 comments · 4 min read · EA link

Concerns/Thoughts over international aid, longtermism and philosophical notes on speaking with Larry Temkin.

Ben Yeoh · Jul 27, 2022, 7:51 PM
35 points
1 comment · 12 min read · EA link

Formalising the “Washing Out Hypothesis”

dwebb · Mar 25, 2021, 11:40 AM
101 points
27 comments · 12 min read · EA link

Preventing a US-China war as a policy priority

Matthew_Barnett · Jun 22, 2022, 6:07 PM
64 points
22 comments · 8 min read · EA link

[Question] EA views on the AUKUS security pact?

DavidZhang · Sep 29, 2021, 8:24 AM
28 points
14 comments · 1 min read · EA link

We interviewed 15 China-focused researchers on how to do good research

gabriel_wagner · Dec 19, 2022, 7:08 PM
49 points
4 comments · 23 min read · EA link

Dani Nedal: Risks from great-power competition

EA Global · Feb 13, 2020, 10:10 PM
20 points
0 comments · 16 min read · EA link
(www.youtube.com)

[Question] Books / book reviews on nuclear risk, WMDs, great power war?

MichaelA🔸 · Dec 15, 2020, 1:40 AM
16 points
16 comments · 1 min read · EA link

[Links post] Economists Chris Blattman and Noah Smith on China, Taiwan, and the likelihood of war

Stephen Clare · Nov 7, 2022, 12:22 PM
74 points
1 comment · 2 min read · EA link
(chrisblattman.com)

Problem areas beyond 80,000 Hours’ current priorities

Arden Koehler · Jun 22, 2020, 12:49 PM
281 points
62 comments · 15 min read · EA link

Brian Tse: Risks from Great Power Conflicts

EA Global · Mar 11, 2019, 3:02 PM
23 points
2 comments · 13 min read · EA link
(www.youtube.com)

[Question] Will the next global conflict be more like World War I?

FJehn · Mar 26, 2022, 2:57 PM
7 points
5 comments · 2 min read · EA link

Alliance to Feed the Earth in Disasters (ALLFED) Progress Report & Giving Tuesday Appeal

Denkenberger🔸 · Nov 21, 2018, 5:20 AM
21 points
3 comments · 8 min read · EA link

Should Effective Altruism be at war with North Korea?

BenHoffman · May 5, 2019, 1:44 AM
−14 points
8 comments · 5 min read · EA link
(benjaminrosshoffman.com)

Linkpost: The Scientists, the Statesmen, and the Bomb

Lauro Langosco · Jul 8, 2022, 10:46 AM
13 points
5 comments · 3 min read · EA link
(www.bismarckanalysis.com)

Bugout Bags for Disasters

Fin · Mar 8, 2022, 5:03 PM
10 points
0 comments · 4 min read · EA link

Why Don’t We Use Chemical Weapons Anymore?

Dale · Apr 23, 2020, 1:25 AM
28 points
4 comments · 3 min read · EA link
(acoup.blog)

The Mystery of the Cuban missile crisis

Nathan_Barnard · May 5, 2022, 10:51 PM
10 points
4 comments · 9 min read · EA link

[Question] What are the best ways to encourage de-escalation in regards to Ukraine?

oh54321 · Oct 9, 2022, 11:15 AM
13 points
4 comments · 1 min read · EA link

Peter Wildeford on Forecasting Nuclear Risk and why EA should fund scalable non-profits

Michaël Trazzi · Apr 13, 2022, 4:29 PM
9 points
1 comment · 3 min read · EA link
(theinsideview.github.io)

[Question] What is the strongest case for nuclear weapons?

Garrison · Apr 12, 2022, 7:32 PM
6 points
3 comments · 1 min read · EA link

The chance of accidental nuclear war has been going down

Peter Wildeford · May 31, 2022, 2:48 PM
66 points
5 comments · 1 min read · EA link
(www.pasteurscube.com)

AMA: Joan Rohlfing, President and COO of the Nuclear Threat Initiative

Joan Rohlfing · Dec 6, 2021, 8:58 PM
74 points
35 comments · 1 min read · EA link

Notes on “The Bomb: Presidents, Generals, and the Secret History of Nuclear War” (2020)

MichaelA🔸 · Feb 6, 2021, 11:10 AM
18 points
5 comments · 8 min read · EA link

Early Reflections and Resources on the Russian Invasion of Ukraine

SethBaum · Mar 18, 2022, 2:54 PM
57 points
3 comments · 8 min read · EA link

Risks from the UK’s planned increase in nuclear warheads

Matt Goodman · Aug 15, 2021, 8:14 PM
23 points
8 comments · 2 min read · EA link

Samotsvety Nuclear Risk Forecasts — March 2022

NunoSempere · Mar 10, 2022, 6:52 PM
155 points
54 comments · 6 min read · EA link

Notes on “The Myth of the Nuclear Revolution” (Lieber & Press, 2020)

imp4rtial 🔸 · May 24, 2022, 3:02 PM
42 points
2 comments · 20 min read · EA link

Conflict and poverty (or should we tackle poverty in nuclear contexts more?)

Sanjay · Mar 6, 2020, 9:59 PM
13 points
0 comments · 7 min read · EA link

Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it?

TopherHallquist · Jul 9, 2015, 4:09 PM
8 points
20 comments · 1 min read · EA link

What’s the big deal about hypersonic missiles?

jia · May 18, 2020, 7:17 AM
40 points
9 comments · 5 min read · EA link

Being the person who doesn’t launch nukes: new EA cause?

MichaelDickens · Aug 6, 2022, 3:44 AM
9 points
3 comments · 1 min read · EA link

Ask a Nuclear Expert

Group Organizer · Mar 3, 2022, 11:28 AM
5 points
0 comments · 1 min read · EA link

The Threat of Nuclear Terrorism MOOC [link]

RyanCarey · Oct 19, 2017, 12:31 PM
8 points
1 comment · 1 min read · EA link

Some AI Governance Research Ideas

MarkusAnderljung · Jun 3, 2021, 10:51 AM
102 points
5 comments · 2 min read · EA link

Notes on ‘Atomic Obsession’ (2009)

lukeprog · Oct 26, 2019, 12:30 AM
62 points
16 comments · 8 min read · EA link

Overview of Rethink Priorities’ work on risks from nuclear weapons

MichaelA🔸 · Jun 10, 2021, 6:48 PM
43 points
1 comment · 3 min read · EA link

Is it ethical to expand nuclear energy use?

simonfriederich · Nov 5, 2022, 10:38 AM
12 points
5 comments · 3 min read · EA link

Opportunities that surprised us during our Clearer Thinking Regrants program

spencerg · Nov 7, 2022, 1:09 PM
116 points
5 comments · 9 min read · EA link

Russia-Ukraine Conflict: Forecasting Nuclear Risk in 2022

Metaculus · Mar 24, 2022, 9:03 PM
23 points
1 comment · 12 min read · EA link

Reducing Nuclear Risk Through Improved US-China Relations

Metaculus · Mar 21, 2022, 11:50 AM
31 points
19 comments · 5 min read · EA link

[Question] How many times would nuclear weapons have been used if every state had them since 1950?

eca · May 4, 2021, 3:34 PM
16 points
13 comments · 1 min read · EA link

Why I think there’s a one-in-six chance of an imminent global nuclear war

Tegmark · Oct 8, 2022, 11:25 PM
53 points
24 comments · 1 min read · EA link

Modeling responses to changes in nuclear risk

Nathan_Barnard · Jun 23, 2022, 12:50 PM
7 points
0 comments · 5 min read · EA link

Announcing the first issue of Asterisk

Clara Collier · Nov 21, 2022, 6:51 PM
275 points
47 comments · 1 min read · EA link

Does the US nuclear policy still target cities?

Jeffrey Ladish · Oct 2, 2019, 5:46 PM
32 points
0 comments · 10 min read · EA link

[EAG talk] The likelihood and severity of a US-Russia nuclear exchange (Rodriguez, 2019)

Will Aldred · Jul 3, 2022, 1:53 PM
32 points
0 comments · 2 min read · EA link
(www.youtube.com)

Getting Nuclear Policy Right Is Hard

Gentzel · Sep 19, 2017, 1:00 AM
16 points
4 comments · 1 min read · EA link

The Nuclear Threat Initiative is not only nuclear—notes from a call with NTI

Sanjay · Jun 26, 2020, 5:29 PM
29 points
2 comments · 6 min read · EA link

[Question] I’m interviewing sometimes EA critic Jeffrey Lewis (AKA Arms Control Wonk) about what we get right and wrong when it comes to nuclear weapons and nuclear security. What should I ask him?

Robert_Wiblin · Aug 26, 2022, 6:06 PM
33 points
8 comments · 1 min read · EA link

Book review: The Doomsday Machine

eukaryote · Sep 10, 2018, 1:43 AM
49 points
6 comments · 5 min read · EA link

China’s Z-Machine, a test facility for nuclear weapons

EdoArad · Dec 13, 2018, 7:03 AM
11 points
0 comments · 1 min read · EA link
(www.scmp.com)

Announcing Insights for Impact

Christian Pearson · Jan 4, 2023, 7:00 AM
80 points
6 comments · 1 min read · EA link

Off-Earth Governance

EdoArad · Sep 6, 2019, 7:26 PM
18 points
3 comments · 2 min read · EA link

Stuart Armstrong: The far future of intelligent life across the universe

EA Global · Jun 8, 2018, 7:15 AM
19 points
0 comments · 12 min read · EA link
(www.youtube.com)

Announcing the Center for Space Governance

Space Governance · Jul 10, 2022, 1:53 PM
73 points
6 comments · 1 min read · EA link

Leaving Earth

Arjun Khemani · Jul 6, 2022, 10:45 AM
5 points
0 comments · 6 min read · EA link
(arjunkhemani.com)

All Possible Views About Humanity’s Future Are Wild

Holden Karnofsky · Jul 13, 2021, 4:57 PM
217 points
47 comments · 8 min read · EA link
(www.cold-takes.com)

Announcing the Space Futures Initiative

Carson Ezell · Sep 12, 2022, 12:37 PM
71 points
3 comments · 2 min read · EA link

[Question] What analysis has been done of space colonization as a cause area?

Eli Rose · Oct 9, 2019, 8:33 PM
14 points
8 comments · 1 min read · EA link

Space Exploration & Satellites on Our World in Data

EdMathieu · Jun 14, 2022, 12:05 PM
57 points
2 comments · 1 min read · EA link
(ourworldindata.org)

Will we eventually be able to colonize other stars? Notes from a preliminary review

Nick_Beckstead · Jun 22, 2014, 6:19 PM
30 points
7 comments · 32 min read · EA link

Lunar Colony

purplepeople · Dec 19, 2016, 4:43 PM
2 points
26 comments · 1 min read · EA link

Kurzgesagt’s most recent video promoting the introduction of wildlife to other planets is unethical and irresponsible

David van Beveren · Dec 11, 2022, 8:43 PM
101 points
33 comments · 2 min read · EA link

Save the Date: EAGxMars

OllieBase · Apr 1, 2022, 11:44 AM
148 points
15 comments · 1 min read · EA link

[Podcast] Ajeya Cotra on worldview diversification and how big the future could be

Eevee🔹 · Jan 22, 2021, 11:57 PM
57 points
20 comments · 1 min read · EA link
(80000hours.org)

When to diversify? Breaking down mission-correlated investing

jh · Nov 29, 2022, 11:18 AM
33 points
2 comments · 8 min read · EA link

“Far Coordination”

𝕮𝖎𝖓𝖊𝖗𝖆 · Nov 23, 2022, 5:14 PM
5 points
0 comments · 1 min read · EA link

Test Your Knowledge of the Long-Term Future

AndreFerretti · Dec 10, 2022, 11:01 AM
22 points
0 comments · 1 min read · EA link

Five Areas I Wish EAs Gave More Focus

Prometheus · Oct 27, 2022, 6:13 AM
8 points
14 comments · 4 min read · EA link

[Question] Does Utilitarian Longtermism Imply Directed Panspermia?

Ahrenbach · Apr 24, 2020, 6:15 PM
4 points
17 comments · 1 min read · EA link

Power Laws of Value

tylermjohn · Mar 17, 2025, 10:10 AM
44 points
16 comments · 13 min read · EA link

Information security considerations for AI and the long term future

Jeffrey Ladish · May 2, 2022, 8:53 PM
134 points
8 comments · 11 min read · EA link

Narration: Reducing long-term risks from malevolent actors

D0TheMath · Jul 15, 2021, 4:26 PM
23 points
0 comments · 1 min read · EA link
(anchor.fm)

Bulking information additionalities in global development for medium-term local prosperity

brb243 · Apr 11, 2022, 5:52 PM
4 points
0 comments · 4 min read · EA link

How big are risks from non-state actors? Base rates for terrorist attacks

rosehadshar · Feb 16, 2022, 10:20 AM
54 points
3 comments · 19 min read · EA link

[Question] Are there highly leveraged donation opportunities to prevent wars and dictatorships?

Dawn Drescher · Feb 26, 2022, 3:31 AM
58 points
8 comments · 1 min read · EA link

Kelsey Piper’s recent interview of SBF

Agustín Covarrubias 🔸 · Nov 16, 2022, 8:30 PM
292 points
155 comments · 2 min read · EA link
(www.vox.com)

[Question] Most harmful people in history?

SiebeRozendal · Sep 11, 2022, 3:04 AM
17 points
9 comments · 1 min read · EA link

Case for emergency response teams

technicalities · Apr 5, 2022, 11:08 AM
249 points
50 comments · 5 min read · EA link

An argument that EA should focus more on climate change

Ann Garth 🔸 · Dec 8, 2020, 2:48 AM
30 points
3 comments · 10 min read · EA link

Persuasion Tools: AI takeover without AGI or agency?

kokotajlod · Nov 20, 2020, 4:56 PM
15 points
5 comments · 10 min read · EA link

Robert Wiblin: Making sense of long-term indirect effects

EA Global · Aug 6, 2016, 12:40 AM
14 points
0 comments · 17 min read · EA link
(www.youtube.com)

What can we learn from a short preview of a super-eruption and what are some tractable ways of mitigating it

Mike Cassidy · Feb 3, 2022, 11:26 AM
53 points
0 comments · 6 min read · EA link

On the Vulnerable World Hypothesis

Catherine Brewer · Aug 1, 2022, 12:55 PM
44 points
12 comments · 14 min read · EA link

AGI in a vulnerable world

AI Impacts · Apr 2, 2020, 3:43 AM
17 points
0 comments · 1 min read · EA link
(aiimpacts.org)

“The Vulnerable World Hypothesis” (Nick Bostrom’s new paper)

Hauke Hillebrandt · Nov 9, 2018, 11:20 AM
24 points
6 comments · 1 min read · EA link
(nickbostrom.com)

Civilizational vulnerabilities

Vasco Grilo🔸 · Apr 22, 2022, 9:37 AM
7 points
0 comments · 3 min read · EA link

Influencing United Nations Space Governance

Carson Ezell · May 9, 2022, 5:44 PM
30 points
0 comments · 11 min read · EA link

William Marshall: Lunar colony

EA Global · Aug 11, 2017, 8:19 AM
7 points
0 comments · 1 min read · EA link
(www.youtube.com)

[Question] Is there a subfield of economics devoted to “fragility vs resilience”?

steve6320 · Jul 21, 2020, 2:21 AM
23 points
5 comments · 1 min read · EA link

David Denkenberger: Loss of Industrial Civilization and Recovery (Workshop)

Denkenberger🔸 · Feb 19, 2019, 3:58 PM
27 points
1 comment · 15 min read · EA link

Should we buy coal mines?

John G. Halstead · May 4, 2022, 7:28 AM
216 points
31 comments · 7 min read · EA link

What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research)

Luisa_Rodriguez · Oct 19, 2022, 5:35 PM
83 points
13 comments · 32 min read · EA link

[Question] Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe?

Mati_Roy · Feb 17, 2020, 7:47 AM
9 points
5 comments · 1 min read · EA link

The Domestication of Zebras

Further or Alternatively · Sep 9, 2022, 10:58 AM
15 points
20 comments · 2 min read · EA link

Groundwater Depletion: contributor to global civilization collapse.

RickJS · Dec 3, 2022, 7:09 AM
11 points
6 comments · 3 min read · EA link
(drive.google.com)

Humanities Research Ideas for Longtermists

Lizka · Jun 9, 2021, 4:39 AM
151 points
13 comments · 13 min read · EA link

User-Friendly Intro Post

James Odene [User-Friendly] · Jun 23, 2022, 11:26 AM
117 points
7 comments · 6 min read · EA link

[Question] Book on Civilisational Collapse?

Milton · Oct 7, 2020, 8:51 AM
9 points
6 comments · 1 min read · EA link

AGI safety and losing electricity/industry resilience cost-effectiveness

Ross_Tieman · Nov 17, 2019, 8:42 AM
31 points
10 comments · 37 min read · EA link

The Case for a Strategic U.S. Coal Reserve for Climate and Catastrophes

ColdButtonIssues · May 5, 2022, 1:24 AM
30 points
3 comments · 5 min read · EA link

Maritime capability and post-catastrophe resilience.

Tom Gardiner · Jul 14, 2022, 11:29 AM
32 points
7 comments · 6 min read · EA link

Advice Wanted on Expanding an EA Project

Denkenberger🔸 · Apr 23, 2016, 11:20 PM
4 points
3 comments · 2 min read · EA link

Some history topics it might be very valuable to investigate

MichaelA🔸 · Jul 8, 2020, 2:40 AM
91 points
34 comments · 6 min read · EA link

Notes on Henrich’s “The WEIRDest People in the World” (2020)

MichaelA🔸 · Mar 25, 2021, 5:04 AM
44 points
4 comments · 3 min read · EA link

Luisa Rodriguez: Doing empirical global priorities research — the question of civilizational collapse and recovery

EA Global · Oct 25, 2020, 5:48 AM
11 points
0 comments · 1 min read · EA link
(www.youtube.com)

Notes on Schelling’s “Strategy of Conflict” (1960)

MichaelA🔸 · Jan 29, 2021, 8:56 AM
20 points
4 comments · 8 min read · EA link

Research exercise: 5-minute inside view on how to reduce risk of nuclear war

Emrik · Oct 23, 2022, 12:42 PM
16 points
2 comments · 6 min read · EA link

Podcast: Samo Burja on the war in Ukraine, avoiding nuclear war and the longer term implications

Gus Docker · Mar 11, 2022, 6:50 PM
4 points
6 comments · 14 min read · EA link
(www.utilitarianpodcast.com)

Request for proposals: Help Open Philanthropy quantify biological risk

djbinder · May 12, 2022, 9:28 PM
137 points
10 comments · 7 min read · EA link

“A Creepy Feeling”: Nixon’s Decision to Disavow Biological Weapons

TW123 · Sep 30, 2022, 3:17 PM
48 points
3 comments · 17 min read · EA link

Should someone start a grassroots campaign for the USA to recognise the State of Palestine?

freedomandutility · May 11, 2021, 3:29 PM
−2 points
4 comments · 1 min read · EA link

The Germy Paradox – The empty sky: A history of state biological weapons programs

eukaryote · Sep 24, 2019, 5:26 AM
24 points
0 comments · 1 min read · EA link
(eukaryotewritesblog.com)

US House Vote on Support for Yemen War

Radical Empath Ismam · Dec 12, 2022, 2:13 AM
−4 points
0 comments · 1 min read · EA link
(theintercept.com)

Open Philanthropy Shallow Investigation: Civil Conflict Reduction

Lauren Gilbert · Apr 12, 2022, 6:18 PM
122 points
12 comments · 24 min read · EA link

The Germy Paradox: An Introduction

eukaryote · Sep 24, 2019, 5:18 AM
48 points
4 comments · 3 min read · EA link
(eukaryotewritesblog.com)

Groundwater crisis: a threat of civilization collapse

RickJS · Dec 24, 2022, 9:21 PM
0 points
0 comments · 3 min read · EA link
(drive.google.com)

An Overview of Political Science (Policy and International Relations Primer for EA, Part 3)

Davidmanheim · Jan 5, 2020, 12:54 PM
22 points
4 comments · 10 min read · EA link

EA and the Possible Decline of the US: Very Rough Thoughts

Cullen 🔸 · Jan 8, 2021, 7:30 AM
56 points
19 comments · 4 min read · EA link

[Question] Ukraine: How can a regular person effectively help their country during war?

Valmothy · Feb 26, 2022, 10:58 AM
49 points
19 comments · 1 min read · EA link

[Cause Exploration Prizes] Pocket Parks

Open Philanthropy · Aug 29, 2022, 11:01 AM
7 points
0 comments · 11 min read · EA link

A counterfactual QALY for USD 2.60–28.94?

brb243 · Sep 6, 2020, 9:45 PM
37 points
6 comments · 5 min read · EA link

900+ Forecasters on Whether Russia Will Invade Ukraine

Metaculus · Feb 19, 2022, 1:29 PM
51 points
0 comments · 4 min read · EA link
(metaculus.medium.com)

The Germy Paradox – Filters: A taboo

eukaryote · Oct 19, 2019, 12:14 AM
17 points
2 comments · 9 min read · EA link
(eukaryotewritesblog.com)

Rough attempt to profile charities which support Ukrainian war relief in terms of their cost-effectiveness.

Michael · Feb 27, 2022, 12:51 AM
29 points
5 comments · 4 min read · EA link

Evaluating Communal Violence from an Effective Altruist Perspective

Frank Fredericks · Aug 13, 2019, 7:38 PM
16 points
4 comments · 8 min read · EA link

Some thoughts on risks from narrow, non-agentic AI

richard_ngo · Jan 19, 2021, 12:07 AM
36 points
2 comments · 8 min read · EA link

The case for not invading Crimea

kbog · Jan 19, 2023, 6:37 AM
12 points
16 comments · 19 min read · EA link

Onshore algae farms could feed the world

Tyner · Oct 10, 2022, 5:44 PM
11 points
0 comments · 1 min read · EA link
(tos.org)

EA and the current funding situation

William_MacAskill · May 10, 2022, 2:26 AM
567 points
185 comments · 24 min read · EA link

Potatoes: A Critical Review

Pablo Villalobos · May 10, 2022, 3:27 PM
120 points
27 comments · 9 min read · EA link
(docs.google.com)

On famines, food technologies and global shocks

Ramiro · Oct 12, 2021, 2:28 PM
16 points
2 comments · 4 min read · EA link

Reslab Request for Information: EA hardware projects

Joel Becker · Oct 26, 2022, 11:38 AM
46 points
15 comments · 1 min read · EA link

Safety Sells: For-profit investing into civilizational resilience (food security, biosecurity)

FGH · Jan 3, 2023, 12:24 PM
30 points
4 comments · 6 min read · EA link

U.S. Executive branch appointments: why you may want to pursue one and tips for how to do so

Demosthenes_USA · Nov 28, 2020, 7:20 PM
65 points
6 comments · 12 min read · EA link

[Review and notes] How Democracy Ends—David Runciman

Ben · Feb 13, 2020, 10:30 PM
31 points
1 comment · 5 min read · EA link

An Argument for Why the Future May Be Good

Ben_West🔸 · Jul 19, 2017, 10:03 PM
51 points
30 comments · 4 min read · EA link

[Question] Where the QALY’s at in political science?

Timothy_Liptrot · Aug 5, 2020, 5:04 AM
7 points
7 comments · 1 min read · EA link

EA reading list: suffering-focused ethics

richard_ngo · Aug 3, 2020, 9:40 AM
43 points
3 comments · 1 min read · EA link

Ben Garfinkel: The future of surveillance

EA Global · Jun 8, 2018, 7:51 AM
19 points
0 comments · 11 min read · EA link
(www.youtube.com)

[Question] Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?

MichaelA🔸 · Feb 2, 2021, 3:52 AM
20 points
21 comments · 1 min read · EA link

Ideological engineering and social control: A neglected topic in AI safety research?

Geoffrey Miller · Sep 1, 2017, 6:52 PM
17 points
8 comments · 2 min read · EA link

Cause Area: Human Rights in North Korea

Dawn Drescher · Nov 20, 2017, 8:52 PM
64 points
12 comments · 20 min read · EA link

What is a ‘broad intervention’ and what is a ‘narrow intervention’? Are we confusing ourselves?

Robert_Wiblin · Dec 19, 2015, 4:12 PM
20 points
3 comments · 2 min read · EA link

We Should Give Extinction Risk an Acronym

Charlie_Guthmann · Oct 19, 2022, 7:16 AM
21 points
15 comments · 1 min read · EA link