
S-risk

Last edit: 16 Jun 2022 19:20 UTC by Pablo

An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Evaluation

80,000 Hours rates s-risks a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[1]

Further reading

Althaus, David & Lukas Gloor (2019) Reducing risks of astronomical suffering: a neglected priority, Center on Long-Term Risk, August.

Baumann, Tobias (2017) S-risks: an introduction, Center for Reducing Suffering, August 15.

Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk | hellish existential catastrophe | pain and suffering | suffering-focused ethics

  1. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.

Beginner’s guide to reducing s-risks [link-post]

Center on Long-Term Risk · 17 Oct 2023 0:51 UTC
130 points
3 comments · 3 min read · EA link
(longtermrisk.org)

New book on s-risks

Tobias_Baumann · 26 Oct 2022 12:04 UTC
294 points
27 comments · 1 min read · EA link

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
26 points
1 comment · 1 min read · EA link
(s-risks.org)

AI alignment researchers may have a comparative advantage in reducing s-risks

Lukas_Gloor · 15 Feb 2023 13:01 UTC
79 points
5 comments · 13 min read · EA link

Why s-risks are the worst existential risks, and how to prevent them

Max_Daniel · 2 Jun 2017 8:48 UTC
13 points
1 comment · 22 min read · EA link
(www.youtube.com)

Reducing long-term risks from malevolent actors

David_Althaus · 29 Apr 2020 8:55 UTC
352 points
96 comments · 37 min read · EA link

S-risk FAQ

Tobias_Baumann · 18 Sep 2017 8:05 UTC
29 points
8 comments · 8 min read · EA link

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
43 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

Addressing challenges for s-risk reduction: Toward positive common-ground proxies

Teo Ajantaival · 22 Mar 2025 17:50 UTC
52 points
1 comment · 17 min read · EA link

Why Donate to the Center for Reducing Suffering?

Center for Reducing Suffering · 21 Nov 2025 15:45 UTC
27 points
1 comment · 3 min read · EA link

Understanding Sadism

Jim Buhler · 18 Aug 2025 13:25 UTC
20 points
2 comments · 8 min read · EA link

[Question] Looking for communities focused on s-risks

Bulat · 25 Sep 2025 18:28 UTC
15 points
5 comments · 1 min read · EA link

S-Risks: Fates Worse Than Extinction

A.G.G. Liu · 4 May 2024 15:30 UTC
104 points
9 comments · 6 min read · EA link
(www.lesswrong.com)

The Future Might Not Be So Great

Jacy · 30 Jun 2022 13:01 UTC
145 points
119 comments · 34 min read · EA link
(www.sentienceinstitute.org)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni · 1 Jul 2021 21:01 UTC
148 points
10 comments · 32 min read · EA link

Cause prioritization for downside-focused value systems

Lukas_Gloor · 31 Jan 2018 14:47 UTC
78 points
11 comments · 48 min read · EA link

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
41 points
3 comments · 1 min read · EA link
(s-risks.org)

Announcing the CLR Foundations Course and CLR S-Risk Seminars

James Faville · 19 Nov 2024 1:18 UTC
52 points
2 comments · 3 min read · EA link

Who is protecting animals in the long-term future?

alene · 21 Mar 2022 16:49 UTC
169 points
33 comments · 3 min read · EA link

[Question] Debates on reducing long-term s-risks?

jackchang110 · 6 Apr 2023 1:26 UTC
13 points
2 comments · 1 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · 3 Apr 2023 2:11 UTC
12 points
0 comments · 4 min read · EA link

The Odyssean Process

Odyssean Institute · 24 Nov 2023 13:48 UTC
25 points
6 comments · 1 min read · EA link
(www.odysseaninstitute.org)

What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism?

rime · 18 Apr 2023 9:58 UTC
44 points
0 comments · 2 min read · EA link

The option value argument doesn’t work when it’s most needed

Winston · 24 Oct 2023 19:40 UTC
138 points
7 comments · 6 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · 16 Oct 2021 11:33 UTC
80 points
22 comments · 24 min read · EA link

Apply to CLR as a researcher or summer research fellow!

Chi · 1 Feb 2022 22:24 UTC
62 points
5 comments · 10 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA🔸 · 15 Jul 2020 12:28 UTC
81 points
7 comments · 7 min read · EA link

Sentience Institute 2021 End of Year Summary

Ali · 26 Nov 2021 14:40 UTC
66 points
5 comments · 6 min read · EA link
(www.sentienceinstitute.org)

Online Working / Community Meetup for the Abolition of Suffering

Ruth_Seleo · 31 May 2022 9:16 UTC
7 points
5 comments · 1 min read · EA link

Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.

Chi · 13 Feb 2024 22:24 UTC
95 points
7 comments · 2 min read · EA link

Reasons for optimism about measuring malevolence to tackle x- and s-risks

Jamie_Harris · 2 Apr 2024 10:26 UTC
85 points
12 comments · 8 min read · EA link

S-risks, X-risks, and Ideal Futures

OscarD🔸 · 18 Jun 2024 15:12 UTC
15 points
6 comments · 1 min read · EA link

Podcast episode with Michael St. Jules

Elijah Whipple · 8 May 2025 18:54 UTC
48 points
3 comments · 1 min read · EA link

Question about terminology for lesser X-risks and S-risks

Laura Leighton · 8 Aug 2022 4:39 UTC
9 points
3 comments · 1 min read · EA link

Should we expect the future to be good?

Neil Crawford · 30 Apr 2025 0:45 UTC
38 points
1 comment · 14 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
64 points
0 comments · 1 min read · EA link

Classifying sources of AI x-risk

Sam Clarke · 8 Aug 2022 18:18 UTC
41 points
4 comments · 3 min read · EA link

The History of AI Rights Research

Jamie_Harris · 27 Aug 2022 8:14 UTC
48 points
1 comment · 14 min read · EA link
(www.sentienceinstitute.org)

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · 8 Jul 2009 12:42 UTC
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)

The Big Slurp could eat the Boltzmann brains

Mark McDonald · 25 Feb 2025 1:13 UTC
13 points
2 comments · 4 min read · EA link

Why the expected numbers of farmed animals in the far future might be huge

Fai · 4 Mar 2022 19:59 UTC
144 points
29 comments · 16 min read · EA link

Peacefulness, nonviolence, and experientialist minimalism

Teo Ajantaival · 23 May 2022 19:17 UTC
62 points
14 comments · 29 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · 7 May 2021 14:32 UTC
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Launching the EAF Fund

stefan.torges · 28 Nov 2018 17:13 UTC
60 points
14 comments · 4 min read · EA link

[Question] Where should I donate?

Eevee🔹 · 22 Nov 2021 20:56 UTC
29 points
10 comments · 1 min read · EA link

Humanity’s vast future and its implications for cause prioritization

Eevee🔹 · 26 Jul 2022 5:04 UTC
38 points
3 comments · 5 min read · EA link
(sunyshore.substack.com)

The Case for Animal-Inclusive Longtermism

Eevee🔹 · 17 Feb 2024 0:07 UTC
68 points
7 comments · 30 min read · EA link
(brill.com)

Planning ‘resistance’ to illiberalism and authoritarianism

david_reinstein · 16 Jun 2024 17:21 UTC
29 points
2 comments · 2 min read · EA link
(www.nytimes.com)

Mediocre AI safety as existential risk

technicalities · 16 Mar 2022 11:50 UTC
52 points
12 comments · 3 min read · EA link

Cosmic AI safety

Magnus Vinding · 6 Dec 2024 22:32 UTC
24 points
5 comments · 6 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · 18 Oct 2021 14:07 UTC
30 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

Michael St Jules 🔸 · 3 Oct 2021 16:55 UTC
48 points
19 comments · 1 min read · EA link

New eBook: Essays on UFOs and Related Conjectures

Magnus Vinding · 4 Aug 2024 7:34 UTC
27 points
3 comments · 7 min read · EA link

Why suffering risks are the worst existential risks and how we can prevent them

EA Italy · 17 Jan 2023 11:14 UTC
1 point
0 comments · 1 min read · EA link

Promoting compassionate longtermism

jonleighton · 7 Dec 2022 14:26 UTC
117 points
5 comments · 12 min read · EA link

We Probably Shouldn’t Solve Consciousness

Silica · 10 Feb 2024 7:12 UTC
33 points
5 comments · 16 min read · EA link

[Question] [Seeking Advice] 19y/o deciding whether to drop dentistry double major for single CS major to save 4 years and focus on AI risks

jackchang110 · 22 Nov 2025 15:32 UTC
22 points
4 comments · 4 min read · EA link

[Linkpost] My moral view: Reducing suffering, ‘how to be’ as fundamental to morality, no positive value, cons of grand theory, and more—By Simon Knutsson

Alistair Webster · 25 Aug 2023 12:53 UTC
35 points
2 comments · 2 min read · EA link
(centerforreducingsuffering.org)

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · 14 Oct 2023 21:18 UTC
28 points
4 comments · 9 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · 27 Dec 2022 0:58 UTC
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

Could a ‘permanent global totalitarian state’ ever be permanent?

Geoffrey Miller · 23 Aug 2022 17:15 UTC
39 points
17 comments · 1 min read · EA link

A selection of some writings and considerations on the cause of artificial sentience

Raphaël_Pesah · 10 Aug 2023 18:23 UTC
49 points
1 comment · 10 min read · EA link

[Question] What is the best way to explain that s-risks are important—basically, why existence is not inherently better than non existence? Intending this for someone mostly unfamiliar with EA, like someone in an intro program

shepardriley · 8 Nov 2024 18:12 UTC
2 points
0 comments · 1 min read · EA link

Center for Reducing Suffering (CRS) S-Risk Introductory Fellowship applications are open!

Zoé Roy-Stang (CRS) · 3 Dec 2025 21:32 UTC
6 points
0 comments · 1 min read · EA link

Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”

Jens Aslaug 🔸 · 12 Sep 2023 13:11 UTC
51 points
20 comments · 17 min read · EA link

Rewilding Is Extremely Bad

Bentham's Bulldog · 18 Nov 2025 17:44 UTC
8 points
11 comments · 7 min read · EA link

New s-risks audiobook available now

Alistair Webster · 24 May 2023 20:27 UTC
87 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

Digital Minds Must Be Protected from Torture

bcforstadt · 27 Nov 2025 20:05 UTC
5 points
0 comments · 2 min read · EA link

The Lightcone solution to the transmitter room problem

OGTutzauer🔸 · 29 Jan 2025 10:03 UTC
10 points
6 comments · 3 min read · EA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding · 3 Mar 2022 11:31 UTC
99 points
9 comments · 5 min read · EA link

CLR Summer Research Fellowship 2024

Center on Long-Term Risk · 15 Feb 2024 18:26 UTC
89 points
2 comments · 8 min read · EA link

Center on Long-Term Risk: Summer Research Fellowship 2025

Center on Long-Term Risk · 26 Mar 2025 17:28 UTC
44 points
0 comments · 1 min read · EA link
(longtermrisk.org)

‘Essays on Longtermism’ Competition Winners

Toby Tremlett🔹 · 13 Nov 2025 9:43 UTC
65 points
5 comments · 2 min read · EA link

Will Sentience Make AI’s Morality Better?

Ronen Bar · 18 May 2025 4:34 UTC
27 points
4 comments · 10 min read · EA link

Against Making Up Our Conscious Minds

Silica · 10 Feb 2024 7:12 UTC
13 points
0 comments · 5 min read · EA link

New Book: “Minimalist Axiologies: Alternatives to ‘Good Minus Bad’ Views of Value”

Teo Ajantaival · 19 Jul 2024 13:00 UTC
60 points
8 comments · 5 min read · EA link

Center on Long-Term Risk: Annual review and fundraiser 2023

Center on Long-Term Risk · 13 Dec 2023 16:42 UTC
79 points
3 comments · 4 min read · EA link

Value lock-in is happening *now*

Isaac King · 15 Oct 2024 1:40 UTC
12 points
17 comments · 4 min read · EA link

Moral Spillover in Human-AI Interaction

Katerina Manoli · 5 Jun 2023 15:20 UTC
17 points
1 comment · 13 min read · EA link

Introduction to suffering-focused ethics

Center for Reducing Suffering · 30 Aug 2024 16:55 UTC
57 points
2 comments · 22 min read · EA link

Arguments for Why Preventing Human Extinction is Wrong

Anthony Fleming · 21 May 2022 7:17 UTC
32 points
48 comments · 3 min read · EA link

80k podcast episode on sentience in AI systems

rgb · 15 Mar 2023 20:19 UTC
85 points
2 comments · 13 min read · EA link
(80000hours.org)

3 Stages of Competition for the Long-Term Future

JordanStone · 30 Nov 2025 21:55 UTC
25 points
4 comments · 25 min read · EA link

Could this be an unusually good time to Earn To Give?

Tom Gardiner 🔸 · 3 Mar 2025 23:00 UTC
60 points
15 comments · 3 min read · EA link

The Time of Moral Circle Calibration, Not Expansion

guneyulasturker 🔸 · 20 Oct 2025 7:29 UTC
30 points
1 comment · 4 min read · EA link

Announcing a New S-Risk Introductory Fellowship

Alistair Webster · 1 Jul 2024 14:37 UTC
54 points
5 comments · 1 min read · EA link

Ethical analysis of purported risks and disasters involving suffering, extinction, or a lack of positive value

JoA🔸 · 17 Mar 2025 13:36 UTC
20 points
0 comments · 1 min read · EA link
(jeet.ieet.org)

Three Cruxes for Existential Choices Presentation

wallower · 24 Mar 2025 5:24 UTC
6 points
0 comments · 1 min read · EA link
(drive.google.com)

Curing past sufferings and preventing s-risks via indexical uncertainty

turchin · 27 Sep 2018 10:48 UTC
1 point
18 comments · 4 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
65 points
31 comments · 8 min read · EA link

Life of GPT

Odd anon · 8 Nov 2023 22:31 UTC
−1 points
0 comments · 5 min read · EA link

Part 1/4: A Case for Abolition

Dhruv Makwana · 11 Jan 2023 13:46 UTC
33 points
7 comments · 3 min read · EA link

Highest priority threat: infinite torture

KArax · 26 Jan 2023 8:51 UTC
−39 points
1 comment · 9 min read · EA link

AI and Non-Existence

Blue11 · 31 Jan 2025 13:19 UTC
4 points
0 comments · 2 min read · EA link

Where I Am Donating in 2024

MichaelDickens · 19 Nov 2024 0:09 UTC
181 points
73 comments · 46 min read · EA link

CLR’s Annual Report 2021

stefan.torges · 26 Feb 2022 12:47 UTC
79 points
0 comments · 12 min read · EA link

[Question] Asking for online calls on AI s-risks discussions

jackchang110 · 14 May 2023 13:58 UTC
26 points
3 comments · 1 min read · EA link

What is malevolence? On the nature, measurement, and distribution of dark traits

David_Althaus · 23 Oct 2024 8:41 UTC
107 points
6 comments · 52 min read · EA link

Time to Think about ASI Constitutions?

ukc10014 · 27 Jan 2025 9:28 UTC
22 points
0 comments · 12 min read · EA link

First S-Risk Intro Seminar

stefan.torges · 8 Dec 2020 9:23 UTC
70 points
2 comments · 1 min read · EA link

AI Welfare Risks

Adrià Moret · 2 May 2025 17:41 UTC
27 points
0 comments · 1 min read · EA link
(philpapers.org)

[Question] What are the possible scenarios of AI simulating biological suffering to cause s-risks?

jackchang110 · 30 Oct 2025 13:42 UTC
6 points
1 comment · 1 min read · EA link

Problem: Guaranteeing the right to life for everyone, in the infinitely long term (part 1)

lamparita · 18 Aug 2024 12:13 UTC
2 points
2 comments · 8 min read · EA link

Briefly how I’ve updated since ChatGPT

rime · 25 Apr 2023 19:39 UTC
29 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

Future technological progress does NOT correlate with methods that involve less suffering

Jim Buhler · 1 Aug 2023 9:30 UTC
64 points
12 comments · 4 min read · EA link

When Has the World Ever Been Net Positive?

Krimsey · 18 Sep 2025 14:17 UTC
10 points
2 comments · 3 min read · EA link

What failure looks like for animals

Alistair Stewart · 3 Sep 2025 17:55 UTC
69 points
5 comments · 5 min read · EA link

The problem of artificial suffering

mlsbt · 24 Sep 2021 14:43 UTC
52 points
3 comments · 9 min read · EA link

Preventing Animal Suffering Lock-in: Why Economic Transitions Matter

Karen Singleton · 28 Jul 2025 21:55 UTC
43 points
4 comments · 10 min read · EA link

[Question] Is contribution to open-source capabilities research socially beneficial? - my reasoning

damc4 · 30 Oct 2025 15:11 UTC
2 points
1 comment · 5 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · 23 May 2023 13:07 UTC
64 points
5 comments · 16 min read · EA link

How I learned to stop worrying and love X-risk

Monero · 11 Mar 2024 3:58 UTC
11 points
1 comment · 1 min read · EA link

Expression of Interest: Director of Operations at the Center on Long-term Risk

Amrit Sidhu-Brar 🔸 · 25 Jan 2024 18:43 UTC
55 points
0 comments · 6 min read · EA link

Leadership change at the Center on Long-Term Risk

JesseClifton · 31 Jan 2025 21:08 UTC
162 points
7 comments · 3 min read · EA link

Sadism and s-risks from first principles

Jim Buhler · 22 Sep 2025 14:08 UTC
11 points
1 comment · 4 min read · EA link

Sentience Institute 2022 End of Year Summary

MichaelDello · 25 Nov 2022 12:28 UTC
48 points
0 comments · 7 min read · EA link
(www.sentienceinstitute.org)

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges · 9 Dec 2022 18:03 UTC
170 points
4 comments · 13 min read · EA link

Utilitarians Should Accept that Some Suffering Cannot be “Offset”

Aaron Bergman · 5 Oct 2025 21:22 UTC
77 points
34 comments · 26 min read · EA link

S-risk Intro Fellowship

stefan.torges · 20 Dec 2021 17:26 UTC
52 points
1 comment · 1 min read · EA link

Modeling the (dis)value of human survival and expansion

Jim Buhler · 1 Sep 2025 13:11 UTC
26 points
0 comments · 2 min read · EA link

Discussions of Longtermism should focus on the problem of Unawareness

Jim Buhler · 20 Oct 2025 13:17 UTC
34 points
1 comment · 34 min read · EA link

How often does One Person succeed?

Maynk02 · 28 Oct 2022 19:32 UTC
6 points
0 comments · 3 min read · EA link

Sentience Institute 2023 End of Year Summary

MichaelDello · 27 Nov 2023 12:11 UTC
29 points
0 comments · 5 min read · EA link
(www.sentienceinstitute.org)

Sensitive assumptions in longtermist modeling

Owen Murphy · 18 Sep 2024 1:39 UTC
82 points
12 comments · 7 min read · EA link
(ohmurphy.substack.com)

2024 S-risk Intro Fellowship

Center on Long-Term Risk · 12 Oct 2023 19:14 UTC
89 points
2 comments · 1 min read · EA link

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · 1 Dec 2023 18:01 UTC
43 points
2 comments · 42 min read · EA link

Draconian measures can increase the risk of irrevocable catastrophe

dsj · 23 Sep 2025 21:40 UTC
8 points
1 comment · 2 min read · EA link
(thedavidsj.substack.com)

Quantum immortality and AI risk – the fate of a lonely survivor

turchin · 16 Oct 2025 11:40 UTC
5 points
0 comments · 1 min read · EA link

Conflicting Effects of Existential Risk Mitigation Interventions

Pete Rowlett · 10 May 2023 22:20 UTC
10 points
0 comments · 8 min read · EA link

S-risk for Christians

Monero · 31 Mar 2024 20:34 UTC
−1 points
5 comments · 1 min read · EA link

Brain Farming: The Case for a Global Ban

Novel Minds Project · 27 Sep 2025 17:31 UTC
48 points
3 comments · 3 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

sapphire · 20 Aug 2020 20:23 UTC
52 points
0 comments · 3 min read · EA link