S-risk

An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Evaluation

80,000 Hours rates s-risks a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[1]

Further reading

Althaus, David & Lukas Gloor (2019) Reducing risks of astronomical suffering: a neglected priority, Center on Long-Term Risk, August.

Baumann, Tobias (2017) S-risks: an introduction, Center for Reducing Suffering, August 15.

Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk | hellish existential catastrophe | pain and suffering | suffering-focused ethics

1. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.

Beginner’s guide to reducing s-risks [link-post]

Center on Long-Term Risk · Oct 17, 2023, 12:51 AM
129 points
3 comments · 3 min read · EA link
(longtermrisk.org)

New book on s-risks

Tobias_Baumann · Oct 26, 2022, 12:04 PM
293 points
27 comments · 1 min read · EA link

AI alignment researchers may have a comparative advantage in reducing s-risks

Lukas_Gloor · Feb 15, 2023, 1:01 PM
79 points
5 comments · 13 min read · EA link

A typology of s-risks

Tobias_Baumann · Dec 21, 2018, 6:23 PM
26 points
1 comment · 1 min read · EA link
(s-risks.org)

Why s-risks are the worst existential risks, and how to prevent them

Max_Daniel · Jun 2, 2017, 8:48 AM
10 points
1 comment · 22 min read · EA link
(www.youtube.com)

S-risk FAQ

Tobias_Baumann · Sep 18, 2017, 8:05 AM
29 points
8 comments · 8 min read · EA link

How can we reduce s-risks?

Tobias_Baumann · Jan 29, 2021, 3:46 PM
42 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

Reducing long-term risks from malevolent actors

David_Althaus · Apr 29, 2020, 8:55 AM
344 points
93 comments · 37 min read · EA link

S-Risks: Fates Worse Than Extinction

A.G.G. Liu · May 4, 2024, 3:30 PM
104 points
9 comments · 6 min read · EA link
(www.lesswrong.com)

The Future Might Not Be So Great

Jacy · Jun 30, 2022, 1:01 PM
145 points
118 comments · 34 min read · EA link
(www.sentienceinstitute.org)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni · Jul 1, 2021, 9:01 PM
145 points
10 comments · 32 min read · EA link

Risk factors for s-risks

Tobias_Baumann · Feb 13, 2019, 5:51 PM
40 points
3 comments · 1 min read · EA link
(s-risks.org)

Cause prioritization for downside-focused value systems

Lukas_Gloor · Jan 31, 2018, 2:47 PM
76 points
11 comments · 48 min read · EA link

Announcing the CLR Foundations Course and CLR S-Risk Seminars

James Faville · Nov 19, 2024, 1:18 AM
52 points
2 comments · 3 min read · EA link

[Question] Debates on reducing long-term s-risks?

jackchang110 · Apr 6, 2023, 1:26 AM
13 points
2 comments · 1 min read · EA link

Who is protecting animals in the long-term future?

alene · Mar 21, 2022, 4:49 PM
169 points
33 comments · 3 min read · EA link

The option value argument doesn’t work when it’s most needed

Winston · Oct 24, 2023, 7:40 PM
131 points
6 comments · 6 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · Oct 16, 2021, 11:33 AM
77 points
22 comments · 24 min read · EA link

What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism?

rime · Apr 18, 2023, 9:58 AM
44 points
0 comments · 2 min read · EA link

Apply to CLR as a researcher or summer research fellow!

Chi · Feb 1, 2022, 10:24 PM
62 points
5 comments · 10 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA🔸 · Jul 15, 2020, 12:28 PM
81 points
7 comments · 7 min read · EA link

The Odyssean Process

Odyssean Institute · Nov 24, 2023, 1:48 PM
25 points
6 comments · 1 min read · EA link
(www.odysseaninstitute.org)

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · Apr 3, 2023, 2:11 AM
12 points
0 comments · 4 min read · EA link

Cosmic AI safety

Magnus Vinding · Dec 6, 2024, 10:32 PM
23 points
5 comments · 6 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · May 7, 2021, 2:32 PM
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Why the expected numbers of farmed animals in the far future might be huge

Fai · Mar 4, 2022, 7:59 PM
134 points
29 comments · 16 min read · EA link

The History of AI Rights Research

Jamie_Harris · Aug 27, 2022, 8:14 AM
48 points
1 comment · 14 min read · EA link
(www.sentienceinstitute.org)

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · Jan 17, 2020, 1:28 PM
64 points
0 comments · 1 min read · EA link

Question about terminology for lesser X-risks and S-risks

Laura Leighton · Aug 8, 2022, 4:39 AM
9 points
3 comments · 1 min read · EA link

Mediocre AI safety as existential risk

technicalities · Mar 16, 2022, 11:50 AM
52 points
12 comments · 3 min read · EA link

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · Jul 8, 2009, 12:42 PM
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)

Online Working / Community Meetup for the Abolition of Suffering

Ruth_Seleo · May 31, 2022, 9:16 AM
7 points
5 comments · 1 min read · EA link

Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.

Chi · Feb 13, 2024, 10:24 PM
95 points
7 comments · 2 min read · EA link

Planning ‘resistance’ to illiberalism and authoritarianism

david_reinstein · Jun 16, 2024, 5:21 PM
29 points
2 comments · 2 min read · EA link
(www.nytimes.com)

The Case for Animal-Inclusive Longtermism

Eevee🔹 · Feb 17, 2024, 12:07 AM
66 points
7 comments · 30 min read · EA link
(brill.com)

S-risks, X-risks, and Ideal Futures

OscarD🔸 · Jun 18, 2024, 3:12 PM
15 points
6 comments · 1 min read · EA link

Classifying sources of AI x-risk

Sam Clarke · Aug 8, 2022, 6:18 PM
41 points
4 comments · 3 min read · EA link

Sentience Institute 2021 End of Year Summary

Ali · Nov 26, 2021, 2:40 PM
66 points
5 comments · 6 min read · EA link
(www.sentienceinstitute.org)

New eBook: Essays on UFOs and Related Conjectures

Magnus Vinding · Aug 4, 2024, 7:34 AM
27 points
3 comments · 7 min read · EA link

Reasons for optimism about measuring malevolence to tackle x- and s-risks

Jamie_Harris · Apr 2, 2024, 10:26 AM
85 points
12 comments · 8 min read · EA link

Humanity’s vast future and its implications for cause prioritization

Eevee🔹 · Jul 26, 2022, 5:04 AM
38 points
3 comments · 5 min read · EA link
(sunyshore.substack.com)

The Big Slurp could eat the Boltzmann brains

Mark McDonald · Feb 25, 2025, 1:13 AM
11 points
2 comments · 4 min read · EA link

Peacefulness, nonviolence, and experientialist minimalism

Teo Ajantaival · May 23, 2022, 7:17 PM
62 points
14 comments · 29 min read · EA link

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · Oct 3, 2021, 4:55 PM
48 points
19 comments · 1 min read · EA link

Launching the EAF Fund

stefan.torges · Nov 28, 2018, 5:13 PM
60 points
14 comments · 4 min read · EA link

[Question] Where should I donate?

Eevee🔹 · Nov 22, 2021, 8:56 PM
29 points
10 comments · 1 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · Oct 18, 2021, 2:07 PM
30 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

How I learned to stop worrying and love X-risk

Monero · Mar 11, 2024, 3:58 AM
11 points
1 comment · 1 min read · EA link

Value lock-in is happening *now*

Isaac King · Oct 15, 2024, 1:40 AM
12 points
17 comments · 4 min read · EA link

Problem: Guaranteeing the right to life for everyone, in the infinitely long term (part 1)

lamparita · Aug 18, 2024, 12:13 PM
2 points
2 comments · 8 min read · EA link

S-risk for Christians

Monero · Mar 31, 2024, 8:34 PM
−1 points
5 comments · 1 min read · EA link

Announcing a New S-Risk Introductory Fellowship

Alistair Webster · Jul 1, 2024, 2:37 PM
54 points
5 comments · 1 min read · EA link

Expression of Interest: Director of Operations at the Center on Long-term Risk

Amrit Sidhu-Brar 🔸 · Jan 25, 2024, 6:43 PM
55 points
0 comments · 6 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

sapphire · Aug 20, 2020, 8:23 PM
51 points
0 comments · 3 min read · EA link

Sentience Institute 2022 End of Year Summary

MichaelDello · Nov 25, 2022, 12:28 PM
48 points
0 comments · 7 min read · EA link
(www.sentienceinstitute.org)

Time to Think about ASI Constitutions?

ukc10014 · Jan 27, 2025, 9:28 AM
20 points
0 comments · 12 min read · EA link

[Question] What is the best way to explain that s-risks are important—basically, why existence is not inherently better than non existence? Intending this for someone mostly unfamiliar with EA, like someone in an intro program

shepardriley · Nov 8, 2024, 6:12 PM
2 points
0 comments · 1 min read · EA link

The Lightcone solution to the transmitter room problem

OGTutzauer🔸 · Jan 29, 2025, 10:03 AM
10 points
6 comments · 3 min read · EA link

Could this be an unusually good time to Earn To Give?

Tom Gardiner · Mar 3, 2025, 11:00 PM
59 points
12 comments · 3 min read · EA link

AI and Non-Existence

Blue11 · Jan 31, 2025, 1:19 PM
4 points
0 comments · 2 min read · EA link

Leadership change at the Center on Long-Term Risk

JesseClifton · Jan 31, 2025, 9:08 PM
161 points
7 comments · 3 min read · EA link

Where I Am Donating in 2024

MichaelDickens · Nov 19, 2024, 12:09 AM
181 points
73 comments · 46 min read · EA link

What is malevolence? On the nature, measurement, and distribution of dark traits

David_Althaus · Oct 23, 2024, 8:41 AM
107 points
6 comments · 52 min read · EA link

S-risk Intro Fellowship

stefan.torges · Dec 20, 2021, 5:26 PM
52 points
1 comment · 1 min read · EA link

CLR’s Annual Report 2021

stefan.torges · Feb 26, 2022, 12:47 PM
79 points
0 comments · 12 min read · EA link

Curing past sufferings and preventing s-risks via indexical uncertainty

turchin · Sep 27, 2018, 10:48 AM
1 point
18 comments · 4 min read · EA link

Arguments for Why Preventing Human Extinction is Wrong

Anthony Fleming · May 21, 2022, 7:17 AM
32 points
48 comments · 3 min read · EA link

Promoting compassionate longtermism

jonleighton · Dec 7, 2022, 2:26 PM
117 points
5 comments · 12 min read · EA link

How often does One Person succeed?

Maynk02 · Oct 28, 2022, 7:32 PM
6 points
0 comments · 3 min read · EA link

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges · Dec 9, 2022, 6:03 PM
169 points
4 comments · 13 min read · EA link

The problem of artificial suffering

mlsbt · Sep 24, 2021, 2:43 PM
50 points
3 comments · 9 min read · EA link

Introduction to suffering-focused ethics

Center for Reducing Suffering · Aug 30, 2024, 4:55 PM
56 points
2 comments · 22 min read · EA link

First S-Risk Intro Seminar

stefan.torges · Dec 8, 2020, 9:23 AM
70 points
2 comments · 1 min read · EA link

Simulators and Mindcrime

𝕮𝖎𝖓𝖊𝖗𝖆 · Dec 9, 2022, 3:20 PM
1 point
0 comments · 1 min read · EA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding · Mar 3, 2022, 11:31 AM
95 points
9 comments · 5 min read · EA link

Could a ‘permanent global totalitarian state’ ever be permanent?

Geoffrey Miller · Aug 23, 2022, 5:15 PM
39 points
17 comments · 1 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · Dec 27, 2022, 12:58 AM
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

Part 1/4: A Case for Abolition

Dhruv Makwana · Jan 11, 2023, 1:46 PM
33 points
7 comments · 3 min read · EA link

Highest priority threat: infinite torture

KArax · Jan 26, 2023, 8:51 AM
−39 points
1 comment · 9 min read · EA link

Ethical analysis of purported risks and disasters involving suffering, extinction, or a lack of positive value

JoA🔸 · Mar 17, 2025, 1:36 PM
20 points
0 comments · 1 min read · EA link
(jeet.ieet.org)

80k podcast episode on sentience in AI systems

rgb · Mar 15, 2023, 8:19 PM
85 points
2 comments · 1 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · Jul 10, 2023, 1:54 PM
65 points
31 comments · 8 min read · EA link

Sentience Institute 2023 End of Year Summary

MichaelDello · Nov 27, 2023, 12:11 PM
29 points
0 comments · 5 min read · EA link
(www.sentienceinstitute.org)

Future technological progress does NOT correlate with methods that involve less suffering

Jim Buhler · Aug 1, 2023, 9:30 AM
62 points
12 comments · 4 min read · EA link

A selection of some writings and considerations on the cause of artificial sentience

Raphaël_Pesah · Aug 10, 2023, 6:23 PM
48 points
1 comment · 10 min read · EA link

Conflicting Effects of Existential Risk Mitigation Interventions

Pete Rowlett · May 10, 2023, 10:20 PM
10 points
0 comments · 8 min read · EA link

[Question] Asking for online calls on AI s-risks discussions

jackchang110 · May 14, 2023, 1:58 PM
26 points
3 comments · 1 min read · EA link

Why suffering risks are the worst existential risks, and how we can prevent them

EA Italy · Jan 17, 2023, 11:14 AM
1 point
0 comments · 1 min read · EA link

Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”

Jens Aslaug 🔸 · Sep 12, 2023, 1:11 PM
51 points
20 comments · 17 min read · EA link

Briefly how I’ve updated since ChatGPT

rime · Apr 25, 2023, 7:39 PM
29 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · Dec 1, 2023, 6:01 PM
39 points
2 comments · 42 min read · EA link

[Linkpost] My moral view: Reducing suffering, ‘how to be’ as fundamental to morality, no positive value, cons of grand theory, and more—By Simon Knutsson

Alistair Webster · Aug 25, 2023, 12:53 PM
35 points
2 comments · 2 min read · EA link
(centerforreducingsuffering.org)

2024 S-risk Intro Fellowship

Center on Long-Term Risk · Oct 12, 2023, 7:14 PM
90 points
2 comments · 1 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · May 23, 2023, 1:07 PM
63 points
5 comments · 16 min read · EA link

New s-risks audiobook available now

Alistair Webster · May 24, 2023, 8:27 PM
87 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

Center on Long-Term Risk: Annual review and fundraiser 2023

Center on Long-Term Risk · Dec 13, 2023, 4:42 PM
78 points
3 comments · 4 min read · EA link

Moral Spillover in Human-AI Interaction

Katerina Manoli · Jun 5, 2023, 3:20 PM
17 points
1 comment · 13 min read · EA link

New Book: “Minimalist Axiologies: Alternatives to ‘Good Minus Bad’ Views of Value”

Teo Ajantaival · Jul 19, 2024, 1:00 PM
60 points
8 comments · 5 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · Oct 14, 2023, 9:18 PM
28 points
4 comments · 9 min read · EA link

Life of GPT

Odd anon · Nov 8, 2023, 10:31 PM
−1 points
0 comments · 5 min read · EA link

We Probably Shouldn’t Solve Consciousness

Silica · Feb 10, 2024, 7:12 AM
33 points
5 comments · 16 min read · EA link

Against Making Up Our Conscious Minds

Silica · Feb 10, 2024, 7:12 AM
13 points
0 comments · 5 min read · EA link

CLR Summer Research Fellowship 2024

Center on Long-Term Risk · Feb 15, 2024, 6:26 PM
89 points
2 comments · 8 min read · EA link

Sensitive assumptions in longtermist modeling

Owen Murphy · Sep 18, 2024, 1:39 AM
82 points
12 comments · 7 min read · EA link
(ohmurphy.substack.com)