S-risk

Last edit: 16 Jun 2022 19:20 UTC by Pablo

An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Evaluation

80,000 Hours rates s-risks a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[1]

Further reading

Althaus, David & Lukas Gloor (2019) Reducing risks of astronomical suffering: a neglected priority, Center on Long-Term Risk, August.

Baumann, Tobias (2017) S-risks: an introduction, Center for Reducing Suffering, August 15.

Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk | hellish existential catastrophe | pain and suffering | suffering-focused ethics

  1. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.

Beginner’s guide to reducing s-risks [link-post]

Center on Long-Term Risk · 17 Oct 2023 0:51 UTC
129 points
3 comments · 3 min read · EA link
(longtermrisk.org)

New book on s-risks

Tobias_Baumann · 26 Oct 2022 12:04 UTC
292 points
27 comments · 1 min read · EA link

AI alignment researchers may have a comparative advantage in reducing s-risks

Lukas_Gloor · 15 Feb 2023 13:01 UTC
79 points
5 comments · 13 min read · EA link

Why s-risks are the worst existential risks, and how to prevent them

Max_Daniel · 2 Jun 2017 8:48 UTC
9 points
1 comment · 22 min read · EA link
(www.youtube.com)

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
26 points
1 comment · 1 min read · EA link
(s-risks.org)

S-risk FAQ

Tobias_Baumann · 18 Sep 2017 8:05 UTC
29 points
8 comments · 8 min read · EA link

Reducing long-term risks from malevolent actors

David_Althaus · 29 Apr 2020 8:55 UTC
341 points
93 comments · 37 min read · EA link

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
42 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

S-Risks: Fates Worse Than Extinction

A.G.G. Liu · 4 May 2024 15:30 UTC
104 points
9 comments · 6 min read · EA link
(www.lesswrong.com)

The Future Might Not Be So Great

Jacy · 30 Jun 2022 13:01 UTC
142 points
118 comments · 34 min read · EA link
(www.sentienceinstitute.org)

Cause prioritization for downside-focused value systems

Lukas_Gloor · 31 Jan 2018 14:47 UTC
75 points
10 comments · 48 min read · EA link

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
40 points
3 comments · 1 min read · EA link
(s-risks.org)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni · 1 Jul 2021 21:01 UTC
129 points
10 comments · 32 min read · EA link

The option value argument doesn’t work when it’s most needed

Winston · 24 Oct 2023 19:40 UTC
125 points
6 comments · 6 min read · EA link

What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism?

rime · 18 Apr 2023 9:58 UTC
44 points
0 comments · 2 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel🔹 · 3 Apr 2023 2:11 UTC
12 points
0 comments · 4 min read · EA link

Apply to CLR as a researcher or summer research fellow!

Chi · 1 Feb 2022 22:24 UTC
62 points
5 comments · 10 min read · EA link

The Odyssean Process

Odyssean Institute · 24 Nov 2023 13:48 UTC
25 points
6 comments · 1 min read · EA link
(www.odysseaninstitute.org)

Who is protecting animals in the long-term future?

alene · 21 Mar 2022 16:49 UTC
168 points
33 comments · 3 min read · EA link

[Question] Debates on reducing long-term s-risks?

jackchang110 · 6 Apr 2023 1:26 UTC
13 points
2 comments · 1 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA🔸 · 15 Jul 2020 12:28 UTC
81 points
7 comments · 7 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · 16 Oct 2021 11:33 UTC
77 points
22 comments · 24 min read · EA link

Planning ‘resistance’ to illiberalism and authoritarianism

david_reinstein · 16 Jun 2024 17:21 UTC
29 points
2 comments · 2 min read · EA link
(www.nytimes.com)

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · 3 Oct 2021 16:55 UTC
48 points
19 comments · 1 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · 18 Oct 2021 14:07 UTC
30 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

Launching the EAF Fund

stefan.torges · 28 Nov 2018 17:13 UTC
60 points
14 comments · 4 min read · EA link

[Question] Where should I donate?

Eevee🔹 · 22 Nov 2021 20:56 UTC
29 points
10 comments · 1 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · 7 May 2021 14:32 UTC
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Why the expected numbers of farmed animals in the far future might be huge

Fai · 4 Mar 2022 19:59 UTC
134 points
29 comments · 16 min read · EA link

The History of AI Rights Research

Jamie_Harris · 27 Aug 2022 8:14 UTC
48 points
1 comment · 14 min read · EA link
(www.sentienceinstitute.org)

Mediocre AI safety as existential risk

Gavin · 16 Mar 2022 11:50 UTC
52 points
12 comments · 3 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
64 points
0 comments · 1 min read · EA link

Question about terminology for lesser X-risks and S-risks

Laura Leighton · 8 Aug 2022 4:39 UTC
9 points
3 comments · 1 min read · EA link

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · 8 Jul 2009 12:42 UTC
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)

Online Working / Community Meetup for the Abolition of Suffering

Ruth_Fr. · 31 May 2022 9:16 UTC
7 points
5 comments · 1 min read · EA link

Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.

Chi · 13 Feb 2024 22:24 UTC
95 points
7 comments · 2 min read · EA link

Peacefulness, nonviolence, and experientialist minimalism

Teo Ajantaival · 23 May 2022 19:17 UTC
62 points
14 comments · 29 min read · EA link

The Case for Animal-Inclusive Longtermism

Eevee🔹 · 17 Feb 2024 0:07 UTC
60 points
7 comments · 30 min read · EA link
(brill.com)

S-risks, X-risks, and Ideal Futures

OscarD🔸 · 18 Jun 2024 15:12 UTC
15 points
6 comments · 1 min read · EA link

Classifying sources of AI x-risk

Sam Clarke · 8 Aug 2022 18:18 UTC
41 points
4 comments · 3 min read · EA link

Sentience Institute 2021 End of Year Summary

Ali · 26 Nov 2021 14:40 UTC
66 points
5 comments · 6 min read · EA link
(www.sentienceinstitute.org)

New eBook: Essays on UFOs and Related Conjectures

Magnus Vinding · 4 Aug 2024 7:34 UTC
23 points
3 comments · 7 min read · EA link

Reasons for optimism about measuring malevolence to tackle x- and s-risks

Jamie_Harris · 2 Apr 2024 10:26 UTC
85 points
12 comments · 8 min read · EA link

Humanity’s vast future and its implications for cause prioritization

Eevee🔹 · 26 Jul 2022 5:04 UTC
38 points
3 comments · 5 min read · EA link
(sunyshore.substack.com)

We Probably Shouldn’t Solve Consciousness

Silica · 10 Feb 2024 7:12 UTC
33 points
5 comments · 16 min read · EA link

Against Making Up Our Conscious Minds

Silica · 10 Feb 2024 7:12 UTC
13 points
0 comments · 5 min read · EA link

CLR Summer Research Fellowship 2024

Center on Long-Term Risk · 15 Feb 2024 18:26 UTC
89 points
2 comments · 8 min read · EA link

Sensitive assumptions in longtermist modeling

Owen Murphy · 18 Sep 2024 1:39 UTC
82 points
12 comments · 7 min read · EA link
(ohmurphy.substack.com)

How I learned to stop worrying and love X-risk

Monero · 11 Mar 2024 3:58 UTC
11 points
1 comment · 1 min read · EA link

Value lock-in is happening *now*

Isaac King · 15 Oct 2024 1:40 UTC
12 points
17 comments · 4 min read · EA link

Problem: Guaranteeing the right to life for everyone, in the infinitely long term (part 1)

lamparita · 18 Aug 2024 12:13 UTC
1 point
1 comment · 8 min read · EA link

S-risk for Christians

Monero · 31 Mar 2024 20:34 UTC
−1 points
5 comments · 1 min read · EA link

Announcing a New S-Risk Introductory Fellowship

Alistair Webster · 1 Jul 2024 14:37 UTC
54 points
5 comments · 1 min read · EA link

Expression of Interest: Director of Operations at the Center on Long-term Risk

Amrit Sidhu-Brar 🔸 · 25 Jan 2024 18:43 UTC
55 points
0 comments · 6 min read · EA link

What is malevolence? On the nature, measurement, and distribution of dark traits

David_Althaus · 23 Oct 2024 8:41 UTC
91 points
5 comments · 52 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

sapphire · 20 Aug 2020 20:23 UTC
51 points
0 comments · 3 min read · EA link

[Question] What is the best way to explain that s-risks are important—basically, why existence is not inherently better than non existence? Intending this for someone mostly unfamiliar with EA, like someone in an intro program

shepardriley · 8 Nov 2024 18:12 UTC
2 points
0 comments · 1 min read · EA link

S-risk Intro Fellowship

stefan.torges · 20 Dec 2021 17:26 UTC
52 points
1 comment · 1 min read · EA link

CLR’s Annual Report 2021

stefan.torges · 26 Feb 2022 12:47 UTC
79 points
0 comments · 12 min read · EA link

Curing past sufferings and preventing s-risks via indexical uncertainty

turchin · 27 Sep 2018 10:48 UTC
1 point
18 comments · 4 min read · EA link

Arguments for Why Preventing Human Extinction is Wrong

Anthony Fleming · 21 May 2022 7:17 UTC
30 points
48 comments · 3 min read · EA link

Promoting compassionate longtermism

jonleighton · 7 Dec 2022 14:26 UTC
117 points
5 comments · 12 min read · EA link

How often does One Person succeed?

Maynk02 · 28 Oct 2022 19:32 UTC
6 points
0 comments · 3 min read · EA link

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges · 9 Dec 2022 18:03 UTC
169 points
4 comments · 13 min read · EA link

Sentience Institute 2022 End of Year Summary

MichaelDello · 25 Nov 2022 12:28 UTC
48 points
0 comments · 7 min read · EA link
(www.sentienceinstitute.org)

The problem of artificial suffering

mlsbt · 24 Sep 2021 14:43 UTC
49 points
3 comments · 9 min read · EA link

Introduction to suffering-focused ethics

Center for Reducing Suffering · 30 Aug 2024 16:55 UTC
56 points
2 comments · 22 min read · EA link

First S-Risk Intro Seminar

stefan.torges · 8 Dec 2020 9:23 UTC
70 points
2 comments · 1 min read · EA link

Simulators and Mindcrime

𝕮𝖎𝖓𝖊𝖗𝖆 · 9 Dec 2022 15:20 UTC
1 point
0 comments · 1 min read · EA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding · 3 Mar 2022 11:31 UTC
95 points
9 comments · 5 min read · EA link

Could a ‘permanent global totalitarian state’ ever be permanent?

Geoffrey Miller · 23 Aug 2022 17:15 UTC
39 points
17 comments · 1 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · 27 Dec 2022 0:58 UTC
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

Part 1/4: A Case for Abolition

Dhruv Makwana · 11 Jan 2023 13:46 UTC
33 points
7 comments · 3 min read · EA link

Highest priority threat: infinite torture

KArax · 26 Jan 2023 8:51 UTC
−39 points
1 comment · 9 min read · EA link

80k podcast episode on sentience in AI systems

rgb · 15 Mar 2023 20:19 UTC
85 points
2 comments · 1 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
63 points
31 comments · 8 min read · EA link

Sentience Institute 2023 End of Year Summary

MichaelDello · 27 Nov 2023 12:11 UTC
25 points
0 comments · 5 min read · EA link
(www.sentienceinstitute.org)

Future technological progress does NOT correlate with methods that involve less suffering

Jim Buhler · 1 Aug 2023 9:30 UTC
60 points
12 comments · 4 min read · EA link

A selection of some writings and considerations on the cause of artificial sentience

Raphaël_Pesah · 10 Aug 2023 18:23 UTC
48 points
1 comment · 10 min read · EA link

Conflicting Effects of Existential Risk Mitigation Interventions

Pete Rowlett · 10 May 2023 22:20 UTC
10 points
0 comments · 8 min read · EA link

[Question] Asking for online calls on AI s-risks discussions

jackchang110 · 14 May 2023 13:58 UTC
26 points
3 comments · 1 min read · EA link

Why suffering risks are the worst existential risks and how we can prevent them (Italian translation)

EA Italy · 17 Jan 2023 11:14 UTC
1 point
0 comments · 1 min read · EA link

Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”

Jens Aslaug 🔸 · 12 Sep 2023 13:11 UTC
51 points
20 comments · 17 min read · EA link

Briefly how I’ve updated since ChatGPT

rime · 25 Apr 2023 19:39 UTC
29 points
7 comments · 2 min read · EA link
(www.lesswrong.com)

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · 1 Dec 2023 18:01 UTC
39 points
2 comments · 42 min read · EA link

[Linkpost] My moral view: Reducing suffering, ‘how to be’ as fundamental to morality, no positive value, cons of grand theory, and more—By Simon Knutsson

Alistair Webster · 25 Aug 2023 12:53 UTC
35 points
2 comments · 2 min read · EA link
(centerforreducingsuffering.org)

2024 S-risk Intro Fellowship

Center on Long-Term Risk · 12 Oct 2023 19:14 UTC
89 points
2 comments · 1 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · 23 May 2023 13:07 UTC
63 points
5 comments · 16 min read · EA link

New s-risks audiobook available now

Alistair Webster · 24 May 2023 20:27 UTC
87 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

Center on Long-Term Risk: Annual review and fundraiser 2023

Center on Long-Term Risk · 13 Dec 2023 16:42 UTC
78 points
3 comments · 4 min read · EA link

Moral Spillover in Human-AI Interaction

Katerina Manoli · 5 Jun 2023 15:20 UTC
17 points
1 comment · 13 min read · EA link

New Book: “Minimalist Axiologies: Alternatives to ‘Good Minus Bad’ Views of Value”

Teo Ajantaival · 19 Jul 2024 13:00 UTC
60 points
8 comments · 5 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · 14 Oct 2023 21:18 UTC
28 points
4 comments · 9 min read · EA link

Life of GPT

Odd anon · 8 Nov 2023 22:31 UTC
−1 points
0 comments · 5 min read · EA link