
S-risk

Last edit: 16 Jun 2022 19:20 UTC by Pablo

An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Evaluation

80,000 Hours rates s-risks a “potential highest priority area”: an issue that, if more thoroughly examined, could rank as a top global challenge.[1]

Further reading

Althaus, David & Lukas Gloor (2019) Reducing risks of astronomical suffering: a neglected priority, Center on Long-Term Risk, August.

Baumann, Tobias (2017) S-risks: an introduction, Center for Reducing Suffering, August 15.

Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk | hellish existential catastrophe | pain and suffering | suffering-focused ethics

1. 80,000 Hours (2022) Our current list of pressing world problems, 80,000 Hours.

Beginner’s guide to reducing s-risks [link-post]

Center on Long-Term Risk · 17 Oct 2023 0:51 UTC
129 points
3 comments · 3 min read · EA link
(longtermrisk.org)

New book on s-risks

Tobias_Baumann · 26 Oct 2022 12:04 UTC
295 points
27 comments · 1 min read · EA link

Why s-risks are the worst existential risks, and how to prevent them

Max_Daniel · 2 Jun 2017 8:48 UTC
8 points
1 comment · 22 min read · EA link
(www.youtube.com)

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
26 points
1 comment · 1 min read · EA link
(s-risks.org)

AI alignment researchers may have a comparative advantage in reducing s-risks

Lukas_Gloor · 15 Feb 2023 13:01 UTC
79 points
5 comments · 13 min read · EA link

S-risk FAQ

Tobias_Baumann · 18 Sep 2017 8:05 UTC
29 points
8 comments · 8 min read · EA link

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
42 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

The Future Might Not Be So Great

Jacy · 30 Jun 2022 13:01 UTC
140 points
118 comments · 32 min read · EA link
(www.sentienceinstitute.org)

A longtermist critique of “The expected value of extinction risk reduction is positive”

Anthony DiGiovanni · 1 Jul 2021 21:01 UTC
125 points
10 comments · 32 min read · EA link

Cause prioritization for downside-focused value systems

Lukas_Gloor · 31 Jan 2018 14:47 UTC
75 points
10 comments · 49 min read · EA link

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
40 points
3 comments · 1 min read · EA link
(s-risks.org)

What can we do now to prepare for AI sentience, in order to protect them from the global scale of human sadism?

rime · 18 Apr 2023 9:58 UTC
40 points
0 comments · 2 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA · 15 Jul 2020 12:28 UTC
79 points
7 comments · 7 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · 16 Oct 2021 11:33 UTC
77 points
22 comments · 25 min read · EA link

The option value argument doesn’t work when it’s most needed

Winston · 24 Oct 2023 19:40 UTC
122 points
6 comments · 6 min read · EA link

Apply to CLR as a researcher or summer research fellow!

Chi · 1 Feb 2022 22:24 UTC
62 points
5 comments · 10 min read · EA link

An aspirationally comprehensive typology of future locked-in scenarios

Milan Weibel · 3 Apr 2023 2:11 UTC
12 points
0 comments · 4 min read · EA link

[Question] Debates on reducing long-term s-risks?

jackchang110 · 6 Apr 2023 1:26 UTC
13 points
2 comments · 1 min read · EA link

Who is protecting animals in the long-term future?

alene · 21 Mar 2022 16:49 UTC
167 points
33 comments · 3 min read · EA link

The Odyssean Process

Odyssean Institute · 24 Nov 2023 13:48 UTC
24 points
6 comments · 1 min read · EA link
(www.odysseaninstitute.org)

Why the expected numbers of farmed animals in the far future might be huge

Fai · 4 Mar 2022 19:59 UTC
125 points
29 comments · 16 min read · EA link

The History of AI Rights Research

Jamie_Harris · 27 Aug 2022 8:14 UTC
44 points
1 comment · 14 min read · EA link
(www.sentienceinstitute.org)

Prioritization Questions for Artificial Sentience

Jamie_Harris · 18 Oct 2021 14:07 UTC
26 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

[Question] Where should I donate?

BrownHairedEevee · 22 Nov 2021 20:56 UTC
29 points
10 comments · 1 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
64 points
0 comments · 1 min read · EA link

Question about terminology for lesser X-risks and S-risks

Laura Leighton · 8 Aug 2022 4:39 UTC
9 points
4 comments · 1 min read · EA link

Mediocre AI safety as existential risk

Gavin · 16 Mar 2022 11:50 UTC
52 points
12 comments · 3 min read · EA link

Humanity’s vast future and its implications for cause prioritization

BrownHairedEevee · 26 Jul 2022 5:04 UTC
36 points
3 comments · 4 min read · EA link
(sunyshore.substack.com)

Launching the EAF Fund

stefan.torges · 28 Nov 2018 17:13 UTC
60 points
14 comments · 4 min read · EA link

Brian Tomasik – The Importance of Wild-Animal Suffering

Babel · 8 Jul 2009 12:42 UTC
12 points
0 comments · 1 min read · EA link
(longtermrisk.org)

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · 3 Oct 2021 16:55 UTC
48 points
19 comments · 1 min read · EA link

Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.

Chi · 13 Feb 2024 22:24 UTC
95 points
7 comments · 2 min read · EA link

Online Working / Community Meetup for the Abolition of Suffering

Ruth_Fr. · 31 May 2022 9:16 UTC
7 points
5 comments · 1 min read · EA link

The Case for Animal-Inclusive Longtermism

BrownHairedEevee · 17 Feb 2024 0:07 UTC
60 points
7 comments · 30 min read · EA link
(brill.com)

Classifying sources of AI x-risk

Sam Clarke · 8 Aug 2022 18:18 UTC
40 points
4 comments · 3 min read · EA link

Peacefulness, nonviolence, and experientialist minimalism

Teo Ajantaival · 23 May 2022 19:17 UTC
61 points
14 comments · 28 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · 7 May 2021 14:32 UTC
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Sentience Institute 2021 End of Year Summary

Ali · 26 Nov 2021 14:40 UTC
66 points
5 comments · 6 min read · EA link
(www.sentienceinstitute.org)

Center on Long-Term Risk: Annual review and fundraiser 2023

Center on Long-Term Risk · 13 Dec 2023 16:42 UTC
76 points
3 comments · 4 min read · EA link

Moral Spillover in Human-AI Interaction

Katerina Manoli · 5 Jun 2023 15:20 UTC
17 points
1 comment · 13 min read · EA link

Assessing the Dangerousness of Malevolent Actors in AGI Governance: A Preliminary Exploration

Callum Hinchcliffe · 14 Oct 2023 21:18 UTC
23 points
4 comments · 9 min read · EA link

Life of GPT

Odd anon · 8 Nov 2023 22:31 UTC
−1 points
0 comments · 5 min read · EA link

We Probably Shouldn’t Solve Consciousness

Silica · 10 Feb 2024 7:12 UTC
33 points
5 comments · 16 min read · EA link

Against Making Up Our Conscious Minds

Silica · 10 Feb 2024 7:12 UTC
13 points
0 comments · 5 min read · EA link

CLR Summer Research Fellowship 2024

Center on Long-Term Risk · 15 Feb 2024 18:26 UTC
89 points
2 comments · 8 min read · EA link

How I learned to stop worrying and love X-risk

Monero · 11 Mar 2024 3:58 UTC
9 points
0 comments · 1 min read · EA link

Expression of Interest: Director of Operations at the Center on Long-term Risk

AmritSidhu-Brar · 25 Jan 2024 18:43 UTC
55 points
0 comments · 6 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

sapphire · 20 Aug 2020 20:23 UTC
51 points
0 comments · 3 min read · EA link

S-risk Intro Fellowship

stefan.torges · 20 Dec 2021 17:26 UTC
52 points
1 comment · 1 min read · EA link

CLR’s Annual Report 2021

stefan.torges · 26 Feb 2022 12:47 UTC
79 points
0 comments · 12 min read · EA link

Curing past sufferings and preventing s-risks via indexical uncertainty

turchin · 27 Sep 2018 10:48 UTC
1 point
18 comments · 4 min read · EA link

Arguments for Why Preventing Human Extinction is Wrong

Anthony Fleming · 21 May 2022 7:17 UTC
34 points
48 comments · 3 min read · EA link

Promoting compassionate longtermism

jonleighton · 7 Dec 2022 14:26 UTC
117 points
5 comments · 12 min read · EA link

How often does One Person succeed?

Maynk02 · 28 Oct 2022 19:32 UTC
6 points
0 comments · 3 min read · EA link

Center on Long-Term Risk: 2023 Fundraiser

stefan.torges · 9 Dec 2022 18:03 UTC
169 points
4 comments · 16 min read · EA link

Sentience Institute 2022 End of Year Summary

MichaelDello · 25 Nov 2022 12:28 UTC
48 points
0 comments · 7 min read · EA link
(www.sentienceinstitute.org)

The problem of artificial suffering

mlsbt · 24 Sep 2021 14:43 UTC
49 points
3 comments · 9 min read · EA link

First S-Risk Intro Seminar

stefan.torges · 8 Dec 2020 9:23 UTC
70 points
2 comments · 1 min read · EA link

Simulators and Mindcrime

𝕮𝖎𝖓𝖊𝖗𝖆 · 9 Dec 2022 15:20 UTC
1 point
0 comments · 1 min read · EA link

New Book: “Reasoned Politics” + Why I have written a book about politics

Magnus Vinding · 3 Mar 2022 11:31 UTC
90 points
9 comments · 5 min read · EA link

Could a ‘permanent global totalitarian state’ ever be permanent?

Geoffrey Miller · 23 Aug 2022 17:15 UTC
39 points
17 comments · 1 min read · EA link

Longtermism and Animal Farming Trajectories

MichaelDello · 27 Dec 2022 0:58 UTC
51 points
8 comments · 17 min read · EA link
(www.sentienceinstitute.org)

Part 1/4: A Case for Abolition

Dhruv Makwana · 11 Jan 2023 13:46 UTC
33 points
7 comments · 3 min read · EA link

Highest priority threat: infinite torture

KArax · 26 Jan 2023 8:51 UTC
−39 points
1 comment · 9 min read · EA link

80k podcast episode on sentience in AI systems

rgb · 15 Mar 2023 20:19 UTC
85 points
2 comments · 1 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
62 points
31 comments · 8 min read · EA link

Sentience Institute 2023 End of Year Summary

MichaelDello · 27 Nov 2023 12:11 UTC
25 points
0 comments · 5 min read · EA link
(www.sentienceinstitute.org)

Future technological progress does NOT correlate with methods that involve less suffering

Jim Buhler · 1 Aug 2023 9:30 UTC
60 points
12 comments · 4 min read · EA link

A selection of some writings and considerations on the cause of artificial sentience

Raphaël_Pesah · 10 Aug 2023 18:23 UTC
43 points
1 comment · 10 min read · EA link

Conflicting Effects of Existential Risk Mitigation Interventions

Pete Rowlett · 10 May 2023 22:20 UTC
10 points
0 comments · 8 min read · EA link

[Question] Asking for online calls on AI s-risks discussions

jackchang110 · 14 May 2023 13:58 UTC
26 points
3 comments · 1 min read · EA link

Why suffering risks are the worst existential risks, and how we can prevent them

EA Italy · 17 Jan 2023 11:14 UTC
1 point
0 comments · 1 min read · EA link

Theory: “WAW might be of higher impact than x-risk prevention based on utilitarianism”

Jens Aslaug · 12 Sep 2023 13:11 UTC
51 points
20 comments · 17 min read · EA link

Briefly how I’ve updated since ChatGPT

rime · 25 Apr 2023 19:39 UTC
29 points
7 comments · 2 min read · EA link
(www.lesswrong.com)

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · 1 Dec 2023 18:01 UTC
38 points
2 comments · 42 min read · EA link

[Linkpost] My moral view: Reducing suffering, ‘how to be’ as fundamental to morality, no positive value, cons of grand theory, and more – By Simon Knutsson

Alistair Webster · 25 Aug 2023 12:53 UTC
35 points
2 comments · 2 min read · EA link
(centerforreducingsuffering.org)

2024 S-risk Intro Fellowship

Center on Long-Term Risk · 12 Oct 2023 19:14 UTC
88 points
2 comments · 1 min read · EA link

Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot

Jim Buhler · 23 May 2023 13:07 UTC
62 points
5 comments · 16 min read · EA link

New s-risks audiobook available now

Alistair Webster · 24 May 2023 20:27 UTC
87 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)