S-risk

Last edit: 30 Oct 2021 21:06 UTC by MichaelA

An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk | pain and suffering | suffering-focused ethics

[Crosspost] Reducing Risks of Astronomical Suffering: A Neglected Priority

Bob Jacobs · 14 Sep 2016 15:21 UTC
44 points
1 comment · 12 min read · EA link
(longtermrisk.org)

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
40 points
3 comments · 1 min read · EA link
(s-risks.org)

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
25 points
1 comment · 1 min read · EA link
(s-risks.org)

Sentience Institute 2021 End of Year Summary

Ali · 26 Nov 2021 14:40 UTC
60 points
5 comments · 6 min read · EA link
(www.sentienceinstitute.org)

Max Daniel: Why s-risks are the worst existential risks, and how to prevent them

EA Global · 2 Jun 2017 8:48 UTC
6 points
0 comments · 1 min read · EA link
(www.youtube.com)

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
37 points
3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

A longtermist critique of “The expected value of extinction risk reduction is positive”

antimonyanthony · 1 Jul 2021 21:01 UTC
80 points
8 comments · 46 min read · EA link

S-risk FAQ

Tobias_Baumann · 18 Sep 2017 8:05 UTC
29 points
8 comments · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA · 15 Jul 2020 12:28 UTC
67 points
2 comments · 7 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · 16 Oct 2021 11:33 UTC
62 points
23 comments · 24 min read · EA link

[Question] Where should I donate?

evelynciara · 22 Nov 2021 20:56 UTC
29 points
9 comments · 1 min read · EA link

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
63 points
0 comments · 1 min read · EA link

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · 7 May 2021 14:32 UTC
8 points
0 comments · 1 min read · EA link
(anchor.fm)

Launching the EAF Fund

stefan.torges · 28 Nov 2018 17:13 UTC
60 points
14 comments · 4 min read · EA link

Cause prioritization for downside-focused value systems

Lukas_Gloor · 31 Jan 2018 14:47 UTC
70 points
10 comments · 48 min read · EA link

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · 3 Oct 2021 16:55 UTC
29 points
18 comments · 1 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · 18 Oct 2021 14:07 UTC
22 points
2 comments · 8 min read · EA link
(www.sentienceinstitute.org)

Brian Tomasik – The Importance of Wild-Animal Suffering

TianyiQ · 8 Jul 2009 12:42 UTC
11 points
0 comments · 1 min read · EA link
(longtermrisk.org)

First S-Risk Intro Seminar

stefan.torges · 8 Dec 2020 9:23 UTC
68 points
2 comments · 1 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

deluks917 · 20 Aug 2020 20:23 UTC
42 points
0 comments · 3 min read · EA link

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

seanrson · 14 Sep 2021 21:00 UTC
66 points
10 comments · 1 min read · EA link

The problem of artificial suffering

Martin Trouilloud · 24 Sep 2021 14:43 UTC
40 points
3 comments · 9 min read · EA link

[Creative Writing Contest] The Legend of the Goldseeker

aman-patel · 21 Oct 2021 21:31 UTC
8 points
5 comments · 6 min read · EA link
(amanjpatel.notion.site)

S-risk Intro Fellowship

stefan.torges · 20 Dec 2021 17:26 UTC
51 points
0 comments · 1 min read · EA link