
S-risk


An s-risk, or suffering risk, is a risk involving the creation of suffering on an astronomical scale.

Related entries

Center for Reducing Suffering | Center on Long-Term Risk | ethics of existential risk

Max Daniel: Why s-risks are the worst existential risks, and how to prevent them

EA Global · 2 Jun 2017 8:48 UTC
6 points · 0 comments · 1 min read · EA link
(www.youtube.com)

S-risk FAQ

Tobias_Baumann · 18 Sep 2017 8:05 UTC
26 points · 8 comments · EA link

Cause prioritization for downside-focused value systems

Lukas_Gloor · 31 Jan 2018 14:47 UTC
65 points · 10 comments · 48 min read · EA link

Launching the EAF Fund

stefan.torges · 28 Nov 2018 17:13 UTC
60 points · 14 comments · 4 min read · EA link

A typology of s-risks

Tobias_Baumann · 21 Dec 2018 18:23 UTC
25 points · 1 comment · 1 min read · EA link
(s-risks.org)

Risk factors for s-risks

Tobias_Baumann · 13 Feb 2019 17:51 UTC
38 points · 3 comments · 1 min read · EA link
(s-risks.org)

[Link] EAF Research agenda: “Cooperation, Conflict, and Transformative Artificial Intelligence”

stefan.torges · 17 Jan 2020 13:28 UTC
61 points · 0 comments · 1 min read · EA link

Venn diagrams of existential, global, and suffering catastrophes

MichaelA · 15 Jul 2020 12:28 UTC
63 points · 2 comments · 7 min read · EA link

Animal Rights, The Singularity, and Astronomical Suffering

deluks917 · 20 Aug 2020 20:23 UTC
42 points · 0 comments · 3 min read · EA link

First S-Risk Intro Seminar

stefan.torges · 8 Dec 2020 9:23 UTC
62 points · 2 comments · 1 min read · EA link

How can we reduce s-risks?

Tobias_Baumann · 29 Jan 2021 15:46 UTC
37 points · 3 comments · 1 min read · EA link
(centerforreducingsuffering.org)

[3-hour podcast]: Milan Cirkovic on the ethics of aliens, astrobiology and civilizations elsewhere in the universe

Gus Docker · 7 May 2021 14:32 UTC
8 points · 0 comments · 1 min read · EA link
(anchor.fm)

A longtermist critique of “The expected value of extinction risk reduction is positive”

antimonyanthony · 1 Jul 2021 21:01 UTC
76 points · 8 comments · 46 min read · EA link

Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism)

seanrson · 14 Sep 2021 21:00 UTC
66 points · 10 comments · 1 min read · EA link

The problem of artificial suffering

Martin Trouilloud · 24 Sep 2021 14:43 UTC
38 points · 3 comments · 9 min read · EA link

[Question] Why does (any particular) AI safety work reduce s-risks more than it increases them?

MichaelStJules · 3 Oct 2021 16:55 UTC
26 points · 18 comments · 1 min read · EA link

Suffering-Focused Ethics (SFE) FAQ

EdisonY · 16 Oct 2021 11:33 UTC
55 points · 20 comments · 24 min read · EA link

Prioritization Questions for Artificial Sentience

Jamie_Harris · 18 Oct 2021 14:07 UTC
22 points · 3 comments · 8 min read · EA link
(www.sentienceinstitute.org)

[Creative Writing Contest] The Legend of the Goldseeker

aman-patel · 21 Oct 2021 21:31 UTC
1 point · 1 comment · 6 min read · EA link
(amanjpatel.notion.site)