Edit: I just noticed that this post I’m commenting on is 2 years old (it came up in my feed and I thought it was new). So, the post wasn’t outdated at the time!
Suffering risks (also known as risks of astronomical suffering, or s-risks) are typically defined as “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far” (Daniel, 2017).[7]
That definition is outdated (at least with respect to how CLR thinks about it). The newer definition is the first sentence in the source you link to (it’s a commentary by CLR on the 2017 talk):
S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.
Reasons for the change: (1) Calling the future scenario “galaxy-wide utopia where people still suffer headaches every now and then” an “s-risk” may carry the (always unintended) connotation that this entire future scenario ought to be prevented. Over the years, my former colleagues at CLR and I received a lot of feedback (e.g., here and here) that this aspect of the older definition was off-putting.
(2) Calling something an “s-risk” when it doesn’t constitute a plausible practical priority even for strongly suffering-focused longtermists may generate the impression that s-risks are generally unimportant. Under the new definition, s-risks as defined* are unlikely to be a mere rounding error for most longtermist views (except maybe if your normative views imply a 1:1 exchange rate between utopia and dystopia).
(*S-risks may still turn out to be negligible in practice for longtermist views that aren’t strongly focused on reducing suffering if particularly bad futures are really unlikely empirically or if we can’t find promising interventions. [Edit: FWIW, I think there are tractable interventions and s-risks don’t seem crazy unlikely to me.])
Thanks for flagging this! I’ve now updated my post to include this new definition (I still use the old one first, but have added an explicit update in the main text).
This definition does seem better to me, for the reasons you mention.