I agree that S-risks are more neglected within EA than extinction risks, and I find it plausible that this is because many people associate S-risks with negative utilitarianism. I’m a regular utilitarian, and I’ve reached the conclusion that S-risks are quite important and neglected; I hope this counters that perception of those focused on S-risks.
Two recent, related articles by Magnus Vinding that I enjoyed reading:
- Point-by-point critique of [Toby Ord’s] “Why I’m Not a Negative Utilitarian”
- The dismal dismissal of suffering-focused views
Strong upvote. My personal intuitions are suffering-focused, but I’m currently convinced that I ought to do whatever evidential cooperation in large worlds (ECL) implies. I don’t know exactly what that is, but I find it eminently plausible that it’ll imply that extinction and suffering are both really, really bad, and S-risks, especially under some of the newer, more extreme definitions, even more so.
Before ECL, my thinking was basically: “I know of dozens of plausible models of ethics. They contradict each other in many ways. But none of them is in favor of suffering. In fact, a disapproval of many forms of suffering seems to be an unusually consistent theme in all of them, more consistent than any other theme that I can identify.[1] Methods to quantify tradeoffs between the models are imprecise (e.g., moral parliaments). Hence I should, for now, focus on alleviating the forms of suffering that these models consistently condemn.”
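For concreteness, here’s a minimal sketch of the simplest credence-weighted version of such a method. This is only my toy illustration, not the moral-parliament proposal itself (a real parliament has delegates bargaining rather than a straight weighted vote), and every theory, credence, and score below is an invented placeholder:

```python
# Toy credence-weighted "moral parliament": each ethical theory gets votes in
# proportion to my credence in it and endorses actions to varying degrees.
# All theories, credences, and scores are invented placeholders.

credences = {  # hypothetical credence in each theory; sums to 1
    "classical utilitarianism": 0.4,
    "negative utilitarianism": 0.3,
    "deontology": 0.3,
}

# How strongly each theory endorses each intervention, on an arbitrary -1..1 scale.
scores = {
    "reduce suffering": {
        "classical utilitarianism": 0.8,
        "negative utilitarianism": 1.0,
        "deontology": 0.7,
    },
    "reduce extinction risk": {
        "classical utilitarianism": 1.0,
        "negative utilitarianism": -0.2,
        "deontology": 0.5,
    },
}

def weighted_approval(action):
    """Credence-weighted approval of an action across all theories."""
    return sum(credences[theory] * s for theory, s in scores[action].items())

for action in sorted(scores, key=weighted_approval, reverse=True):
    print(f"{action}: {weighted_approval(action):+.2f}")
```

The imprecision I mentioned lives in those made-up numbers: small changes to the credences or scores can reorder the actions, which is why I treat this only as a rough robustness check rather than a decision procedure.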
Reducing suffering – in all the many cases where doing so is unambiguously good across a wide range of ethical systems – still strikes me as at least as robust as reducing extinction risk.
[1] Some variation on universalizability, broadly construed, may be a contender.