Avoiding futures with astronomical amounts of suffering (s-risks) is a plausible priority from the perspective of many value systems, particularly suffering-focused ones. But given the highly abstract and often speculative nature of such future scenarios, what can we actually do now to reduce s-risks?
In this post, I’ll give an overview of the priority areas that have been identified in suffering-focused cause prioritisation research to date. Of course, this is subject to great uncertainty, and it could be that the most effective ways to reduce s-risks turn out to be quite different from the interventions outlined below.
A comprehensive evaluation of each of the main priority areas is beyond the scope of this post, but in general, I have included interventions that seem sufficiently promising in terms of importance, tractability, and neglectedness. I have excluded candidate interventions that are too difficult to influence, or are likely to backfire by causing great controversy or backlash (e.g. trying to stop technological progress altogether). When reducing s-risks, we should seek to find common ground with other value systems; accordingly, many of the following interventions are worthwhile from many perspectives.
Great piece, thanks!
Since you devoted a subsection to moral circle expansion as a way of reducing s-risks, I guess you consider that its beneficial effects outweigh the backfire risks you mention (at least if MCE is done “in the right way”). CRS’ 2020 End-of-Year Fundraiser post also conveys optimism regarding the impact of increasing moral consideration for artificial minds (the only remaining doubts seem to be about when and how to do it).
I wonder how confident we should be, at this point, that MCE has a positive effect on reducing s-risks? Have you, or other researchers, made estimates supporting this, for instance? :)
EDIT: Your piece Arguments for and against moral advocacy (2017) already raises relevant considerations but perhaps your view on this issue is clearer now.
Thanks for the comment; this raises a very important point.
I am indeed fairly optimistic that thoughtful forms of MCE are positive regarding s-risks, although the qualifier of “in the right way” should be taken very seriously: I’m much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and to distinguish between different possible forms of it, rather than trying to make a generalised statement for or against MCE.
This is, however, not a very strongly held view (despite my having thought a lot about it), in light of great uncertainty and some degree of peer disagreement (other researchers are less sanguine about MCE).
Thanks for writing this!
Do you think any of the interventions are particularly good from an importance, tractability, and neglectedness point of view?
Do you have a favourite?