Some considerations for different ways to reduce x-risk

I believe the far future is a very important consideration in doing the most good, but I don’t focus on reducing extinction risks like those from unfriendly artificial intelligence. This post outlines some of the key considerations that went into that decision, leaving discussion of the best answers to those considerations for future work.

The different types of x-risk

In “Astronomical Waste”, written by Nick Bostrom in 2003, an existential risk (commonly referred to as an x-risk) is described as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In effective altruism and rationalist circles, we have most commonly addressed x-risks of the former type, those that could “annihilate Earth-originating intelligent life.” Let’s call those population risks.

The other risks could “permanently and drastically curtail [Earth-originating intelligent life’s] potential.” Let’s assume our “potential” is to create a very large and very good civilization, with “very large” referring to the number of sentient beings and “very good” left up to the reader. (I’ll assume you believe there are at least some scenarios with a very large civilization that wouldn’t be very good, such as those filled with more suffering than happiness.)

I think humanity could fail in either or both of these respects. In this post, I’ll focus on the risk of creating a very large but not very good civilization, but I think the other types are worth exploring some other time. We’ll use the term quality risk to mean a risk of having a very large civilization that’s not very good.

Note that considering quality risks explicitly isn’t a new idea at all. Nick Beckstead wrote about it in August 2014, referring to previous mentions of it in 2009, 2013, and another in 2014. Beckstead explicitly states, “Astronomical waste may involve changes in quality of life, rather than size of population.”

“I think we should focus our efforts almost entirely on improving the expected value of the far future. Should I work on quality risks or extinction risks?”

I think more people in the effective altruism and rationalist communities should be asking themselves this question. I think people often make the jump from “the far future is very important” to “I should work on the most important extinction risks” too quickly.

“But what if I’m not exclusively focused on the far future?”

There are reasonable justifications for not focusing entirely on the far future, such as concerns about the tractability of making a substantive difference, and wanting to give limited weight to linear arguments like the one most commonly used to justify focusing on the far future. I personally give significant weight to both far future and near-term outcomes. But for this post, I’ll focus exclusively on far future impact.

“But why can’t I just reduce both sorts of x-risk?”

Some activities might reduce both quality and extinction risks. For example, getting more people involved in EA might increase the number of people working in each area. Also, research that increases the likelihood of friendly AI might not only reduce the risk of extinction, but also might affect the relative likelihoods of different non-extinction AI scenarios, some of which might be better than others. I think this is super interesting, but for this post, I’ll only consider the tradeoff itself.
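To make the tradeoff concrete, here’s a minimal toy sketch of how the two kinds of work enter the expected value of the far future: extinction risk work raises the probability that we survive, while quality risk work raises how good the future is conditional on survival. The numbers below are entirely made up for illustration and are not estimates from this post or anywhere else.

```python
# Toy two-factor model:  EV = P(survival) * E[value | survival]
# Extinction-risk work raises P(survival); quality-risk work raises E[value | survival].
# All numbers are illustrative assumptions, not estimates.

def expected_value(p_survival: float, value_if_survive: float) -> float:
    """Expected value of the far future under this simple two-factor model."""
    return p_survival * value_if_survive

baseline = expected_value(p_survival=0.8, value_if_survive=100)

# Hypothetical intervention A: reduce extinction risk by one percentage point.
ev_extinction_work = expected_value(p_survival=0.81, value_if_survive=100)

# Hypothetical intervention B: improve the quality of the future (e.g. wider moral circles).
ev_quality_work = expected_value(p_survival=0.8, value_if_survive=102)

print(ev_extinction_work - baseline)  # gain from extinction-risk work: 1.0
print(ev_quality_work - baseline)     # gain from quality-risk work: 1.6

# Note: if E[value | survival] were negative, raising P(survival) would lower
# expected value, which connects to the "moral value of the far future"
# consideration discussed below.
```

Which intervention wins in a model like this depends entirely on the inputs, which is exactly what the considerations below try to get at.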

Considerations

To better understand whether to focus on quality risks or extinction risks, I think there are a number of informative questions we can ask. Unfortunately, the questions whose answers would be most useful also seem quite intractable, and little research has been done on them.

Because quality risk is such a diverse category, I’ll use widening our moral circles as an example of how we could increase the expected value of the far future, given that we continue to exist, mainly because it’s the approach I see as most promising and have considered most fully. By this, I mean, “Increasing the extent to which we accommodate the interests of all sentient individuals, rather than just those who are most similar to us or have the most power in society.” Narrow moral circles and a lack of concern for all sentient beings could make civilization much worse, given that it continues to exist, which would pose a substantial quality risk.

Here’s a list of some particularly interesting considerations, in my opinion. Explanations are included for considerations with less obvious relevance, and I think there’s a good chance I’m leaving out a few important ones or not breaking them down optimally (e.g. “tractability” could be segmented into several different important considerations).

  • Quality risk tractability: How tractable is social change, such as widening moral circles? e.g. Does activism actually improve society, or is moral change driven mainly by forces like globalization and economic optimization?

  • Extinction risk tractability: How tractable is reducing extinction risk? e.g. Can determined individuals and small groups affect how humanity uses its technology, or will powerful, hard-to-influence governments determine our fate?

  • Scale: Are dystopian futures sufficiently likely that reducing quality risks has more potential impact, since these risks would constitute the difference between a far future much worse than nonexistence and a far future much better than nonexistence?

  • Neglectedness: How many resources are currently being spent on each sort of risk? Is one more neglected, either in terms of attention from the effective altruism community or in terms of society as a whole?

    (We could expect diminishing returns from investment in a specific cause area as more and more people take the “low hanging fruit.” Increasing returns might also apply in some situations.)

  • Inevitability of moral progress: Is the “moral fate” of the universe inevitable, given we continue to exist? Are we inexorably doomed because of the intrinsic evils of markets and evolution (i.e. systems that optimize for outcomes like survival and potentially lead to morally bad outcomes like mass suffering)? Or are we destined for utopia because we’ll implement a powerful AI that creates the best possible universe?

    (If moral fate is inevitable, then it’s not much use trying to change it.)

  • Moral value of the far future: In expectation, is humanity’s continued existence a good thing? Will we create a utopian shockwave across the universe, or create cruel simulations of large numbers of sentient beings for our amusement or experimentation?

    (If the expected outcome is neutral or bad, then working on extinction risk seems less promising and maybe even harmful.)

  • Political power in the far future: Will all sentient beings of the future, if they exist, be politically powerful in the sense humans in the developed world (arguably) are today? Or will many of them be like the animals suffering in animal agriculture?

    (If they are powerful, then it seems widening our moral circles isn’t as important, but if the powerful beings of the future need to account for the interests of powerless individuals, then having wide moral circles seems very important.)

  • Quality of life getting ‘stuck’: If we continue to exist, will our quality of life get ‘stuck’ at some point, such as if an authoritarian government comes into power or we start sending probes out into space that are unable to communicate with others?

    (Quality improvements seem more urgent the more likely quality of life is to get stuck.)

Future research and discussion

Further analyses of these considerations and the introduction of new ones could be quite valuable (as noted in the comments, some analyses of some of these questions already exist). My best guesses on these questions — which are largely just intuitions at this point — lead me to favor working on widening our moral circles instead of reducing extinction risk, but I could change my mind on that, and I hope others would also be willing to do so. It’s also worth considering how much to prioritize finding better answers to these questions, even if they seem mostly intractable.

I worry that many of us who focus on long-term impact haven’t given much thought to these considerations and mostly just went with the norms of our social circles. Upon considering them now, I think it’s tempting to just settle them in the direction that favors our current activities, and I hope people try to avoid doing so.