Existential risk as common cause

Summary: Why many different worldviews should prioritise reducing existential risk. Also an exhaustive list of people who can ignore this argument. (Writeup of an old argument I can’t find a source for.)

Confidence: 70%.

Crossposted from gleech.org.

---

Imagine someone who thought that art was the only thing that made life worth living. [1] What should they do? Binge on galleries? Work to increase the amount of art and artistic experience, by going into finance to fund artists? Or by becoming an activist for government funding for the arts? Maybe. But there’s a case that they should pay attention to ways the world might end: after all, you can’t enjoy art if we’re all dead.

1. Aesthetic experience is good in itself: it’s a terminal goal.
2. The extinction of life would destroy all aesthetic experience & prevent future experiences.
3. So reducing existential risk is good, if only to protect the conditions for aesthetic experience.

And this generalises to a huge range of values:

1. [good] is good in itself: it’s a terminal goal.
2. The extinction of life would destroy [good], and prevent future [good].
3. So reducing existential risk is good, if only to protect the conditions for [good].

Caspar Oesterheld gives a few examples of what people can plug into those brackets:

Abundance, achievement, adventure, affiliation, altruism, apatheia, art, asceticism, austerity, autarky, authority, autonomy, beauty, benevolence, bodily integrity, challenge, collective property, commemoration, communism, community, compassion, competence, competition, competitiveness, complexity, comradery, conscientiousness, consciousness, contentment, cooperation, courage, [crab-mentality], creativity, crime, critical thinking, curiosity, democracy, determination, dignity, diligence, discipline, diversity, duties, education, emotion, envy, equality, equanimity, excellence, excitement, experience, fairness, faithfulness, family, fortitude, frankness, free will, freedom, friendship, frugality, fulfillment, fun, good intentions, greed, happiness, harmony, health, honesty, honor, humility, idealism, idolatry, imagination, improvement, incorruptibility, individuality, industriousness, intelligence, justice, knowledge, law abidance, life, love, loyalty, modesty, monogamy, mutual affection, nature, novelty, obedience, openness, optimism, order, organization, pain, parsimony, peace, peace of mind, pity, play, population size, preference fulfillment, privacy, progress, promises, property, prosperity, punctuality, punishment, purity, racism, rationality, reliability, religion, respect, restraint, rights, sadness, safety, sanctity, security, self-control, self-denial, self-determination, self-expression, self-pity, simplicity, sincerity, social parasitism, society, spirituality, stability, straightforwardness, strength, striving, subordination, suffering, surprise, technology, temperance, thought, tolerance, toughness, truth, tradition, transparency, valor, variety, veracity, wealth, welfare, wisdom.

So “from a huge variety of viewpoints, the end of the world is bad”? What a revelation!

The above is only interesting if we can get from “it’s good to reduce x-risk” to “it’s the most important thing to do” for these values. This would be the case if 1) extinction were relatively likely, relatively soon, and 2) we could do something about it. We can’t be very confident of either, but there are good reasons both to worry and to plan.

(If you think that we can only be radically uncertain about the future, note that this implies you should devote more attention to the worst scenarios, not less: ‘high uncertainty’ is not the same as ‘low probability’.)

It’s hard to say at what precise level of confidence and discount rate this argument overrides direct promotion of [good]; I’m claiming that it’s implausible that your one lifetime of direct promotion would outweigh all future instances, if you’re a consequentialist and place reasonable weight on future lives.
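A minimal sketch of that comparison, with made-up numbers (the count of future lives, the achievable risk reduction, and the catch-all discount below are illustrative assumptions, not estimates):

```python
# Toy comparison: one lifetime of directly promoting [good] vs. a career spent
# shaving a tiny amount off extinction risk. All numbers are hypothetical.

lifetime_value = 1.0      # value of one lifetime of direct promotion (arbitrary units)
future_lifetimes = 1e10   # assumed number of future lives that could instantiate [good]
risk_reduction = 1e-6     # assumed cut in extinction probability from one career
discount = 1e-2           # crude catch-all discount: tractability doubts, time preference, etc.

direct = lifetime_value
via_xrisk = risk_reduction * future_lifetimes * lifetime_value * discount

print(direct, via_xrisk)  # 1.0 vs 100.0 under these numbers
```

The only point is that the future term dominates unless you discount it very steeply; change the assumptions and the conclusion can flip.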

When I first wrote this, I thought the argument had more force for people with high moral uncertainty - i.e. the more of Oesterheld’s list you think are plausibly actually terminal goods, the more you’d focus on x-risk. But I don’t think that follows, and anyway there are much stronger kinds of uncertainty, involving not just which terminal values you credit, but whether there are moral properties at all, whether maximisation is imperative, whether promotion or honouring counts as good. The above argument is about goal-independence (within consequentialism), and says nothing about framework-independence. So:


Who doesn’t have to work on reducing x-risk?

* People with incredibly high confidence that nothing can be done to affect extinction (that is, well above 99% confidence).

* Avowed egoists. (Though Scheffler argues that even they have to care here.)

* ‘Parochialists’: People who think that the responsibility to help those you’re close to outweighs your responsibility to any number of distant others.

* People with values that don’t depend on the world:

  * Nihilists, or other people who think there are no moral properties.
  * People with an ‘honouring’ kind of ethics, like Kantians, Aristotelians, or some religions. Philip Pettit makes a helpful distinction: when you act, you can either ‘honour’ a value (directly instantiate it) or ‘promote’ it (make more opportunities for it, make it more likely in future). This is a key difference between consequentialism and two of the other big moral theories (deontology and virtue ethics): the latter two only value honouring. This could get them off the logical hook: unless “preventing extinction” were a duty or virtue itself, or fit easily under another duty or virtue, there would be no moral force behind working on it. (You could try to construe reducing x-risk as “care for others” or “generosity”.) [2]


* People who disvalue life:

  * Absolute negative utilitarians or antinatalists: people who think that life is generally negative in itself.
  * People who think that human life has, and will continue to have, net-negative effects. Of course, a deep ecologist who sided with extinction would be hoping for a horrendously narrow event, one which ends all human life but spares the rest; they’d still have to work against the risks which end all life, which covers the artificial x-risks.
  * Ordinary utilitarians might also be committed to this view, in certain terrible contingencies (e.g. if we inexorably increased the number of suffering beings via colonisation or simulation).


* People who prioritise suffering risks: the end of the world is not the worst scenario; you might instead have a world with unimaginable amounts of suffering lasting a very long time, an ‘S-risk’. You might work on those instead. This strikes me as admirable and important; it just doesn’t have the complete value-independence that impressed me about the argument at the start of this piece.

* People who don’t think that probability estimates or expected value should be used for moral decisions. (‘Intuitionists’.)

* ‘Satisficers’: you might view the Good as a threshold, where having some amount of the good is vitally important, but any more than that has no moral significance. This seems less plausible to me than maximisation.


Uncertainties

* We really don’t know how tractable these risks are: we haven’t acted, as a species, on unprecedented century-long projects with literally only one chance of success. (But again, this uncertainty doesn’t license inactivity, because the downside is so large.)

* I previously had the following exempted:

People with incredibly high confidence that extinction will not happen (that is, well above 99% confidence). This is much higher confidence than most people who have looked hard at the matter.

But Ord argues that these people should actually prioritise x-risk, since extinction being very hard to bring about implies a long expected future, and so much greater expected value at stake. It’s not clear what assumptions his model makes, beyond a low discount rate and at least minimal returns to x-risk reduction; a toy version of the shape of the argument is sketched after this list. (h/t makaea)


* There is some chance that our future will be negative—especially if we spread normal ecosystems to other planets, or if hyper-detailed simulations of people turn out to have moral weight. If the risk increased (if the moral circle stopped expanding, if research into phenomenal consciousness and moral weight stagnated), these could ‘flip the sign’ on extinction, for me.

* I was going to add people with ‘person-affecting’ views to the exemption list. But if the probability of extinction in the next 80 years (one lifetime) is high enough (1%?), then they probably have to act too, even while ignoring future generations: a 1% chance of extinction within a lifetime is tens of millions of expected deaths among people alive today.

* Most people are neither technical researchers nor willing to go into government. So if x-risk organisations ran out of “room for more funding”, most people would be off the hook (back to maximising their terminal goal directly) until more room opened up.

* We don’t really know how common real deontologists are. (That one study has n = 1000 and is about Sweden, probably an unusually consequentialist place.) As value-honourers, they can maybe duck most of the force of the argument.

* Convergence arguments, like the one above, are often suspect when humans are persuading themselves or others.
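Here is a toy reconstruction of the shape of Ord’s point, under assumptions he may not actually make: a constant annual extinction hazard and no pure time discounting. With annual hazard h, the expected future is roughly 1/h years, so the more confident you are that extinction is unlikely (the smaller h), the more expected future a given proportional risk reduction buys:

```python
# Toy reconstruction (my assumptions, not Ord's model): constant annual extinction
# hazard h, no discounting. The expected future span is 1/h years, so believing
# extinction is very unlikely implies *more* expected future at stake, not less.

def expected_future_years(h):
    return 1.0 / h

def years_gained(h, relative_reduction):
    # Extra expected years from cutting the hazard by the given fraction.
    return expected_future_years(h * (1 - relative_reduction)) - expected_future_years(h)

for h in (1e-2, 1e-4, 1e-6):
    print(h, years_gained(h, 0.01))  # the same 1% cut buys ~1, ~101, ~10,101 extra expected years
```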

---

[1]: For example, Nietzsche said ‘Without music, life would be a mistake.’ (Though strictly this is bluster: he certainly valued many other things.)

[2]: Pummer claims that all “minimally plausible” versions of the honouring ethics must include some promotion. But I don’t see how they can, without being just rule-utilitarians in disguise.

EDIT 8/12/18: Formatting. Also added Ord’s hazard rate argument, h/t makaea.