Increasing existential hope as an effective cause?
In a recent report, Toby Ord and I introduce the idea of ‘existential hope’: roughly, the chance of something extremely good happening. Decreasing existential risk is a popular cause area among effective altruists who care about the far future. Could increasing existential hope be another useful area to consider?
Trying to increase existential hope amounts to identifying something which would be very good for the expected future value of the world, and then trying to achieve it. This could include getting more long-term-focused governance (where the benefit may come from reduced existential risk once that state is reached), or effecting a value shift in society so that it is normal to care about avoiding suffering (where the benefit may come from a much lower chance of large amounts of future suffering).
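As a very rough toy model (my framing here, glossing over many complications): suppose the future yields an extremely good outcome with probability p_good, an existential catastrophe with probability p_bad, and a "default" outcome otherwise. Then the expected value of the future is roughly

\[
\mathbb{E}[V] \;=\; p_{\mathrm{good}}\,V_{\mathrm{good}} \;+\; p_{\mathrm{bad}}\,V_{\mathrm{bad}} \;+\; \bigl(1 - p_{\mathrm{good}} - p_{\mathrm{bad}}\bigr)\,V_{\mathrm{default}}.
\]

On this picture, reducing existential risk is about lowering p_bad, while increasing existential hope is about raising p_good.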
What other existential hopes could we aim for?
Technical note: the idea of increasing existential hope is similar to that of a trajectory change, as explained in section 1.1.2.3 of Nick Beckstead’s thesis. It is distinct in that it is extremely hard to tell when a trajectory change has actually occurred, because we don’t know what the long-term future will look like; in contrast, we can form a much better idea of how our actions change expectations.
Thanks for the paper, Owen.
Existential hope sounds like the opposite of existential despair, rather than of existential risk, and could add to the already common confusion around that term! Of course, it’s only a private paper, but since it’s designed to establish terminology, it’s something to think about.
When I first heard of Bostrom’s phrase “existential risk”, I felt it was overly philosophical because it sounded like a concept in existentialism. I agree with Owen+Toby’s paper that “extinction risk” is already adequate when talking about extinction.
Words are sticky, so it may be hard to ditch “existential risk”, but if we were doing it over again, I’d choose something else, like “astronomical risks” and “astronomical benefits”.
Thanks Ryan, I hadn’t actually spotted that issue with the term “existential hope”. I don’t think it’s necessarily enough to sink the term, but it’s worth being aware of.
English doesn’t have a word for the chance of a good thing, which makes it awkward to find the right term. Earlier drafts just stuck to “existential eucatastrophe”, which has the right meaning. However, it was pointed out that “eucatastrophe” is very obscure, and most people would see the word ‘catastrophe’ inside it and assume it meant something bad. We wanted a term which would give something of the right impression.
Perhaps you could use the phrase ‘existential reward’ instead of ‘existential eucatastrophe’?
That falls a bit flat for me: it has neither the right connotations (as ‘existential dream’ more or less has) nor the right actual meaning (as ‘existential eucatastrophe’ has).
I like “x-dream”!
“Existential dream” is an interesting alternative. It carries some helpful connotations. However, it doesn’t sound like the kind of thing you can increase or decrease, which makes it less good as the term mirroring “existential risk”.
You seem to be using it closer to the sense of “existential eucatastrophe”. I admit that it’s more grokkable than that!
Okay, I thought x-hope/eucatastrophe were the same thing. I was thinking of “existential dream” for the latter.
For a positive parallel to “risk” that can increase or decrease, I’d use “potentiality”. Potential/potentiality is generally used to refer to something good/beneficial, but I can’t say that “existential potentiality” exactly rolls off the tongue!
‘Existential opportunity’?
Everything I can think of sounds mercantile: ‘existential profit’, ‘existential gain’.
This is the kind of nugget I visit this forum for! I have x-dreams of veganism and EA becoming the norm globally. Other x-dreams I can think of are: making electricity out of nothing (or close to it); high-yield, drought-resistant crops; a technological breakthrough that permits people to work less, or not at all; Islam going the way of Christianity in giving up violence; a strong African Union that keeps peace on the continent; a way of preventing global warming or of cooling the earth; an end to the practice of girl-killing in Asia that is responsible for huge gender imbalances; compassion replacing domination as the prevailing worldview/lifestyle choice; and an end to the anonymous shell companies and secret bank accounts that enable the corrupt.
I think all of those are great. But I’m more doubtful that taking work out of the equation will improve society: how will we ensure that the surplus is distributed reasonably?
My x-hopes are really a kind of success criteria for the movement: a culture built around evidence, scientific reasoning, and trial and error in policy, medicine and other important areas; a culture of prioritising which problems to solve based on suffering, human flourishing and equality (including putting the brakes on trial and error in some areas, such as new technologies); and a global economic/political system that grows out of those two things (or whose creation is their genesis) and is immensely more effective at improving the human condition than what we have currently.
Probably the most important “good things that can happen” after FAI are:
Whole brain emulation. It would allow eliminating death, pain and physical violence, not to mention ending discrimination and social stratification on the basis of appearance (although serious investment in cybersecurity would be required).
Full automation of the labor required to maintain a comfortable standard of living for everyone. Avoiding a Malthusian catastrophe would still require a reasonable reproduction culture (especially given immortality due to e.g. WBE).
It seems like the development of these would increase expected value massively in the medium term. I’m not sure what the effect on long term expected value would be (because we’d expect to develop these at some point anyway in the long term).
Good point. In the long run, the important thing is to reach the best attractor point / asymptotic trajectory. We need to develop much better understanding of the space of attractors (“possible ultimate fates”) and the factors leading to reaching one rather than another. I’d say that the things I mentioned are definitely on the wish list of the attractor we want to select.