Existential risk, and an alternative framework

One common issue with “existential risk” is that it’s so easy to conflate it with “extinction risk”. It seems that even you end up falling into this use of language. You say: “if there were 20 percentage points of near-term existential risk (so an 80 percent chance of survival)”. But human extinction is not necessary for something to be an existential risk, so 20 percentage points of near-term existential risk doesn’t entail an 80 percent chance of survival. (Human extinction may also not be sufficient for existential catastrophe, depending on how one defines “humanity”.)
Relatedly, “existential risk” blurs together two quite different ways of affecting the future. In your model: V = v̄τ. (That is: the value of humanity’s future is the average value of humanity’s future over time multiplied by the duration of humanity’s future.)
This naturally lends itself to the idea that there are two main ways of improving the future: increasing v̄ and increasing τ.
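To make the two levers concrete, here is a minimal sketch of the model in Python; the specific numbers are purely illustrative assumptions, not estimates from the chapter or from this comment.

```python
# Minimal sketch of the V = v̄·τ model; all numbers below are illustrative assumptions.

def future_value(v_bar: float, tau: float) -> float:
    """Value of the future = average value per year x duration in years."""
    return v_bar * tau

baseline   = future_value(v_bar=0.5, tau=1e9)  # assumed 0.5 "value units"/year for a billion years
better_avg = future_value(v_bar=1.0, tau=1e9)  # trajectory change: raise v̄
longer     = future_value(v_bar=0.5, tau=2e9)  # civilisational survival: raise τ

print(baseline, better_avg, longer)  # doubling either factor doubles V
```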
In What We Owe The Future I refer to the latter as “ensuring civilisational survival” and the former as “effecting a positive trajectory change”. (We’ll need to do a bit of syncing up on terminology.)
I think it’s important to keep these separate, because there are plausible views on which affecting one of these is much more important than affecting the other.
Some views on which increasing v̄ is more important:
If the future is of zero or net-negative value
If large drops in future population size are not of enormous importance (e.g. the average view, variable-value views)
If a nonhuman-originating civilisation would use the resources that we would otherwise use, and would be similarly good
Some views on which increasing τ is more important:
If there’s a “low” upper bound on value, which we expect almost all future civilisations to meet
If one thinks that moral convergence, conditional on survival, is very likely
What’s more, changes to τ are plausibly binary, but changes to v̄ are not. Plausibly, most probability mass is on τ being small (we go extinct in the next thousand years) or very large (we survive for billions of years or more). But, assuming for simplicity that there’s a “best possible” and “worst possible” future, v̄ could take any value between −100% and 100%. So focusing only on “drastic” changes, as the language of “existential risk” does, makes sense for changes to τ, but not for changes to v̄.
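A toy simulation makes the contrast vivid; both distributions below are invented purely for illustration and are not claims about the actual probabilities.

```python
# Toy illustration: τ is plausibly close to binary, while v̄ can take any intermediate value.
# Both distributions are made up for illustration only.

import random

def sample_tau_years() -> float:
    """Bimodal: extinction within ~1,000 years, or survival for billions of years."""
    return 1_000 if random.random() < 0.2 else 1_000_000_000

def sample_v_bar() -> float:
    """Continuous: anywhere between the worst (-1) and best (+1) possible average value."""
    return random.uniform(-1.0, 1.0)

print([sample_tau_years() for _ in range(10)])         # clusters at two values
print([round(sample_v_bar(), 2) for _ in range(10)])   # spread across the whole range
```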
Some views on which increasing v̄ is more important:
If the future is of zero or net-negative value
This is a good point, and it’s worth pointing out that increasing v̄ is always good whereas increasing τ is only good if the future is of positive value. So risk aversion reduces the value of increasing τ relative to increasing v̄, provided we put some probability on a bad future.
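A quick state-by-state sketch of that asymmetry, with invented numbers: raising v̄ adds value in every state, while raising τ adds value only where the future is positive.

```python
# State-by-state comparison with illustrative (assumed) numbers.

tau = 1e9                                            # assumed baseline duration in years
states = {"good future": 0.5, "bad future": -0.5}    # assumed v̄ in each state

for name, v_bar in states.items():
    baseline    = v_bar * tau
    raise_v_bar = (v_bar + 0.1) * tau - baseline     # small trajectory improvement
    raise_tau   = v_bar * (1.2 * tau) - baseline     # 20% longer survival
    print(f"{name}: raising v̄ adds {raise_v_bar:+.2e}, raising τ adds {raise_tau:+.2e}")
# Raising v̄ helps in both states; raising τ helps only when v̄ > 0,
# which is why risk aversion penalises the τ intervention.
```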
Some views on which increasing τ is more important:
If there’s a “low” upper bound on value, which we expect almost all future civilisations to meet
What do you mean by civilisation? Maybe I’m nitpicking but it seems that even if there is a low upper bound on value for a civilisation, you may still be able to increase v̄ by creating a greater number of civilisations, e.g. by spreading further in the universe or creating more “digital civilisations”.
This is a good point, and it’s worth pointing out that increasing v̄ is always good whereas increasing τ is only good if the future is of positive value. So risk aversion reduces the value of increasing τ relative to increasing v̄, provided we put some probability on a bad future.
Agree this is worth pointing out! I’ve a draft paper that goes into some of this stuff in more detail, and I make this argument.
Another potential argument for trying to improve v̄ is that, plausibly at least, the value lost as a result of the gap between expected-v̄ and best-possible-v̄ is greater than the value lost as a result of the gap between expected-τ and best-possible-τ. So in that sense the problem that expected-v̄ is not as high as it could be is more “important” (in the ITN sense) than the problem that the expected τ is not as high as it could be.
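One way to see the shape of that comparison is to evaluate the two gaps directly, holding the other factor at its expected value; this is a simplified sketch with made-up numbers, and it ignores any correlation between v̄ and τ.

```python
# Comparing the two "importance" gaps with illustrative (assumed) numbers,
# holding the other factor at its expected value in each case.

v_bar_expected, v_bar_best = 0.1, 1.0     # average value per year, on a -1..1 scale
tau_expected,   tau_best   = 0.8e9, 1e9   # expected vs best-possible duration in years

loss_from_v_bar_gap = (v_bar_best - v_bar_expected) * tau_expected  # shortfall in v̄
loss_from_tau_gap   = (tau_best - tau_expected) * v_bar_expected    # shortfall in τ

print(loss_from_v_bar_gap, loss_from_tau_gap)
# With these particular inputs the v̄ gap dominates (7.2e8 vs 2e7), but that is an
# artefact of the assumed numbers; the argument turns on what the true values are.
```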
This naturally lends itself to the idea that there are two main ways of improving the future: increasing v̄ and increasing τ.
I think this is a useful two-factor model, though I don’t quite think of avoiding existential risk just as increasing τ. I think of it more as increasing the probability that the future doesn’t just end now, or at some other intermediate point. In my (unpublished) extensions of this model that I hint at in the chapter, I add a curve representing the probability of surviving to time t (or beyond), and then think of raising this curve as intervening on existential risk.
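One natural way to formalise this, as a sketch under assumed functional forms rather than the actual unpublished model: let S(t) be the probability of surviving to time t and v(t) the value at time t, and take expected value to be the area under S(t)·v(t).

```python
# A sketch of the survival-curve extension under assumed functional forms
# (constant hazard, constant value per year); not the actual unpublished model.

import numpy as np

t = np.linspace(0.0, 1e9, 100_000)      # years
hazard = 1e-9                           # assumed constant per-year existential risk
S = np.exp(-hazard * t)                 # survival curve: P(surviving to time t)
v = np.full_like(t, 0.5)                # assumed constant value per year

dt = t[1] - t[0]
expected_value = np.sum(S * v) * dt     # area swept out under S(t)·v(t)

# Intervening on existential risk = raising S(t) (e.g. lowering the hazard rate);
# a trajectory change raises v(t) instead.
print(expected_value)
```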
One common issue with “existential risk” is that it’s so easy to conflate it with “extinction risk”. It seems that even you end up falling into this use of language. You say: “if there were 20 percentage points of near-term existential risk (so an 80 percent chance of survival)”. But human extinction is not necessary for something to be an existential risk, so 20 percentage points of near-term existential risk doesn’t entail an 80 percent chance of survival.
In this case I meant ‘an 80 percent chance of surviving the threat with our potential intact’, or of ‘our potential surviving the threat’.
While this framework is slightly cleaner with extinction risk instead of existential risk (i.e. the curve may simply stop), it can also work with existential risk: while the curve continues after some existential catastrophes, it usually sweeps out only a small area. This does raise a bigger issue if the existential catastrophe is that we end up with a vastly negative future, as then the curve may continue in very important ways after that point. (There are related challenges pointed out by another commenter, where our impacts on the intrinsic value of other animals may also continue after our extinction.) These are genuine challenges (or limitations) for the current model. One can definitely overcome them, but the question would be the best way to do so while maintaining analytic tractability.