Then perhaps it’s good that I didn’t include my nonstandard definition of x-risk, and we can expect the respondents to be at least somewhat closer to Ord’s definition.
I do find it odd to say that ‘40% of the future’s value is lost’ isn’t an x-catastrophe. In my own experience, it’s much more common that I’ve wanted to draw a clear line between ‘40% of the future is lost’ and ‘0.4% of the future is lost’ than between 90% and 40%. I’d be interested to hear about cases where Toby or others found it illuminating to sharply distinguish 90% from 40%.
I have sometimes wanted to draw a sharp distinction between scenarios where 90% of humans die vs. ones where 40% of humans die; but that’s largely because the risk of subsequent extinction or permanent civilizational collapse seems much higher to me in the 90% case. I don’t currently see a similar discontinuity between ‘90% of the future lost’ and ‘40% of the future lost’, either in the practical upshot of such loss or in the kinds of scenarios that tend to cause it. But I’ve also spent a lot less time than Toby thinking about the full range of x-risk scenarios.
FWIW, I personally don’t necessarily think we should focus more on 90+% loss scenarios than on 1-90% loss scenarios, or even than on <1% loss scenarios (though I’d currently lean against that final focus). I see this as essentially an open question, i.e., the question of which kinds of trajectory changes we should prioritise making more or less likely.
I do think Ord thinks we should focus more on 90+% loss scenarios, though I’m not certain why. I think people like Beckstead and MacAskill are less confident about that. (I’m lazily not including links, but can add them on request.)
I have some messy, long-winded drafts on something like this topic from a year ago that I could share, if anyone is interested.
I was just talking about what people take x-risk to mean, rather than what I believe we should prioritise.
Some reasons I can imagine for focusing on 90+% loss scenarios:
You might just have the empirical view that very few things would cause ‘medium-sized’ losses of the future’s value (a lot of it, but not most of it). It could then be useful to define ‘existential risk’ to exclude medium-sized losses, so that when you talk about ‘x-risks’ people fully appreciate just how bad you think these outcomes would be.
‘Existential’ suggests a threat to the ‘existence’ of humanity, i.e., an outcome about as bad as human extinction. (Certainly a lot of EAs misunderstand x-risk and think it’s equivalent to extinction risk; I did myself when I first joined the community!)
After googling a bit, I now think Nick Bostrom’s conception of existential risk (at least as of 2012) is similar to Toby’s. In https://www.existential-risk.org/concept.html, Nick divides up x-risks into the categories “human extinction, permanent stagnation, flawed realization, and subsequent ruination”, and says that in a “flawed realization”, “humanity reaches technological maturity” but “the amount of value realized is but a small fraction of what could have been achieved”. This only makes sense as a partition of x-risks if all x-risks reduce value to “a small fraction of what could have been achieved” (or reduce the future’s value to zero).
I still think that the definition of x-risk I proposed is a bit more useful, and I think it’s a more natural interpretation of phrasings like “drastically curtail [Earth-originating intelligent life’s] potential” and “reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically”. Perhaps I should use a new term, like hyperastronomical catastrophe, when I want to refer to something like ‘catastrophes that would reduce the total value of the future by 5% or more’.
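To make that last threshold concrete, here’s a minimal sketch (in Python; the 90% cutoff below is only an illustrative proxy for ‘most of the future’s value lost’, not Ord’s or Bostrom’s actual definition) of how the two readings would label a scenario by the fraction of the future’s value it destroys:

```python
def label_scenario(fraction_of_future_value_lost: float) -> dict:
    """Label a scenario under two readings of 'existential catastrophe'.

    The cutoffs are illustrative stand-ins only: the 'most of the future
    lost' reading is proxied as >= 90%, and the proposed 'hyperastronomical
    catastrophe' as >= 5%.
    """
    f = fraction_of_future_value_lost
    return {
        "ord_style_x_catastrophe": f >= 0.90,        # proxy for 'most of the future lost'
        "hyperastronomical_catastrophe": f >= 0.05,  # proposed '5% or more of the future lost'
    }

# The 40%-loss scenario discussed above is caught by the proposed definition
# but not by the >=90% proxy; a 0.4%-loss scenario is caught by neither.
print(label_scenario(0.40))   # {'ord_style_x_catastrophe': False, 'hyperastronomical_catastrophe': True}
print(label_scenario(0.004))  # {'ord_style_x_catastrophe': False, 'hyperastronomical_catastrophe': False}
```

The point is just that the two labels come apart exactly in the 5-90% band this thread is about.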
I agree with everything but your final paragraph.
On the final paragraph, I don’t strongly disagree, but:
To me, “drastically curtail” more naturally means “reduces to much less than 50%” (though that may be biased by my having also heard Ord’s operationalisation of the same term).
At first glance, I feel averse to introducing a new term for something like “reduces by 5-90%”.
I think “non-existential trajectory change”, or just “trajectory change”, maybe does an OK job of capturing what you want to say.
Technically those things would also cover 0.0001% losses or the like. But it seems like you could just say “trajectory change” and then also talk about roughly how much loss you mean?
It seems like if we come up with a new term for the 5-90% bucket, we would also want a new term for other buckets?