I wrote a post last year basically trying to counter misconceptions about Ord's definition and also somewhat operationalise it. Here's the "Conclusion" section:
To summarise:
Existential risks are distinct from existential catastrophes, extinction risks, and global catastrophic risks.
[I'd say that] An existential catastrophe involves the destruction of the vast majority of humanity's potential: not necessarily all of humanity's potential, but more than just some of it.
Existential catastrophes could be "slow-moving" or not apparently "catastrophic"; at least in theory, our potential could be destroyed slowly, or without this being noticed.
That leaves ambiguity as to precisely what fraction is sufficient to count as "the vast majority", but I don't think that's a very important ambiguity; e.g., I doubt people's estimates would change a lot if we set the bar at 75% of potential lost vs 99%.
I think the more important ambiguities are what our "potential" is and what it means to "lose" it. As Ord defines x-risk, that's partly a question of moral philosophy; i.e., it's as if his definition contains a "pointer" to whatever moral theories we have credence in, our credence in them, and our way of aggregating that, rather than baking a moral conclusion in. E.g., his definition deliberately avoids taking a stance on things like whether a future where we stay on Earth forever, or a future with only strange but in some sense "happy" digital minds, or failing to reach such futures, would be an existential catastrophe.
This footnote from my post is also relevant:
I don't believe Bostrom makes explicit what he means by "potential" in his definitions. Ord writes "I'm making a deliberate choice not to define the precise way in which the set of possible futures determines our potential", and then discusses that point. I'll discuss the matter of "potential" more in an upcoming post.
Another approach would be to define existential catastrophes in terms of expected value rather than "potential". That approach is discussed by Cotton-Barratt and Ord (2015).