I wrote a post last year that tries to counter misconceptions about Ord’s definition and also to somewhat operationalise it. Here’s the “Conclusion” section:
To summarise:
Existential risks are distinct from existential catastrophes, extinction risks, and global catastrophic risks.
[I’d say that] an existential catastrophe involves the destruction of the vast majority of humanity’s potential—not necessarily all of humanity’s potential, but more than just some of it.
Existential catastrophes could be “slow-moving” or not apparently “catastrophic”; at least in theory, our potential could be destroyed slowly, or without this being noticed.
That leaves ambiguity as to precisely what fraction is sufficient to count as “the vast majority”, but I don’t think that’s a very important ambiguity—e.g., I doubt people’s estimates would change a lot if we set the bar at 75% of potential lost vs 99%.
I think the more important ambiguities are what our “potential” is and what it means to “lose” it. As Ord defines x-risk, that’s partly a question of moral philosophy—i.e., it’s as if his definition contains a “pointer” to whatever moral theories we have credence in, our credence in them, and our way of aggregating that, rather than baking a moral conclusion in. E.g., his definition deliberately avoids taking a stance on things like whether a future where we stay on Earth forever, a future with only strange but in some sense “happy” digital minds, or a failure to reach such futures would be an existential catastrophe.
This footnote from my post is also relevant:
I don’t believe Bostrom makes explicit what he means by “potential” in his definitions. Ord writes “I’m making a deliberate choice not to define the precise way in which the set of possible futures determines our potential”, and then discusses that point. I’ll discuss the matter of “potential” more in an upcoming post.
Another approach would be to define existential catastrophes in terms of expected value rather than “potential”. That approach is discussed by Cotton-Barratt and Ord (2015).