> Considering the issue of how an engineered pandemic could lead to the extinction of humanity, I identify five separate things that must occur… [emphasis added]
Extinction risk ≠ existential risk
Ord’s estimates are of existential risk from various sources, not extinction risk. So at least part of the difference between your estimate and his for engineered pandemics can be explained by the fact that you’re estimating the risk of a narrower subset of very bad outcomes than he is.
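To make that subset relationship explicit, here’s a minimal sketch, assuming (roughly following The Precipice’s taxonomy) that existential catastrophes divide into extinction, unrecoverable collapse, and unrecoverable dystopia, treated as mutually exclusive:

$$P(\text{existential catastrophe}) = P(\text{extinction}) + P(\text{unrecoverable collapse}) + P(\text{unrecoverable dystopia}) \geq P(\text{extinction})$$

So an extinction-only estimate is a lower bound on the corresponding existential risk estimate.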
I don’t think this explains a lot of the difference, because:
- You already seem to be giving your estimate of the chance that an engineered pandemic brings humanity below the minimum viable population, rather than the chance that an engineered pandemic “directly”/“itself” reduces the population to literally 0.
- I get the impression that Ord is relatively optimistic (compared to many, but not all, other x-risk researchers) about humanity’s chance of recovering from collapse, and about our chance of being OK in the end as long as we avoid seriously extreme outcomes (e.g., he doesn’t seem very concerned about things like a catastrophe leaving us with notably worse values that then persist over time).
But I think the difference in what you’re estimating may explain some of the gap between your estimates.
And in any case, the distinction seems worth noting, because it seems to me reasonable to be less optimistic than Ord about our chances of recovery, or more worried about issues like catastrophes making our values persistently worse. That in turn could be a reason to end up with an existential risk estimate closer to Ord’s than to yours, even if one agrees with your views about the extinction risk itself.
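As a purely illustrative calculation (every number here is hypothetical, chosen for arithmetic convenience rather than taken from Ord or from your post), suppose one accepts a low extinction estimate but is pessimistic about recovery from collapse:

$$P(\text{existential catastrophe}) \approx P(\text{extinction}) + P(\text{collapse}) \cdot P(\text{no recovery} \mid \text{collapse}) = 0.0001 + 0.01 \times 0.3 = 0.0031$$

(ignoring value lock-in for simplicity). That’s roughly 30× the extinction-only figure, so the same extinction estimate is compatible with a much larger existential risk estimate.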