If we assume that natural risks are negligible, I would guess that this reduces to something like the question of what probability you put on extinction or existential catastrophe due to anthropogenic biorisk. Since biorisk is likely to leave much of the rest of Earthly life unscathed, it also hinges on what probability you assign to something like “human-level intelligence” evolving anew. I find it reasonably plausible that a cumulative technological culture of the kind that characterizes human beings is unlikely to be a convergent evolutionary outcome (for the reasons given in Powell, Contingency and Convergence), and thus that if human beings are wiped out, there is very little probability of similar traits emerging in other lineages. So human extinction due to a bioengineered pandemic strikes me as maybe the key scenario for the extinction of Earth-originating intelligent life. Does that seem plausible?
I would add the (in my view far more likely) possibility of Yudkowskian* paperclipping via non-sentient AI, which, given our currently incredibly low level of control over AI systems and the fact that we don’t know how to create sentience, seems like the most likely default outcome.
*) Specifically, the view that paperclipping occurs by default from any complex non-satiable implicit utility function, rather than the Bostromian paperclipping risk of accidentally giving a smart AI a dumb goal.