1. Artificial general intelligence, or an AI which is able to outperform humans in essentially all human activities, is developed within the next century.
2. This artificial intelligence acquires the power to usurp humanity and achieve a position of dominance on Earth.
3. This artificial intelligence has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth.
4. This artificial intelligence either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significantly diminish our long-term potential.
I think one problem here is phrasing 2–4 in the singular (“This artificial intelligence”), when the plural would be more appropriate. If the technological means are available, it is likely that many actors will create powerful AI systems. If the offense-defense balance is unfavorable (i.e., it is much easier for the AGI systems available at a given time to do harm than to protect against harm), then a catastrophic event might be triggered by just one of very many AGI systems becoming unaligned (the ‘unilateralist’s curse’).
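To illustrate why “at least one” matters, here is a minimal sketch of how the chance that at least one of N systems becomes unaligned grows with N, assuming independence across systems; the per-system probability p = 0.01 and the counts N are purely hypothetical values chosen to show the dynamic, not estimates:

```python
# Sketch: probability that AT LEAST ONE of N independent AGI systems becomes
# unaligned, given a per-system misalignment probability p.
# Both p and N are hypothetical, chosen only to illustrate the compounding.

def p_at_least_one_unaligned(p: float, n: int) -> float:
    """P(at least one of n independent systems is unaligned) = 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 1000):
    print(f"N = {n:4d}: {p_at_least_one_unaligned(0.01, n):.5f}")
# N =    1: 0.01000
# N =   10: 0.09562
# N =  100: 0.63397
# N = 1000: 0.99996
```

Even a small per-system misalignment probability compounds quickly with many actors, which is why the singular phrasing understates the risk when defense is hard.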
So I would rephrase your estimates like this:
1. Artificial general intelligence (AGI), or an AI which is able to outperform humans in essentially all human activities, is developed within the next century.
2. AT LEAST ONE of a large number of AGI systems acquires the capability to usurp humanity and achieve a position of dominance on Earth.
3. AT LEAST ONE of those AGI systems has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth (unaligned AGI).
4. The offense-defense balance between AGI systems available at the time is unfavorable (i.e., defense against unaligned AGI by benevolent AGI is difficult).
5. The unaligned AGI either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significantly diminish our long-term potential.
My own estimates when phrasing it this way would be 0.99 * 0.99 * 0.99 * 0.5 * 0.1 ≈ 0.049, i.e., roughly a 5% risk, with high uncertainty.
This would make the probability of an unfavorable offense-defense balance (here estimated at 0.5) one of the major determining parameters in my estimate.
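As a quick check of the arithmetic, and to show how strongly the overall figure depends on that offense-defense term, here is a small sketch; the alternative values 0.2 and 0.8 are hypothetical, chosen only to illustrate the sensitivity:

```python
# Reproduce the chained estimate and vary the offense-defense term.
# The alternative values (0.2, 0.8) are hypothetical, for sensitivity only.

def chained_risk(p_agi=0.99, p_capability=0.99, p_motivation=0.99,
                 p_offense_favored=0.5, p_catastrophe=0.1):
    """Product of the five step probabilities from the list above."""
    return (p_agi * p_capability * p_motivation
            * p_offense_favored * p_catastrophe)

print(f"baseline: {chained_risk():.3f}")  # 0.049, i.e. roughly 5%
for p in (0.2, 0.5, 0.8):
    print(f"offense-defense term = {p}: {chained_risk(p_offense_favored=p):.3f}")
# 0.2 -> 0.019, 0.5 -> 0.049, 0.8 -> 0.078: the final estimate scales
# linearly with this single parameter, which is why it dominates the result.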