Models of catastrophic risks can be conjunctive or disjunctive. A conjunctive risk model is one in which the disaster is caused by the co-occurrence of multiple conditions ($A \wedge B \wedge \dots$). In a conjunctive model, the probability of the disaster is less than or equal to the probability of each individual condition. By contrast, a disjunctive risk model is one in which the disaster occurs as a result of any one of several conditions holding ($A \vee B \vee \dots$). In a disjunctive model, the probability of the disaster is greater than or equal to the probability of each individual condition.
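As a minimal numerical sketch (illustrative, not from the entry itself), assume three conditions with hypothetical probabilities and, for the sake of computing exact values, independence; the inequalities above hold regardless of independence:

```python
# Hypothetical probabilities of three independent conditions (illustrative only).
probs = [0.8, 0.5, 0.3]

# Conjunctive model: disaster requires ALL conditions, so probabilities multiply.
p_conjunctive = 1.0
for p in probs:
    p_conjunctive *= p          # 0.8 * 0.5 * 0.3 = 0.12

# Disjunctive model: ANY condition suffices; take the complement of "none holds".
p_none = 1.0
for p in probs:
    p_none *= 1.0 - p           # 0.2 * 0.5 * 0.7 = 0.07
p_disjunctive = 1.0 - p_none    # 0.93

print(f"conjunctive: {p_conjunctive:.2f} <= min = {min(probs)}")  # 0.12 <= 0.3
print(f"disjunctive: {p_disjunctive:.2f} >= max = {max(probs)}")  # 0.93 >= 0.8
```

The conjunctive probability (0.12) falls below the least probable condition, while the disjunctive probability (0.93) exceeds the most probable one.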
Examples of conjunctive and disjunctive models of AI risk:
Joseph Carlsmith's report models existential risk from power-seeking AI conjunctively, i.e. as the intersection of six conditions, all of which must hold for the existential catastrophe to occur.[1]
By contrast, Nate Soares models AGI risk disjunctively, i.e. as the union of multiple conditions, any of which can cause existential catastrophe.[2]
Both types of model are simplifications. In reality, a disaster can be caused by multiple conditions that interact both conjunctively and disjunctively. For example, a disaster could occur if conditions $A$ and $B$ are both true, or if condition $C$ is true: $(A \wedge B) \vee C$.
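A sketch of such a mixed model, again with hypothetical probabilities and assuming the conditions are independent:

```python
# Mixed model (illustrative): disaster occurs if (A and B) hold, or if C holds.
p_a, p_b, p_c = 0.8, 0.5, 0.3

p_ab = p_a * p_b                               # conjunctive part: 0.40
p_disaster = 1.0 - (1.0 - p_ab) * (1.0 - p_c)  # OR with C: 1 - 0.6 * 0.7 = 0.58

print(f"P((A and B) or C) = {p_disaster:.2f}")  # 0.58
```

The result (0.58) sits between what a purely conjunctive reading (0.12) and a purely disjunctive reading (0.93) of the same three conditions would give.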
Further reading
Soares, Nate (2021) Comments on Carlsmith’s “Is power-seeking AI an existential risk?”, LessWrong, November 13.
Related entries
compound existential risk | existential risk | existential risk factor | global catastrophic risk | models | expected value | forecasting | impact assessment | model uncertainty
[1] Carlsmith, Joseph (2021) Draft report on existential risk from power-seeking AI, Effective Altruism Forum, April 28.
[2] Soares, Nate (2022) AGI ruin scenarios are likely (and disjunctive), Effective Altruism Forum, July 27.