This is an interesting point, thanks! I tend not to distinguish between “hazards” and “risk factors” because the distinction comes down to whether something causes an existential catastrophe directly or indirectly, and many hazards do both. For example:
An engineered pandemic could wipe out humanity either directly or indirectly by causing famine, war, etc.
Misaligned AI is usually thought of as a direct x-risk, but it can also be thought of as a risk factor, because it could use its knowledge of other hazards to drive humanity extinct as efficiently as possible (e.g. by infecting all humans with botox-producing nanoparticles).
Mathematically, you can express the probability of an existential catastrophe given a risk factor by summing, over the “direct” hazards, the probability that the risk factor brings about each hazard times the probability that the hazard then causes the catastrophe (treating the hazards as roughly mutually exclusive, and abbreviating “great power war” as “war”, “engineered pandemic” as “pandemic”, and “transformative AI” as “TAI”):

$$\Pr(\text{extinction} \mid \text{war}) = \Pr(\text{pandemic} \mid \text{war})\,\Pr(\text{extinction} \mid \text{war}, \text{pandemic}) + \Pr(\text{TAI} \mid \text{war})\,\Pr(\text{extinction} \mid \text{war}, \text{TAI}) + \dots$$
You can do the same thing with direct risks. All that matters for prioritization is the overall probability of catastrophe given some combination of risk factors.
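To make the decomposition concrete, here is a minimal sketch in Python. None of these numbers come from the comment above; they are made-up placeholders chosen only to show the arithmetic of multiplying and summing the terms in the formula.

```python
# Minimal sketch of the total-probability decomposition above.
# All probabilities here are hypothetical, purely illustrative values.

# Pr(hazard | great power war): how much the risk factor elevates each "direct" hazard
pr_hazard_given_war = {
    "engineered pandemic": 0.05,
    "transformative AI": 0.02,
}

# Pr(extinction | great power war, hazard): chance the hazard then causes the catastrophe
pr_extinction_given_war_and_hazard = {
    "engineered pandemic": 0.10,
    "transformative AI": 0.30,
}

# Pr(extinction | great power war) ~ sum over hazards of the product of the two terms
pr_extinction_given_war = sum(
    pr_hazard_given_war[h] * pr_extinction_given_war_and_hazard[h]
    for h in pr_hazard_given_war
)

print(f"Pr(extinction | great power war) ~ {pr_extinction_given_war:.3f}")
```

With these placeholder inputs the decomposition gives roughly 0.011, i.e. the risk factor matters only through how much it raises each direct hazard and how deadly that hazard is once raised.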