Great power conflict is generally considered an existential risk factor, rather than an existential risk per se – it increases the chance of existential risks like bioengineered pandemics, nuclear war, and transformative AI, as well as the lock-in of bad values (Modelling great power conflict as an existential risk factor; The Precipice, chapter 7).
I can define a new existential risk factor that could be as great as all existential risks combined – for example, the fact that our society and the general populace do not sufficiently prioritize existential risks. So no, I don’t think TAI is greater than all possible existential risk factors. But addressing such a “risk factor” would involve thinking a lot about its impact as mediated through more direct existential risks like TAI, and if TAI is the main one, then that would be a primary focus.
This passage from The Precipice may be helpful:
the threat of great-power war may (indirectly) pose a significant amount of existential risk. For example, it seems that the bulk of the existential risk last century was driven by the threat of great-power war. Consider your own estimate of how much existential risk there is over the next hundred years. How much of this would disappear if you knew that the great powers would not go to war with each other over that time? It is impossible to be precise, but I’d estimate an appreciable fraction would disappear—something like a tenth of the existential risk over that time. Since I think the existential risk over the next hundred years is about one in six, I am estimating that great power war effectively poses more than a percentage point of existential risk over the next century. This makes it a larger contributor to total existential risk than most of the specific risks we have examined.
While you should feel free to disagree with my particular estimates, I think a safe case can be made that the contribution of great-power war to existential risk is larger than the contribution of all natural risks combined. So a young person choosing their career, a philanthropist choosing their cause or a government looking to make a safer world may do better to focus on great-power war than on detecting asteroids or comets.
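To make the arithmetic in the quoted passage concrete, here is a minimal sketch using only the figures Ord gives (these are his rough estimates, not precise values):

```python
# Ord's figures from the quoted passage:
total_risk = 1 / 6          # his estimate of existential risk over the next century
fraction_from_war = 1 / 10  # fraction that would disappear absent great-power war

# Risk effectively attributable to great-power war
risk_from_war = total_risk * fraction_from_war
print(f"{risk_from_war:.1%}")  # ~1.7%, i.e. "more than a percentage point"
```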
This is an interesting point, thanks! I tend not to distinguish between “hazards” and “risk factors”, because the distinction between them is whether they cause an existential catastrophe directly or indirectly, and many hazards do both. For example:
An engineered pandemic could wipe out humanity either directly, or indirectly by causing famine, war, etc.
Misaligned AI is usually thought of as a direct x-risk, but it can also be thought of as a risk factor, because it would use its knowledge of other hazards to drive humanity extinct as efficiently as possible (e.g. by infecting all humans with botox-producing nanoparticles).
Mathematically, you can express the probability of an existential catastrophe given a risk factor by summing, over each “direct” hazard whose probability it elevates, the probability of the catastrophe arriving via that hazard (treating the pathways as mutually exclusive):
Pr(extinction ∣ great power war) = Pr(extinction via engineered pandemic ∣ great power war) + Pr(extinction via transformative AI ∣ great power war) + …
You can do the same thing with direct risks. All that matters for prioritization is the overall probability of catastrophe given some combination of risk factors.
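As a toy illustration of this decomposition, here is a minimal sketch in Python. All of the probabilities are invented purely for illustration, and the extinction pathways are assumed to be mutually exclusive:

```python
# Each entry is Pr(extinction via that hazard | condition), with
# made-up numbers: (without great-power war, with great-power war).
P_via_hazard = {
    "engineered pandemic": (0.010, 0.030),
    "nuclear war":         (0.002, 0.015),
    "transformative AI":   (0.050, 0.070),
}

# Assuming mutually exclusive pathways, the conditional probability of
# extinction is just the sum over pathways.
p_no_war = sum(p for p, _ in P_via_hazard.values())
p_war = sum(p for _, p in P_via_hazard.values())

print(f"Pr(extinction | no war)  = {p_no_war:.3f}")
print(f"Pr(extinction | war)     = {p_war:.3f}")
print(f"risk attributable to war = {p_war - p_no_war:.3f}")
```

The point of the sketch is only that the risk factor (war) matters through how much it elevates each pathway’s term, which is why its total contribution can exceed that of any single “direct” hazard.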