I agree that people should not focus on nuclear risk as a direct extinction risk (and have long argued this), see Toby's nuke extinction estimates as too high, and would assess measures to reduce damage from nuclear winter to developing neutral countries mainly in GiveWell-style or ordinary CBA terms, while considerations about future generations would favor focus on AI, and to a lesser extent bio.
Thanks for mentioning these points. Would you also rely on ordinary CBAs to assess interventions to decrease the direct damage of nuclear war? I think this would still make sense.
> So the "can nuclear war with current arsenals cause extinction" question misses most of the existential risk from nuclear weapons, which is indirect in contributing to other risks that could cause extinction or lock-in of permanent awful regimes.
At the same time, doesn't the nearterm extinction risk from AI also miss most of the existential risk from AI? I guess you are implying that the ratio between nearterm extinction risk and total existential risk is lower for nuclear war than for AI.
Related to your point above, I say that:
> Interventions to decrease nuclear risk have indirect effects which will tend to make their cost-effectiveness more similar to that of the best interventions to decrease AI risk. I guess the best marginal grants to decrease AI risk are much less than 59.8 M times as cost-effective as those to decrease nuclear risk. At the same time:
>
> I believe it would be a surprising and suspicious convergence if the best interventions to decrease nuclear risk based on the more direct effects of nuclear war also happened to be the best with respect to the more indirect effects. I would argue directly optimising the indirect effects tends to be better.
You dismiss that ["effects on our civilization beyond casualties and local damage of a nuclear war"] here:
> Then discussions move to more poorly understood aspects of the risk (e.g. how the distribution of values after a nuclear war affects the longterm values of transformative AI).
Note I mention right after this that:
> In any case, I recognise it is a crucial consideration whether nearterm annual risk of human extinction from nuclear war is a good proxy for the importance of decreasing nuclear risk from a longtermist perspective. I would agree further research on this is really valuable.
You say that:
> I don't think it's a huge stretch to say that a war with Russia largely destroying the NATO economies (and their semiconductor supply chains), leaving the PRC to dominate the world system and the onrushing creation of powerful AGI, makes a big difference to the chance of locked-in permanent totalitarianism and the values of one dictator running roughshod over the low-hanging fruit of many others' values, one very large compared to these extinction effects. It also doesn't require bets on extreme and plausibly exaggerated nuclear winter magnitude.
I agree these are relevant considerations. On the other hand:
- The US may want to attack China in order not to relinquish its position as global hegemon.
- I feel like there has been little research on questions like:
  - How much it would matter if powerful AI was developed in the West instead of China (or, more broadly, in a democracy instead of an autocracy).
  - The likelihood of lock-in.
On the last point, your piece is a great contribution, but you say:
> Note that we're mostly making claims about feasibility as opposed to likelihood.
However, the likelihood of lock-in is crucial for assessing the strength of your points. I would not be surprised if the chance of an AI lock-in due to a nuclear war were less than 10^-8 this century.
In terms of nuclear war indirectly causing extinction:
at least ignoring anthropics, I believe the probability of not fully recovering would only be 0.0513 % (= e^(-10^9/(132*10^6)); see the sketch after this list), assuming:
- An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time to go from i) human extinction due to such an asteroid to ii) evolving a species as capable as humans at steering the future. I supposed this on the basis that:
  - An exponential distribution with a mean of 66 M years describes the time between extinction threats as well as that to go from i) to ii) conditional on no extinction threats.
  - Given the above, extinction and full recovery are equally likely. So there is a 50 % chance of full recovery, and one should expect the time until full recovery to be 2 times (= 1/0.50) as long as that conditional on no extinction threats.
- The above evolution could take place in the next 1 billion years, during which the Earth will remain habitable.
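To make the arithmetic explicit, here is a minimal Python sketch of the closed-form calculation above. It only restates the assumptions in the list; the variable names are mine and purely illustrative.

```python
import math

# Assumptions as stated in the list above (not a claim that the underlying model is right):
# - extinction threats arrive with a mean spacing of 66 M years;
# - conditional on no threats, evolving a species as capable as humans also has a
#   mean time of 66 M years, so full recovery happens with 50 % chance and the
#   overall mean time until full recovery is 66 M / 0.5 = 132 M years;
# - the Earth remains habitable for roughly another 1 billion years.
MEAN_RECOVERY_YEARS = 2 * 66e6   # 132 M years
HABITABLE_WINDOW_YEARS = 1e9     # 1 billion years

# P(no full recovery) = P(exponential recovery time with mean 132 M years exceeds 1 billion years)
p_no_full_recovery = math.exp(-HABITABLE_WINDOW_YEARS / MEAN_RECOVERY_YEARS)
print(f"P(no full recovery) = {p_no_full_recovery:.4%}")  # roughly 0.0513 %
```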
In contrast, if powerful AI caused extinction, control over the future would arguably permanently be lost.
> Similarly, the chance of a huge hidden state bioweapons program having its full arsenal released simultaneously (including doomsday pandemic weapons) skyrockets in an all-out WMD war in obvious ways.
Is there any evidence for this?
> this applied less to measures to reduce damage to nonbelligerent states
Makes sense. If GiveWell's top charities are not a cost-effective way of improving the longterm future, then decreasing starvation in low income countries in a nuclear winter may be cost-effective in terms of saving lives, but has seemingly negligible impact on the longterm future too. Such countries just have too little influence on transformative technologies.
Rapid fire:
- Nearterm extinction risk from AI is wildly closer to total AI x-risk than the nuclear analog.
- My guess is that nuclear war interventions powerful enough to be world-beating for future generations would look tremendous in averting current human deaths, and most of the WTP should come from that if one has a lot of WTP related to each of those worldviews.
- Re suspicious convergence, what do you want to argue with here? I've favored allocating to VOI and low-hanging fruit on nuclear risk not leveraging AI-related things at less than 1% of my marginal AI allocation in the past (because of larger, more likely nearterm risks from AI with more tractability and neglectedness); recent AI developments tend to push that down, but might surface something in the future that is really leveraged on avoiding nuclear war.
- I agree not much has been published in journals on the impact of AI being developed in dictatorships.
- Re lock-in, I do not think it's remote (my views are different from what that paper limited itself to) for a CCP-led AGI future.
> Re suspicious convergence, what do you want to argue with here?
Sorry for the lack of clarity. Some thoughts:
- The 15.3 M$ that grantmakers aligned with effective altruism have influenced with the aim of decreasing nuclear risk seems mostly optimised to decrease the nearterm damage caused by nuclear war (especially the spending on nuclear winter), not the more longterm existential risk linked to permanent global totalitarianism.
- As far as I know, there has been little research on how a minor AI catastrophe would influence AI existential risk (although wars over Taiwan have been wargamed). Looking into this seems more relevant than investigating how a non-AI catastrophe would influence AI risk.
- The risk from permanent global totalitarianism is still poorly understood, so research on this and how to mitigate it seems more valuable than efforts focussing explicitly on nuclear war. There might well be interventions to increase democracy levels in China which are more effective at decreasing that risk than interventions aimed at ensuring that China does not become the sole global hegemon after a nuclear war.
- I guess most of the risk from permanent global totalitarianism does not involve any major catastrophes. As a data point, the Metaculus community predicts an AI dystopia is 5 (= 0.19/0.037) times as likely as a paperclipalypse by 2050.
> I agree not much has been published in journals on the impact of AI being developed in dictatorships.
More broadly, which pieces would you recommend reading on this topic? I am not aware of substantial blogposts, although I have seen the concern raised many times.