Rapid fire:
Nearterm extinction risk from AI is wildly closer to total AI x-risk than nearterm extinction risk from nuclear war is to total nuclear x-risk.
My guess is that nuclear war interventions powerful enough to be world-beating for future generations would also look tremendous at averting current human deaths, and most of the willingness to pay (WTP) for them should come from that if one has a lot of WTP associated with each of those worldviews.
Re suspicious convergence, what do you want to argue with here? In the past I have favored allocating less than 1% of my marginal AI allocation to value of information (VOI) and low-hanging fruit on nuclear risk that does not leverage AI-related things (because the nearterm risks from AI are larger, more likely, and more tractable and neglected); recent AI developments tend to push that share down, but might surface something in the future that is really leveraged on avoiding nuclear war.
I agree that not much has been published in journals on the impact of AI being developed in dictatorships.
Re lock-in, I do not think it is remote for a CCP-led AGI future (my views differ from what that paper limited itself to).
Thanks for following up!
Sorry for the lack of clarity on the suspicious convergence point. Some thoughts:
The 15.3 M$ that grantmakers aligned with effective altruism have influenced with the aim of decreasing nuclear risk seems mostly optimised to decrease the nearterm damage caused by nuclear war (especially the spending on nuclear winter), rather than the more longterm existential risk linked to permanent global totalitarianism.
As far as I know, there has been little research on how a minor AI catastrophe would influence AI existential risk (although wars over Taiwan have been wargamed). Looking into this seems more relevant than investigating how a non-AI catastrophe would influence AI risk.
The risk from permanent global totalitarianism is still poorly understood, so research on this risk and on how to mitigate it seems more valuable than efforts focussing explicitly on nuclear war. There might well be interventions to increase democracy levels in China which are more effective at decreasing that risk than interventions aimed at ensuring that China does not become the sole global hegemon after a nuclear war.
I guess most of the risk from permanent global totalitarianism does not involve any major catastrophes. As a data point, the Metaculus community predicts an AI dystopia is about 5 (= 0.19/0.037) times as likely as a paperclipalypse by 2050.
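For concreteness, a minimal sketch of that comparison, assuming the quoted figures are the community probabilities of an AI dystopia (19%) and a paperclipalypse (3.7%), both by 2050:

```python
# Quick check of the ratio cited above, using the two Metaculus community
# forecasts as assumed inputs.
p_ai_dystopia = 0.19       # assumed community probability of an AI dystopia by 2050
p_paperclipalypse = 0.037  # assumed community probability of a paperclipalypse by 2050

ratio = p_ai_dystopia / p_paperclipalypse
print(f"AI dystopia is {ratio:.1f} times as likely as a paperclipalypse by 2050")
# -> roughly 5.1, i.e. the factor of 5 quoted above.
```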
More broadly, which pieces would you recommend reading on the impact of AI being developed in dictatorships? I am not aware of substantial blogposts, although I have seen the concern raised many times.