> Re suspicious convergence, what do you want to argue with here?
Thanks for following up! Sorry for the lack of clarity. Some thoughts:
- The 15.3 M$ that grantmakers aligned with effective altruism have directed towards decreasing nuclear risk seems mostly optimised to decrease the nearterm damage caused by nuclear war (especially the spending on nuclear winter), not the longer-term existential risk linked to permanent global totalitarianism.
- As far as I know, there has been little research on how a minor AI catastrophe would influence AI existential risk (although wars over Taiwan have been wargamed). Looking into this seems more relevant than investigating how a non-AI catastrophe would influence AI risk.
- The risk from permanent global totalitarianism is still poorly understood, so research on it and how to mitigate it seems more valuable than efforts focussing explicitly on nuclear war. There may well be interventions to increase democracy levels in China that are more effective at decreasing that risk than interventions aimed at ensuring China does not become the sole global hegemon after a nuclear war.
- I guess most of the risk from permanent global totalitarianism does not involve any major catastrophes. As a data point, the Metaculus community predicts an AI dystopia is about 5 (= 0.19/0.037) times as likely as a paperclipalypse by 2050 (see the quick check below).
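For transparency, here is the arithmetic behind that ratio as a minimal sketch; the two probabilities are simply the Metaculus community predictions quoted above, hardcoded rather than fetched live:

```python
# Metaculus community predictions quoted above (hardcoded, not fetched live):
# probability of each outcome by 2050.
p_ai_dystopia = 0.19       # "AI dystopia"
p_paperclipalypse = 0.037  # "paperclipalypse"

# How many times as likely an AI dystopia is, per these predictions.
ratio = p_ai_dystopia / p_paperclipalypse
print(f"AI dystopia is {ratio:.1f} times as likely")  # prints 5.1
```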
I agree that not much has been published in journals on the impact of AI being developed in dictatorships.
More broadly, which pieces would you recommend reading on this topic? I am not aware of substantial blogposts, although I have seen the concern raised many times.