AI Safety Needs To Get Serious About Chinese Political Culture
I worry that Leopold Aschenbrenner’s “China will use AI to install a global dystopia” take is based on crudely analogising the CCP to the USSR, or perhaps to American cultural imperialism / expansionism, and isn’t based on even a superficially informed analysis of either how China is actually thinking about AI or what China’s long-term political goals and values are.
I’m no expert myself, but my impression is that China is much more interested in its own national security and in its own ideological notions of the ethnic Chinese people and Chinese territory. Beyond e.g. Taiwan, it has little interest in global domination, except to the extent that expansion prevents it from being threatened by other expansionist powers.
This perspective, or any of several other heuristics and judgements, could substantially change how we think about whether China would race for AGI, and whether it would be receptive to the argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but it has a lot to lose too, just like the West.
Currently, this is a pretty superficial impression of mine, so I don’t think it would be fair to write an article yet. I need to do my homework first:
I need to actually read Leopold’s own writing about this, instead of forming impressions from summaries of it,
I’ve been recommended to look into what CSET and Brian Tse have written about China,
Perhaps there are other things I should hear about this, feel free to make recommendations.
Alternatively, as always, I’d be really happy for someone who’s already done the homework to write about this, particularly anyone with expertise in Chinese political culture or international relations. Even if I write the article, all it will really amount to is an appeal to listen to experts in the field, or for one or more of those experts to step forward and give us some principles for thinking clearly and accurately about this topic.
I think having even, like, undergrad-textbook-level mainstream summaries of China’s political mission and beliefs posted on the Forum could end up being really valuable, if it puts those ideas more into the cultural and intellectual background of AI safety people in general.
This seems like a really crucial question that inevitably takes a central role in our overall strategy, and Leopold’s take isn’t the only one I’m worried about. People are already raising national security concerns about China with the US government in an effort to win e.g. stronger cybersecurity controls or export controls on AI. That’s a noble end, but if the China angle becomes inappropriately charged, we risk causing more harm than good.
(For the avoidance of doubt, I think the Chinese government is inhumane, and that all undemocratic governments are fundamentally illegitimate. I think exporting democracy and freedom to the world is a good thing, so I’m not against cultural expansionism per se. Nevertheless, assuming China wants to do it when they don’t could be a really serious mistake.)
I recommend the China sections of this recent CNAS report as a starting point for discussion (it’s definitely from a relatively hawkish perspective, and I don’t think of myself as having enough expertise to endorse it, but I did move in this direction after reading).
From the executive summary:
Taken together, perhaps the most underappreciated feature of emerging catastrophic AI risks from this exploration is the outsized likelihood of AI catastrophes originating from China. There, a combination of the Chinese Communist Party’s efforts to accelerate AI development, its track record of authoritarian crisis mismanagement, and its censorship of information on accidents all make catastrophic risks related to AI more acute.
From the “Deficient Safety Cultures” section:
While such an analysis is of relevance in a range of industry- and application-specific cultures, China’s AI sector is particularly worthy of attention and uniquely predisposed to exacerbate catastrophic AI risks [footnote]. China’s funding incentives around scientific and technological advancement generally lend themselves to risky approaches to new technologies, and AI leaders in China have long prided themselves on their government’s large appetite for risk—even if there are more recent signs of some budding AI safety consciousness in the country [footnote, footnote, footnote]. China’s society is the most optimistic in the world on the benefits and risks of AI technology, according to a 2022 survey by the multinational market research firm Institut Public de Sondage d’Opinion Secteur (Ipsos), despite the nation’s history of grisly industrial accidents and mismanaged crises—not least its handling of COVID-19 [footnote, footnote, footnote, footnote]. The government’s sprint to lead the world in AI by 2030 has unnerving resonances with prior grand, government-led attempts to accelerate industries that have ended in tragedy, as in the Great Leap Forward, the commercial satellite launch industry, and a variety of Belt and Road infrastructure projects [footnote, footnote, footnote]. China’s recent track record in other high-tech sectors, including space and biotech, also suggests a much greater likelihood of catastrophic outcomes [footnote, footnote, footnote, footnote, footnote].
From the “Further Considerations” section:
In addition to having to grapple with all the same safety challenges that other AI ecosystems must address, China’s broader tech culture is prone to crisis due to its government’s chronic mismanagement of disasters, censorship of information on accidents, and heavy-handed efforts to force technological breakthroughs. In AI, these dynamics are even more pronounced, buoyed by remarkably optimistic public perceptions of the technology and Beijing’s gigantic strategic gamble on boosting its AI sector to international preeminence. And while both the United States and China must reckon with the safety challenges that emerge from interstate technology competitions, historically, nations that perceive themselves to be slightly behind competitors are willing to absorb the greatest risks to catch up in tech races [footnote]. Thus, even while the United States’ AI edge over China may be a strategic advantage, Beijing’s self-perceived disadvantage could nonetheless exacerbate the overall risks of an AI catastrophe.
Also, unless one understands the Chinese situation, one should avoid moves that risk escalating a race, like making loud and confident predictions that a race is the only way.
I think it’s better for people to openly express their models that they see a race as the only option. I think it’s the kind of thing that can then lead to arguments and discourse about whether that’s true or not. I think a huge amount of race dynamics stem from people being worried that other people might or might not be intending to race, or are hiding their intention to race, and so I am generally strongly in favor of transparency.
For those who are not deep China nerds but want a somewhat approachable lowdown, I can highly recommend Bill Bishop’s newsletter Sinocism (enough free issues to be worthwhile) and his podcast Sharp China (the latter is a bit more approachable but requires a subscription to Stratechery).
I’m not a China expert so I won’t make strong claims, but I generally agree that we should not treat China as an unknowable, evil adversary who has exactly the same imperial desires as ‘the west’ or past non-Western regimes. I think it was irresponsible of Aschenbrenner to assume this without better research & understanding, since so much of his argument relies on China behaving in a particular way.
I share your concerns. I spent a decade in China, and I can’t count the number of times I’ve seen people confidently share low-quality or inaccurate perspectives on China. I wish I had a better solution than “assign everyone to read these [NUMBER] different books.”
Even best-selling books and articles by well-respected writers sometimes contain misleading and inaccurate narratives. But it is hard to parse them critically and to provide a counter-argument without both the appropriate background[1] and a large number of hours dedicated to the specific effort.
I would be surprised if someone is able to do so without at least an undergraduate background in something like Chinese studies/sinology (or the equivalent, such as a large amount of self-study and independent exploration).
This reading list is an excellent place to start for getting a sense of China x AI (though it doesn’t have that much about China’s political objectives in general).
Note that, to have a full analysis here, you should also understand a) how the US government sees China and why, and b) how China sees the US and why.
Very good point. I hypothesize that the opaque nature of Chinese policy-making (at the national level, setting aside lower-level government) is a key difficulty for anyone outside the upper levels of the Chinese government.
Fair, I’m grumpy about Leopold’s position, but my comment above wasn’t careful to target the real problems and doesn’t give a good general rule here.