AI Safety Needs To Get Serious About Chinese Political Culture
I worry that Leopold Aschenbrenner's "China will use AI to install a global dystopia" take is based on crudely analogising the CCP to the USSR, or perhaps even to American cultural imperialism / expansionism, and isn't based on an even superficially informed analysis of either how China is currently actually thinking about AI, or what China's long-term political goals or values are.
I'm no more of an expert myself, but my impression is that China is much more interested in its own national security interests and its own ideological notions of the ethnic Chinese people and Chinese territory, so that beyond e.g. Taiwan there isn't an interest in global domination except to the extent that it prevents them from being threatened by other expansionist powers.
This, or a number of other heuristics / judgements / perspectives, could substantially change how we think about whether China would race for AGI, and/or be receptive to an argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but they have a lot to lose too, just like the West.
Currently, this is a pretty superficial impression of mine, so I don't think it would be fair to write an article yet. I need to do my homework first:
I need to actually read Leopold's own writing about this, instead of forming impressions based on summaries of it,
I've been recommended to look into what CSET and Brian Tse have written about China,
Perhaps there are other things I should hear about this; feel free to make recommendations.
Alternatively, as always, I'd be really happy for someone who's already done the homework to write about this, particularly anyone with specific expertise in Chinese political culture or international relations. Even if I write the article, all it will really be able to be is an appeal to listen to experts in the field, or for one or more of those experts to step forward and give us some principles to spread for thinking clearly and accurately about this topic.
I think having even, like, undergrad-level mainstream textbook summaries of China's political mission and beliefs posted on the Forum could end up being really valuable, if it puts those ideas more into the cultural and intellectual background of AI safety people in general.
This seems like a really crucial question that inevitably takes a central role in our overall strategy, and Leopold's take isn't the only one I'm worried about. I think people are already pushing national security concerns about China to the US Government in an effort to secure e.g. stronger cybersecurity controls or export controls on AI. I think that's a noble end, but if the China angle becomes inappropriately charged we're really risking causing more harm than good.
(For the avoidance of doubt, I think the Chinese government is inhumane, and that all undemocratic governments are fundamentally illegitimate. I think exporting democracy and freedom to the world is a good thing, so I'm not against cultural expansionism per se. Nevertheless, assuming China wants to do it when they don't could be a really serious mistake.)
I recommend the China sections of this recent CNAS report as a starting point for discussion (it's definitely from a relatively hawkish perspective, and I don't think of myself as having enough expertise to endorse it, but I did move in this direction after reading it).
From the executive summary:
Taken together, perhaps the most underappreciated feature of emerging catastrophic AI risks from this exploration is the outsized likelihood of AI catastrophes originating from China. There, a combination of the Chinese Communist Party's efforts to accelerate AI development, its track record of authoritarian crisis mismanagement, and its censorship of information on accidents all make catastrophic risks related to AI more acute.
From the "Deficient Safety Cultures" section:
While such an analysis is of relevance in a range of industry- and application-specific cultures, China's AI sector is particularly worthy of attention and uniquely predisposed to exacerbate catastrophic AI risks [footnote]. China's funding incentives around scientific and technological advancement generally lend themselves to risky approaches to new technologies, and AI leaders in China have long prided themselves on their government's large appetite for risk, even if there are more recent signs of some budding AI safety consciousness in the country [footnote, footnote, footnote]. China's society is the most optimistic in the world on the benefits and risks of AI technology, according to a 2022 survey by the multinational market research firm Institut Public de Sondage d'Opinion Secteur (Ipsos), despite the nation's history of grisly industrial accidents and mismanaged crises, not least its handling of COVID-19 [footnote, footnote, footnote, footnote]. The government's sprint to lead the world in AI by 2030 has unnerving resonances with prior grand, government-led attempts to accelerate industries that have ended in tragedy, as in the Great Leap Forward, the commercial satellite launch industry, and a variety of Belt and Road infrastructure projects [footnote, footnote, footnote]. China's recent track record in other high-tech sectors, including space and biotech, also suggests a much greater likelihood of catastrophic outcomes [footnote, footnote, footnote, footnote, footnote].
From "Further Considerations":
In addition to having to grapple with all the same safety challenges that other AI ecosystems must address, China's broader tech culture is prone to crisis due to its government's chronic mismanagement of disasters, censorship of information on accidents, and heavy-handed efforts to force technological breakthroughs. In AI, these dynamics are even more pronounced, buoyed by remarkably optimistic public perceptions of the technology and Beijing's gigantic strategic gamble on boosting its AI sector to international preeminence. And while both the United States and China must reckon with the safety challenges that emerge from interstate technology competitions, historically, nations that perceive themselves to be slightly behind competitors are willing to absorb the greatest risks to catch up in tech races [footnote]. Thus, even while the United States' AI edge over China may be a strategic advantage, Beijing's self-perceived disadvantage could nonetheless exacerbate the overall risks of an AI catastrophe.
Also, unless one understands the Chinese situation, one should avoid moves that risk escalating a race, like making loud and confident predictions that a race is the only way.
I think it's better for people to openly express their models that they see a race as the only option. I think it's the kind of thing that can then lead to arguments and discourse about whether that's true or not. I think a huge amount of race dynamics stem from people being worried that other people might or might not be intending to race, or are hiding their intention to race, and so I am generally strongly in favor of transparency.
Fair, I'm grumpy about Leopold's position but my above comment wasn't careful to target the real problems and doesn't give a good general rule here.
For those who are not deep China nerds but want a somewhat approachable lowdown, I can highly recommend Bill Bishop's newsletter Sinocism (enough free issues to be worthwhile) and his podcast Sharp China (the latter is a bit more approachable but requires a subscription to Stratechery).
I'm not a China expert so I won't make strong claims, but I generally agree that we should not treat China as an unknowable, evil adversary that has exactly the same imperial desires as "the west" or past non-Western regimes. I think it was irresponsible of Aschenbrenner to assume this without better research and understanding, since so much of his argument relies on China behaving in a particular way.
I share your concerns. I spent a decade in China, and I can't count the number of times I've seen people confidently share low-quality or inaccurate perspectives on China. I wish I had a better solution than "assign everyone to read these [NUMBER] different books."
Even best-selling books and articles by well-respected writers sometimes contain misleading and inaccurate narratives. But it is hard to parse them critically and to provide a counterargument without both the appropriate background[1] and a large number of hours dedicated to the specific effort.
I would be surprised if someone were able to do so without at least an undergraduate background in something like Chinese studies/sinology (or the equivalent, such as a large amount of self-study and independent exploration).
This reading list is an excellent place to start for getting a sense of China x AI (though it doesn't have that much about China's political objectives in general).
Note that to have a full analysis here, you should also understand a) how the US government sees China and why, and b) how China sees the US and why.
Very good point. I hypothesize that the opaque nature of Chinese policy-making (at the national level, setting aside lower-level government) is a key difficulty for anyone outside the upper levels of the Chinese government.